> Automatic creation of an initial billboard: Upon starting the program, a predefined list of movies currently showing must be automatically generated, including their details (title, genre, duration, and showtimes).
I would say that these results might be relevant for a university CS program setting, but I would make the distinction between this and actually learning to program.
The context of this task is definitely a very contrived "Let's learn OOP" assignment that, for example, just tries to cram in class inheritance without really justifying its use in the software that's being built. It's a lazy kind of curriculum building that doesn't actually teach the students OOP.
In that sense it's no wonder that AI is not that helpful in the context of the assignment and learning.
I wouldn't chalk this up to "AI doesn't help you learn". I would put this in the category of, in an overly academic assignment with contrived goals, AI doesn't help the student accomplish the goals of the course. That conclusion could be equally applied to French literature 102.
And that's very different from whether or not an AI coding assistant can help you learn to code or not. (I'm actually not sure if it can, but I think this study doesn't say anything new).
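For what it's worth, the quoted spec needs nothing more than a plain data class and a factory function; here's a minimal sketch in Python (the movie data is placeholder, not from the assignment):

```python
from dataclasses import dataclass, field


@dataclass
class Movie:
    """One billboard entry: title, genre, duration, and showtimes, per the spec."""
    title: str
    genre: str
    duration_min: int
    showtimes: list[str] = field(default_factory=list)


def initial_billboard() -> list[Movie]:
    """Predefined list of movies, generated automatically at program start."""
    return [
        Movie("Example Film", "Drama", 120, ["18:00", "21:00"]),
        Movie("Another Film", "Comedy", 95, ["17:30"]),
    ]


if __name__ == "__main__":
    for m in initial_billboard():
        print(f"{m.title} ({m.genre}, {m.duration_min} min): {', '.join(m.showtimes)}")
```

Note that nothing here calls for inheritance at all, which rather supports the point that bolting a class hierarchy onto this task is pedagogically forced.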
That analogy sounds good on the surface but breaks down quickly. In sport/athletics it's your body doing the work, which is difficult to change without exercise (even with steroids). But for knowledge work, as long as you have access to the LLM machine, you can quickly pretend its output is your work. For students it can be very deceiving: they type their ideas on a device, one button fixes their spelling, another button "fixes" their wording. The line is very blurry; literally the divider in the UI between those two buttons.
This sounds like one of the "Ironies of Automation" that Lisanne Bainbridge pointed out decades ago.
The more prevalent automation is, the worse humans do when that automation is taken away. This will be true for learning now.
Ultimately the education system is stuck in a bind. Companies want AI-native workers, students want to work with AI, parents want their kids to be employable. Even if the system wants to ensure that students are taught how to learn and not just a specific curriculum, their stakeholders have to be on board.
I think we're shifting to a world where not only will elite status markers, like working at places such as McKinsey and Google, be more valuable, but interview processes will also be significantly lengthened, because companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation.
> companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation
Speak for yourself, but that's been how many companies have been operating for decades at this point.
Perhaps the credentials will change as these academic institutions become more like dinosaurs and other kinds of institutes will arise which give better markers of ability.
I don't know what AI-native folks will look like. To me, it looks like just replacing skilled labor with unskilled labor, as opposed to giving humans new skills.
AI to me will be valuable when it's helping humans learn and think more strategically, and when it's actually teaching humans, or helping humans spot contradictions and vet information for reliability. Fact checking is extremely labor-intensive work, after all.
Or otherwise if AI is so good, just replace humans.
Right now, the most legible use of AI is AI slop and misinformation.
I am not a student and I wonder often whether we fill in memorization for the idea of learning, as though it’s somehow more valuable to be able to write valid syntax from memory on a blank file than it is to know and practice the broader strokes of abstractions, operators, readability and core concepts which make up good software craftsmanship.
Sometimes I’m doing something in a new to me language, using an LLM to give me a head start on structure and to ask questions about conventions and syntax, and wondering to myself how much I’m missing had I started just by reading the first half of a book on the language. I think I probably would take a lot longer to do anything useful, but I’d probably also have a deeper understanding of what I know and don’t know. But then, I can just as easily discover those fundamental concepts to a language via the right prompt. So am I learning? Am I somehow fooling myself? How?
I'm not sure we really know how much of learning is memorization. As we memorize more stuff, we find patterns to compress it and memorize more efficiently.
But the magic is in the “find patterns” stuff as memorization is just data storage. If you think of the machine learning algorithms as assigning items a point in a space, then it does uncover neighbors, sometimes ones we might not expect, and that’s interesting for sure.
But I’m not sure it’s analogous to what people do when they uncover patterns.
You have to know the basics to build higher level knowledge and skills. What’s the use of high level book learning without the ability to operationalize it
Interesting to see quotes but note N=20 and the methodology doesn’t seem all that rigorous. I didn’t see anything that wasn’t exactly what you would expect to hear.
The sad reality is that this is probably not a solvable problem. AI will improve more rapidly than the education system can adapt. Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
My recommendation would be to encourage students to ask the LLM to quiz and tutor them, but ultimately I think most students will learn a lot less than say 5 years ago while the top 5% or so will learn a lot more
> AI will improve more rapidly than the education system can adapt
We’ll see a new class division scaffolded on the existing one around screens. (Schools in rich communities have no screens. Students turn in their phones and watches at the beginning of the day. Schools in poor ones have them everywhere, including everywhere at home.)
Every school has students work off their Chromebooks here in Colorado, regardless of how rich community is. This started with the Covid lockdowns and is pretty much standard now.
> most students will learn a lot less than say 5 years ago while the top 5% or so will learn a lot more
If we assume that AI will automate many/most programming jobs (which is highly debatable and I don't believe is true, but just for the sake of argument), isn't this a good outcome? If most parts of programming are automatable and only the really tricky parts need human programmers, wouldn't it be convenient if there are fewer human programmers but the ones that do exist are really skilled?
Well, as a college student planning to start a CS program, I can tell you that it actually sounds fine to me.
And I think that teachers can adapt. A few weeks ago, my English professor assigned us an essay where we had to ask ChatGPT a question and analyze its response and check its sources. I could imagine something similar in a programming course. "Ask ChatGPT to write code to this spec, then iterate on its output and fix its errors" would teach students some of the skills to use LLMs for coding.
This is probably useful and better than nothing, but the problem is that by the time you graduate it's unlikely that reading the output of the LLM will be useful.
Tons of devs (CS-grad devs, that is) have made their careers writing basic CRUD apps, iOS apps, or Python stuff that probably doesn't scratch the surface of all the CS coursework they did in their degree. It's just like everyone cramming for leetcode interviews but never using that stuff on the job. Being familiar with LLMs today will give you an advantage when they change tomorrow; you can adapt with the technology after college is over. Granted, there will likely be fewer devs needed, but the demand for the highly skilled ones could move upwards as the demand for this new AI tech increases.
Fair point. Perhaps I'm just too pessimistic or narrow-minded, but I don't believe that LLMs will progress to that level of capability any time soon. If you think that they will, your view makes a great deal of sense. Agree to disagree.
Right, but if AI gets to the point where it can replace developers (which includes a lot of fuzzy requirement interpretation, etc.), then it will replace most other jobs as well, and it wouldn't have helped to become a lawyer or doctor.
It's not cruel, it's stupid. Why would we organize our society in such a way that people would be drawn towards such paths in the first place, where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?
It may expose some frauds (I don't know how to say it in English... you know, people who fake the job so hard but are just clowns, don't produce anything, are basically worthless, and are just there to grab some money as long as the fraud is working).
You can still argue that LLMs won't replace human programmers without downplaying their capabilities. Modern SOTA LLMs can often produce genuinely impressive code. Full stop. I don't personally believe that LLMs are good enough to replace human developers, but claiming that LLMs are only capable of writing bad code is ridiculous and easily falsifiable.
An LLM is a tool, and it's just as mad as slide rules, calculators, and PCs (I've seen them all, although slide rules were being phased out in my youth).
Coding via prompt is simply a new form of coding.
Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages. The idea is that you will be more productive with say Python than you would with ASM or twiddling electrical switches that correspond to register inputs.
A purist might note that using Python is not sufficiently close to the bare metal to be really productive.
My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies. That will involve problem definition and formulation, and then an iterative effort to solve the problem. It will obviously involve how to spot and deal with hallucinations. They'll need to start discovering model quality for differing tasks, and all sorts of things that would have looked like sci-fi to me 10 years ago.
I think we are at, for LLMs, the "calculator on digital wrist watch" stage that we had in the mid '80s before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays too and I suspect that LLMs will settle into a similar role.
They will be great tools when used appropriately but they will not run the world or if they do, not for very long - bye!
> Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages.
High-level languages are deterministic and reliable, making it possible for developers to be confident that their high-level code is correct. LLMs are anything but deterministic and reliable.
You keep saying this, but have you used an LLM for coding before? You don't just vibe up some generated code (well, you can, but it will suck). You are asking it to iterate on code and multiple artifacts at the same time (like tests) in many steps, and you are providing feedback, getting feedback, providing clarifications, checking small chunks of work (because you didn't just have it do everything at once), etc. You just aren't executing "vibecode -d [do the thing]" like you would with a traditional one-shot code generator.
It isn’t deterministic like a real programmer isn’t deterministic, and that’s why iteration is necessary.
Not all code written by humans is deterministic and reliable. And a properly guard-railed LLM can check its output; you can even employ several, for higher consensus certainty. And we're just fuckin starting.
Unreliable code is incorrect thus undesirable. We limit the risk through review and understanding what we're doing which is not possible when delegating the code generation and review.
Checking output can be done by testing but test code in itself can be unreliable and testing in itself is no correctness guarantee.
The only way reliable code could be produced without a human touching it would be to use formal specifications: have the LLM write a formal proof alongside the code, and use some software to validate the proof. The formal specification would have to be written in some kind of programming language, and then we're somewhat back to square one (though maybe with a new higher-level language where you only define the specs formally rather than how to implement them).
But, we as humans still have a need to understand the outputs of AI. We can't delegate this understanding task to AI because then we wouldn't understand AI and thus we could not CONTROL what the AI is doing, optimize its behavior so it maximizes our benefit.
Therefore, I still see a need for highlevel and even higher level languages, but ones which are easy for humans to understand. AI can help of course but challenge is how can we unambiguously communicate with machines, and express our ideas concisely and understandably for both us and for the machines.
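The "check its output" idea upthread can be sketched as a randomized differential test: cheap evidence, not a correctness proof (`check_candidate` and `llm_sort` are illustrative names here, not any real API):

```python
import random


def check_candidate(candidate, reference, trials=1000):
    """Randomized differential test: compare a candidate function against a
    trusted reference on random inputs. Finding no mismatch is evidence,
    never a guarantee, which is exactly the limitation noted above."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if candidate(list(xs)) != reference(list(xs)):
            return False  # counterexample found
    return True


# Vetting a (hypothetically LLM-written) sort against the built-in:
llm_sort = lambda xs: sorted(xs)  # stand-in for generated code
print(check_candidate(llm_sort, sorted))  # prints True
```

A broken candidate (say, one that returns its input unsorted) would almost certainly be caught within a few trials, but a candidate that only fails on inputs outside the sampled distribution would slip through, which is the gap that formal methods are meant to close.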
> My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies
It's obviously not quite the same as programming, but my English professor assigned an essay a few weeks ago where we had to ask ChatGPT a question and then analyze its response, check its sources, and try to spot hallucinations. It was worth about 5% of our overall grade. I thought that it was a fascinating exercise in teaching responsible LLM use.
> Coding via prompt is simply a new form of coding.
No, it isn't. "Write me a parser for language X" is like pressing a button on a photocopier. The LLM steals content from open source creators.
Now the desperate capital starved VC companies can downvote this one too, but be aware that no one outside of this site believes the illusion any longer.
Maybe pointless, but I for one disagree with such rulings. Existing copyright law was formed as a construct between human producers and human consumers. I doubt that any human producers prior to a few years ago had any clue that their work would be fed into proprietary AI systems in order to build machines that generate huge quantities of more such works, and I think it fair to consider that they might have taken a different path had they known this.
To retroactively grant proprietary AI training rights on all copyrighted material on the basis that it's no different from humans learning is, I think, misguided.
there isn’t a company in the united states of 50 or more people which doesn’t have daily/weekly/monthly “ai” meetings (I’ve been attending dozens this year, as recently as tuesday). comments like yours exist only on HN where selected group of people love talking about bubbles and illusions while the rest of us are getting sh*t done at pace we could not fathom just year or so ago…
I am sure that "AI" is great for generating new meetings and for creating documentation how valuable those meetings are. Also it is great at generating justifications for projects and how it speeds up those projects.
I am sure that the 360° performance reviews have never looked better.
Your experience is contradicted by the usually business friendly Economist:
this is same as polling data when Trump is running - no one wants to admit they will vote for DJT much like no one wants to admit these days that “AI” is doing (lots of) their work :)
jokes aside I do trust economist’s heart is in the right place but misguided IMO. “the investors” (much like many here on HN) expected “AI” to be magic thing and are dealing with some disappointment that most of us are still employed. the next stage of “investor sentiment” just may be “shoot, not magic but productivity is through the roof”
the numbers I could provide you are just what I have been involved with and we are 2.5/3.0x points-wise from 16 months ago. my team decided to actually measure productivity gains so we kept the estimation process the same (i.e. if we AI-automated something we still estimate as if we have to do it manually). we are about to stop this on Jan 1
since you referenced a trusted Economist here’s much-more-we-know-what-we-are-talking-about MIT saying 12% of workforce is replaceable by AI (I think this is too low) - https://iceberg.mit.edu/
>AI will improve more rapidly than the education system can adapt.
Is entirely obvious, and:
> Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
is not obvious, but quite clear from how things are going. I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
> I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
This doesn’t seem like a good example. People who engineer systems that rely on integrals still know what an integral is. They might not be doing it manually, but it’s still part of the tower of knowledge that supports whatever work they are doing now. Say you are modeling some physical system in Matlab - you know what an integral is, how it connects with the higher level work that you’re doing, etc.
An example from programming: you know what process isolation is, and how memory is allocated, etc. You’re not explicitly working with that when you create a new python list that ends up on the heap, but it’s part of your tower of knowledge. If there’s a need, you can shake off the cobwebs and climb back down the tower a bit to figure something out.
So here’s my contention: LLMs make it optional to have the tower of knowledge that is required today. Some people seem to be very productive with agentic coding tools today - because they already have the tower. We are in a liminal state that allows for this, since we all came up in the before time, struggling to get things to compile, scratching our heads at core dumps, etc.
What happens when you no longer need to have a mental model of what you’re doing? The hard problems in comp sci and software engineering are no less hard after the advent of LLMs.
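The "python list that ends up on the heap" rung of that tower can be poked at directly from the standard library; a tiny sketch (exact byte counts are CPython-specific, so only the shape matters):

```python
import sys

xs = []
empty_size = sys.getsizeof(xs)  # size of the bare list object on the heap
xs.extend(range(1000))
grown_size = sys.getsizeof(xs)  # larger: CPython over-allocates the pointer array

# The list object holds pointers; the integers themselves are separate heap objects.
print(empty_size, grown_size)
```

Knowing roughly what those numbers mean, and why `grown_size` exceeds the pointer count times the pointer width, is exactly the kind of climb-down-the-tower knowledge the comment describes.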
Architects are not civil engineers and often don't know details of construction, project management, structural engineering etc. For a few years there will still be a role for a human "architect" but most of the specific low level stuff will be automated. Eventually there won't be an architect either but that may be 10 years away
I am actually hoping someone there studies such interventions the way they did with CMU's intelligent tutor — which, if I recall correctly, did not have net strong evidence in its favor as far as educational outcomes per the reports in WWC — given the fall in grade-level scores in math and reading since 2015/16 across multiple grades in middle school. It is vital to know if any of these things help kids succeed.
Crazy idea but: what if we built an AI pair programmer that actually pair programmed? That is, sometimes it was the driver and you navigated, pretty much as it is today, but sometimes you drive and it navigates.
I surmise that would help people learn to code better.
> Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work unaided, pointing to potential overreliance and gaps in foundational knowledge transfer.
This is basically what would be expected. However, n=20 is too small. This needs to be replicated with 10x the n.
> Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work un-aided, pointing to potential over reliance and gaps in foundational knowledge transfer.
As someone studying CS/ML this is dead on but I don't think the side-effects of this are discussed enough. Frankly, cheating has never been more incentivized and it's breaking the higher education system (at least that's my experience, things might be different at the top tier schools).
Just about every STEM class I've taken has had some kind of curve. Sometimes individual assignments are curved, sometimes the final grade, sometimes the curve isn't a curve but some sort of extra credit. Ideally it should be feasible to score 100% in a class but I think this actually takes a shocking amount of resources. In reality, professors have research or jobs to attend to and same with the students. Ideally there are sections and office hours and the professor is deeply conscious of giving out assignments that faithfully represent what students might be tested on. But often this isn't the case. The school can only afford two hours of TA time a week, the professors have obligations to research and work, the students have the same. And so historically the curve has been there to make up for the discrepancy between ideals and reality. It's there to make sure that great students get the grades that they deserve.
LLMs have turned the curve on its head.
When cheating was hard the curve was largely successful. The great students got great grades, the good students got good grades, those that were struggling usually managed a C+/B-, and those that were checked out or not putting in the time failed. The folks who cheated tended to be the struggling students but, because cheating wasn't that effective, maybe they went from a failing grade to just passing the class. A classic example is sneaking identities into a calculus test. Sure it helps if you don't know the identities but not knowing the identities is a great sign that you didn't practice enough. Without that practice they still tend to do poorly on the test.
But now cheating is easy and, I think it should change the way we look at grades. This semester, not one of my classes is curved because there is always someone who gets a 100%. Coincidentally, that person is never who you would expect. The students who attend every class, ask questions, go to office hours, and do their assignments without LLMs tend to score in B+/A- range on tests and quizzes. The folks who set the curve on those assignments tend to only show up for tests and quizzes and then sit in the far back corners when they do. Just about every test I take now, there's a mad competition for those back desks. Some classes people just dispense with the desk and take a chair to the back of the room.
Every one of the great students I know is murdering themselves to try to stay in the B+/A- range.
A common refrain when people talk about this is "cheaters only cheat themselves," and while I think that has historically been mostly true, I think it's bullshit now. Cheating is just too easy; the folks who care are losing the arms race. My most impressive peers are struggling to get past the first round of interviews. Meanwhile, the folks who don't show up to class and casually get perfect scores are also getting perfect scores on the online assessments. Almost all the competent people I know are getting squeezed out of the pipeline before they can compete on level footing.
We've created a system that massively incentivizes cheating and then invented the ultimate cheating tool. A 4.0 and a good score on an online assessment used to be a great signal that someone was competent. I think these next few years, until universities and hiring teams adapt to LLMs, we're going to start seeing perfect scores as a red flag.
If sitting in the back and cheating guarantees a good grade, that's a shit school, honestly. The school seems to know that people cheat, and how, but nothing is being done. Randomize seating, have a proctor stand in the back of the class, suspend/expel people who are caught cheating.
Ya it drives me crazy. I know someone who scored an 81% on a midterm where a few people scored in the high 90%. The professor told them, that among the people they didn’t suspect of cheating, they got the highest score. No curve, no prosecution of the cheaters.
Look, I agree with the sibling that the school needs to do something about cheating.
Individual instructors should do something about it, even.
The fact that there is no feedback loop causing instructors to do this is a real problem.
If there were ever a stats page showing results in your compilers course were uncorrelated with understanding of compilers on a proctored exit exam you bet people would change or be fired.
So in a way, I blame the poor response on the systematic factors.
FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.
They had opted out of the lectures, believing that they were inefficient or ineffective (or just poorly scheduled). Not everyone learns best in a lecture format. And not everyone is starting with the same level of knowledge of the topic.
Also:
> A 4.0 and a good score on an online assessment used to be a great signal that someone was competent
... this has never been true in my experience, as a student or hiring manager.
> FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.
For many classes this is still the case, and I lump these folks in with the great students. They still care about learning the material.
My experience has been that these students are super common in required undergrad classes and not at all common in the graduate-level electives that I’ve seen this happening in.
> ... this has never been true in my experience, as a student or hiring manager.
Good to know. What’ve you focused on when you’re hiring?
It is notable that so many publications try to salvage "AI" ("need for new pedagogical approaches that integrate AI effectively") rather than ditch "AI" completely.
The world worked perfectly before 2023, there is no need to outsource information retrieval or thinking.
Speaking as someone who communicates primarily through text (high likelihood of autism), the internet was the first chance a lot of us had to... speak... and be heard.
People have a need to be heard and understood. That’s half of what we are doing here posting.
Many (“not disabled”) people don’t fit in with their local peer group / society. The internet gave them a way to connect with other like-minded individuals.
Do I need to give examples? Let’s say: struggling with a rare disease.
There are far, far too many people who genuinely think disabled people should just disappear or die for it to be "safe" to be facetious about that without a clear sarcasm indicator.
Not much has changed, only people get diagnosed now. I think GP makes actually a good point that, with all its downsides, there are also net positive upsides to the internet.
There are upsides, but I don't know if it's a net upside. In this particular example, communicating by text: letter writing has existed for millennia and has arguably degraded considerably in this age of instant messaging.
Sorry, i know it's a bit "flavour of the month" but I mentioned it because I have a difficulty communicating face to face, which is common amongst a certain group of people, and I figured that mentioning it would help people understand my thinking.
Ah yes, the perfect world we had when governments could get away with anything because the press was not enough to showcase their atrocities. A beautiful, perfect world, with rubella and a global population living in extreme poverty at close to 50% (compared to today's 10%).
I see this mentality almost exclusively in americans and/or anglo people in general, it's incredible... if you're not that, I guess you're just too young or completely isolated from reality and I wish you the best in the ongoing western collapse.
(... I actually wish you're joking and I didn't catch it, though).
last sentence in your first paragraph has nothing to do with the current state of the internet and certainly not AI. first sentence? turns out governments can still get away with pretty much anything and propaganda is easier than ever.
It is so much harder now. There are people who are willfully ignorant now, almost proud to be; snooty about it. But it's impossible for governments and institutions to lie like they used to be able to. People are trading primary source documents online within the day.
It's why the popularity of long-ruling institutional parties is dropping everywhere, and why the measures to stop people from communicating and to monitor what they're saying are becoming more and more draconian and desperate.
beyond irony that you pose as some tech optimist while also mentioning “western collapse” and then speak about a uniquely American pessimism, a nation that is presently under the thumb of a government that does not respect the rule of law and actively manipulates capital/big business.
and you cannot simply hand-wave away the massive acceleration of the surveillance state and characterize it as a tool of the “institutional parties”
Calculators give wrong answers all the time. The differentiator from AI is that you can trust that a garbage answer from a calculator was caused by bad input, where bad AI answers aren't debuggable.
>Yes, but the machine itself is deterministic and logically sound.
Because arithmetic itself, by definition, is.
Human language is not. Which is why being able to talk to our computers in natural language (and have them understand us and talk back) now is nothing short of science fiction come true.
My point is, needing to use something with care doesn't prevent it from becoming wildly successful. LLMs are wrong way more often but are also more versatile than a calculator.
> LLMs are wrong way more often but are also more versatile than a calculator.
LLMs are wrong infinitely more than calculators, because calculators are never wrong (unless they're broken).
If you input "1 + 3" into your calculator and get "4", but you actually wanted to know the answer to "1 + 2", the calculator wasn't "wrong". It gave you the answer to the question you asked.
Now you might say "but that's what's happening with LLMs too! It gave you the wrong answer because you didn't ask the question right!" But an LLM isn't an all-seeing oracle. It can only interpolate between points in its training data. And if the correct answer isn't in its training data, then no amount of "using it with care" will produce the correct answer.
There's no such thing as a correct result to a search query. It certainly delivered exactly what was asked for, a grep of the web, sorted by number of incoming links.
They also don't use it at all anymore, they barely even care about your search query.
Google is successful, however, because they innovated once, and got enough money together as a result to buy Doubleclick. Combining their one innovation with the ad company they bought enabled them to buy other companies.
Did you learn how to do long division in school? I did, and I wasn't allowed to use a calculator on a test until I was in high school and basic math wasn't what was being taught or evaluated.
> Automatic creation of an initial billboard: Upon starting the program, a predefined list of movies currently showing must be automatically generated, including their details (title, genre, duration, and showtimes).
I would say that these results might be relevant for a university CS program setting, but I would make the distinction between this and actually learning to program.
The context of this task is definitely a very contrived "Let's learn OOP" assignment that, for example, just tries to cram in class inheritance without really justifying its use in the software that's being built. It's a lazy kind of curriculum building that doesn't actually teach the students about OOP.
In that sense it's no wonder that AI is not that helpful in the context of the assignment and learning.
I wouldn't chalk this up to "AI doesn't help you learn". I would put this in the category of, in an overly academic assignment with contrived goals, AI doesn't help the student accomplish the goals of the course. That conclusion could be equally applied to French literature 102.
And that's very different from whether an AI coding assistant can help you learn to code. (I'm actually not sure if it can, but I don't think this study says anything new.)
I have access to so many videos and even video games that teach me exactly how to perform as a world class athlete.
If I don’t exercise, will I ever become one?
That analogy sounds good on the surface, but breaks down quickly. In sport/athletics it's your body doing the work, which is difficult to change without exercise (even with steroids). But for knowledge work, as long as you have access to the LLM machine, you can quickly pretend it's your work. For students it can be very deceiving: they type their ideas on a device, one button fixes their spelling, another button "fixes" their wording. The line is very blurry, literally the divider in the UI between those two buttons.
This sounds like one of the "Ironies of Automation" that Lisanne Bainbridge pointed out decades ago.
The more prevalent automation is, the worse humans do when that automation is taken away. This will be true for learning now.
Ultimately the education system is stuck in a bind. Companies want AI-native workers, students want to work with AI, parents want their kids to be employable. Even if the system wants to ensure that students are taught how to learn and not just a specific curriculum, their stakeholders have to be on board.
I think we're shifting to a world where not only will elite status markers, like having worked at places like McKinsey and Google, be more valuable, but interview processes will also be significantly lengthened, because companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation.
> companies will be doing assessments themselves and not trusting credentials from an education system that's suffering from grade inflation and automation
Speak for yourself, but that's been how many companies have been operating for decades at this point.
Perhaps the credentials will change as these academic institutions become more like dinosaurs and other kinds of institutes will arise which give better markers of ability.
I don't know what AI-native folks will look like. To me, it looks like just replacing skilled labor with unskilled labor, as opposed to giving humans new skills.
AI to me will be valuable when it's helping humans learn and think more strategically, and when it's actually teaching humans or helping them spot contradictions and vet information for reliability. Fact checking is extremely labor-intensive work, after all.
Or otherwise if AI is so good, just replace humans.
Right now, the most legible use of AI is AI slop and misinformation.
I am not a student, and I often wonder whether we substitute memorization for the idea of learning, as though it's somehow more valuable to be able to write valid syntax from memory in a blank file than it is to know and practice the broader strokes of abstractions, operators, readability, and core concepts which make up good software craftsmanship.
Sometimes I’m doing something in a new-to-me language, using an LLM to give me a head start on structure and to ask questions about conventions and syntax, and wondering to myself how much I’m missing compared with having started by just reading the first half of a book on the language. I think I would probably take a lot longer to do anything useful, but I’d probably also have a deeper understanding of what I know and don’t know. But then, I can just as easily discover those fundamental concepts of a language via the right prompt. So am I learning? Am I somehow fooling myself? How?
I'm not sure we really know how much of learning is memorization. As we memorize more stuff, we find patterns to compress it and memorize more efficiently.
Sounds awfully like machine learning, doesn't it?
That’s an interesting idea.
But the magic is in the “find patterns” stuff as memorization is just data storage. If you think of the machine learning algorithms as assigning items a point in a space, then it does uncover neighbors, sometimes ones we might not expect, and that’s interesting for sure.
But I’m not sure it’s analogous to what people do when they uncover patterns.
Definitely interesting to ponder though.
No, because ML is compression via interpolation and does not imply decompression.
That really depends on the particular algorithm.
You need both. If you don’t memorize the syntax how can you possibly expect to effectively express your ideas for the “broader strokes”?
I frequently manage to do this writing bash scripts.
Because not everyone can truly be great at their craft, but everyone can memorize syntax.
Schools compromise their curriculum so that every student has a chance in the interests of fairness.
You have to know the basics to build higher level knowledge and skills. What’s the use of high level book learning without the ability to operationalize it
Which school teaches programming as memorization? My school, KTH in Sweden, did not. I feel you may be trying to solve an already solved problem.
Too often, testing regurgitation of concepts or process is treated as a large part of what learning is.
You should only use the word learning (without scare quotes) if it’s something you believe is learning.
One of the first precepts of ML is that “memorization is not learning”.
Learning is generalization, application to new circumstances.
Schooling might not have learning as a product, but that’s a different problem.
I'm referring to students learning in school, relative to student perceptions of their learning experience in using early stage coding assistants.
Interesting to see quotes but note N=20 and the methodology doesn’t seem all that rigorous. I didn’t see anything that wasn’t exactly what you would expect to hear.
In these studies, the qualitative data is often a lot more informative than the quantitative.
Understanding how concrete people navigate a domain and noting the common points between them can be illuminating.
Trying to calculate a generalisable statistical result from them… probably not so much.
The sad reality is that this is probably not a solvable problem. AI will improve more rapidly than the education system can adapt. Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
My recommendation would be to encourage students to ask the LLM to quiz and tutor them, but ultimately I think most students will learn a lot less than, say, 5 years ago, while the top 5% or so will learn a lot more.
> AI will improve more rapidly than the education system can adapt
We’ll see a new class division scaffolded on the existing one around screens. (Schools in rich communities have no screens. Students turn in their phones and watches at the beginning of the day. Schools in poor ones have them everywhere, including everywhere at home.)
Every school has students work off their Chromebooks here in Colorado, regardless of how rich the community is. This started with the Covid lockdowns and is pretty much standard now.
Even the Waldorf schools?
> it won’t make sense to learn how to code.
Sure. So we can keep paying money to your employer, Anthropic, right?
> most students will learn a lot less than say 5 years ago while the top 5% or so will learn a lot more
If we assume that AI will automate many/most programming jobs (which is highly debatable and I don't believe is true, but just for the sake of argument), isn't this a good outcome? If most parts of programming are automatable and only the really tricky parts need human programmers, wouldn't it be convenient if there are fewer human programmers but the ones that do exist are really skilled?
[flagged]
Well, as a college student planning to start a CS program, I can tell you that it actually sounds fine to me.
And I think that teachers can adapt. A few weeks ago, my English professor assigned us an essay where we had to ask ChatGPT a question and analyze its response and check its sources. I could imagine something similar in a programming course. "Ask ChatGPT to write code to this spec, then iterate on its output and fix its errors" would teach students some of the skills to use LLMs for coding.
This is probably useful and better than nothing, but the problem is that by the time you graduate it's unlikely that reading the output of the LLM will be useful.
Tons of devs (CS grad devs, that is) have made their careers writing basic CRUD apps, iOS apps, or Python stuff that probably doesn't scratch the surface of all the CS coursework they did in their degree. It's just like everyone cramming for leetcode interviews but never using that stuff on the job. Being familiar with LLMs today will give you an advantage when they change tomorrow; you can adapt with the technology after college is over. Granted, there will likely be fewer devs needed, but the demand for the highly skilled ones could move upwards as demand for this new AI tech increases.
Fair point. Perhaps I'm just too pessimistic or narrow-minded, but I don't believe that LLMs will progress to that level of capability any time soon. If you think that they will, your view makes a great deal of sense. Agree to disagree.
Right, but if AI gets to the point where it can replace developers (which includes a lot of fuzzy requirement interpretation etc.); then it will replace most other jobs as well, and it wouldn't have helped to become a lawyer or doctor.
> It's not good if you're a freshman currently starting a CS program
CS is the new MBA. A thoughtless path to a safe, secure job.
Cruelly, but necessarily, a society has to destroy those pathways. Otherwise, it becomes sclerotic.
Its not cruel, its stupid. Why would we organize our society in such a way that people would be drawn towards such paths in the first place, where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?
How about switching to English? There is a high demand for people who are very good at communication and writing nowadays.
The only task required from a dev is to think
AI does not think
Ergo, AI will not take "programming jobs"
It may expose some frauds (I don't know how to say it in English... you know, people who fake the job so hard but are just clowns, don't produce anything, are basically worthless, and are just there to grab money for as long as the fraud keeps working).
For what it’s worth: OpenAI seems to be encouraging this with their “Study” mode on some ChatGPT interfaces.
> Within a few years it won't make sense for people to learn how to write actual code
Why?
Because LLMs are capable of sometimes working snippets of usually completely unmaintainable code?
You can still argue that LLMs won't replace human programmers without downplaying their capabilities. Modern SOTA LLMs can often produce genuinely impressive code. Full stop. I don't personally believe that LLMs are good enough to replace human developers, but claiming that LLMs are only capable of writing bad code is ridiculous and easily falsifiable.
[flagged]
It would seem relevant to disclose you work at Anthropic.
Perhaps. They're still 100% right.
Not at all whatsoever
An LLM is a tool, just as slide rules, calculators, and PCs are (I've seen them all, although slide rules were being phased out in my youth).
Coding via prompt is simply a new form of coding.
Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages. The idea is that you will be more productive with say Python than you would with ASM or twiddling electrical switches that correspond to register inputs.
A purist might note that using Python is not sufficiently close to the bare metal to be really productive.
My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies. That will involve problem definition and formulation, and then an iterative effort to solve the problem. It will obviously involve how to spot and deal with hallucinations. They'll need to start discovering model quality for differing tasks, and all sorts of things that would have looked like sci-fi to me 10 years ago.
I think we are at, for LLMs, the "calculator on digital wrist watch" stage that we had in the mid '80s before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays too and I suspect that LLMs will settle into a similar role.
They will be great tools when used appropriately but they will not run the world or if they do, not for very long - bye!
> Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages.
High-level languages are deterministic and reliable, making it possible for developers to be confident that their high-level code is correct. LLMs are anything but deterministic and reliable.
You keep saying this, but have you used an LLM for coding before? You don't just vibe code up some generated code (well, you can, but it will suck). You are asking it to iterate on code and multiple artifacts at the same time (like tests) in many steps, and you are providing feedback, getting feedback, providing clarifications, checking small chunks of work (because you didn't just have it do everything at once), etc. You just aren't executing “vibecode -d [do the thing]” like you would with a traditional one-shot code generator.
It isn’t deterministic like a real programmer isn’t deterministic, and that’s why iteration is necessary.
Not all code written by humans is deterministic and reliable. And a properly guard-railed LLM can check its output; you can even employ several, for higher consensus certainty. And we're just fuckin starting.
Unreliable code is incorrect thus undesirable. We limit the risk through review and understanding what we're doing which is not possible when delegating the code generation and review.
Checking output can be done by testing but test code in itself can be unreliable and testing in itself is no correctness guarantee.
The only way reliable code could be produced without a human touching it would be using formal specifications: having the LLM write a formal proof at the same time as the code and using some software to validate the proof. The formal specification would have to be written in some kind of programming language, and then we're somewhat back to square one (but with maybe a new, higher-level language where you only define the specs formally rather than how you implement them).
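A minimal sketch of the shape of that idea, without a real proof assistant: the spec is an executable predicate, the candidate implementation stands in for LLM-generated code, and a checker exhaustively verifies the candidate over a finite domain. (Exhaustive checking is only a proof for the inputs covered; genuine formal verification would generalize beyond that. All names here, `spec_sorted`, `candidate_sort`, `verify`, are hypothetical.)

```python
from itertools import product

def spec_sorted(xs, ys):
    """Spec: ys is the sorted version of xs."""
    return sorted(xs) == ys

def candidate_sort(xs):
    # Stand-in for LLM-generated code under review.
    return sorted(xs)

def verify(impl, spec, domain, max_len=3):
    """Check impl against spec for every input built from `domain`
    up to length `max_len`. Returns the first counterexample, or None."""
    for n in range(max_len + 1):
        for xs in product(domain, repeat=n):
            if not spec(list(xs), impl(list(xs))):
                return list(xs)
    return None

# No counterexample over this finite domain.
print(verify(candidate_sort, spec_sorted, domain=[0, 1, 2]))  # None
```

Note the "square one" problem the comment points at: `spec_sorted` is itself code that a human has to get right, so the trusted base has moved rather than disappeared.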
But, we as humans still have a need to understand the outputs of AI. We can't delegate this understanding task to AI because then we wouldn't understand AI and thus we could not CONTROL what the AI is doing, optimize its behavior so it maximizes our benefit.
Therefore, I still see a need for highlevel and even higher level languages, but ones which are easy for humans to understand. AI can help of course but challenge is how can we unambiguously communicate with machines, and express our ideas concisely and understandably for both us and for the machines.
> My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies
It's obviously not quite the same as programming, but my English professor assigned an essay a few weeks ago where we had to ask ChatGPT a question and then analyze its response, check its sources, and try to spot hallucinations. It was worth about 5% of our overall grade. I thought that it was a fascinating exercise in teaching responsible LLM use.
> My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies
This reminds me of folks teaching their kids Java ten years ago.
You’re teaching a tool. Versus general tool use.
> Those calculators are largely still what you get nowadays too and I suspect that LLMs will settle into a similar role
If correct, the child will be competent in the new world. If not, they will have wasted time that could have gone toward developing general intelligence.
This doesn’t strike me as a good strategy for anything other than time-consuming babysitting.
Calculators don't make you forget math.
> Coding via prompt is simply a new form of coding.
No, it isn't. "Write me a parser for language X" is like pressing a button on a photocopier. The LLM steals content from open source creators.
Now the desperate capital starved VC companies can downvote this one too, but be aware that no one outside of this site believes the illusion any longer.
> The LLM steals content from open source creators.
Not according to court cases.
Courts ruled that machine learning is a transformative use, and just fine.
Pirating material to perform the training is still piracy, but open source licenses don't get that protection.
A summary of one such court case: https://www.jurist.org/news/2025/06/us-federal-judge-makes-l...
> "Write me a parser for language X" is like pressing a button on a photocopier.
What is the prompt "review this code" in your view? Because LLM-automated code review is a thing now.
Maybe pointless, but I for one disagree with such rulings. Existing copyright law was formed as a construct between human producers and human consumers. I doubt that any human producers prior to a few years ago had any clue that their work would be fed into proprietary AI systems in order to build machines that generate huge quantities of more such works, and I think it fair to consider that they might have taken a different path had they known this.
To retroactively grant propriety AI training rights on all copyrighted material on the basis that it's no different from humans learning is, I think, misguided.
there isn’t a company in the united states of 50 or more people which doesn’t have daily/weekly/monthly “ai” meetings (I’ve been attending dozens this year, as recently as tuesday). comments like yours exist only on HN where selected group of people love talking about bubbles and illusions while the rest of us are getting sh*t done at pace we could not fathom just year or so ago…
I am sure that "AI" is great for generating new meetings and for creating documentation how valuable those meetings are. Also it is great at generating justifications for projects and how it speeds up those projects.
I am sure that the 360° performance reviews have never looked better.
Your experience is contradicted by the usually business friendly Economist:
https://www.economist.com/finance-and-economics/2025/11/26/i...
this is same as polling data when Trump is running - no one wants to admit they will vote for DJT much like no one wants to admit these days that “AI” is doing (lots of) their work :)
jokes aside I do trust economist’s heart is in the right place but misguided IMO. “the investors” (much like many here on HN) expected “AI” to be magic thing and are dealing with some disappointment that most of us are still employed. the next stage of “investor sentiment” just may be “shoot, not magic but productivity is through the roof”
>productivity is through the roof
Where are the hard numbers? Number of games on Steam, new GitHub projects, new products released, GDP growth—anything.
the numbers I could provide you are just what I have been involved with and we are 2.5/3.0x points-wise from 16 months ago. my team decided to actually measure productivity gains so we kept the estimation process the same (i.e. if we AI-automated something we still estimate as if we have to do it manually). we are about to stop this on Jan 1
since you referenced a trusted Economist here’s much-more-we-know-what-we-are-talking-about MIT saying 12% of workforce is replaceable by AI (I think this is too low) - https://iceberg.mit.edu/
[flagged]
Bold claim by the Anthropic employee drinking their own Koolaid
I'm not an Anthropic employee and think that:
>AI will improve more rapidly than the education system can adapt.
Is entirely obvious, and:
> Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
is not obvious, but quite clear from how things are going. I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
> I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
This doesn’t seem like a good example. People who engineer systems that rely on integrals still know what an integral is. They might not be doing it manually, but it’s still part of the tower of knowledge that supports whatever work they are doing now. Say you are modeling some physical system in Matlab - you know what an integral is, how it connects with the higher level work that you’re doing, etc.
An example from programming: you know what process isolation is, and how memory is allocated, etc. You’re not explicitly working with that when you create a new python list that ends up on the heap, but it’s part of your tower of knowledge. If there’s a need, you can shake off the cobwebs and climb back down the tower a bit to figure something out.
So here’s my contention: LLMs make it optional to have the tower of knowledge that is required today. Some people seem to be very productive with agentic coding tools today - because they already have the tower. We are in a liminal state that allows for this, since we all came up in the before time, struggling to get things to compile, scratching our heads at core dumps, etc.
What happens when you no longer need to have a mental model of what you’re doing? The hard problems in comp sci and software engineering are no less hard after the advent of LLMs.
Here's one way to think about it
Architects are not civil engineers and often don't know details of construction, project management, structural engineering etc. For a few years there will still be a role for a human "architect" but most of the specific low level stuff will be automated. Eventually there won't be an architect either but that may be 10 years away
Optional tower of knowledge leads to a ballooning of incompetence and future problems
I wonder when will there be something more rigorous on what works clearing house https://ies.ed.gov/ncee/WWC/Search/Products?searchTerm=AI&&&...
I am actually hoping someone there studies such interventions the way they did with CMU's intelligent tutor, which, if I recall correctly, did not have net strong evidence in its favor as far as educational outcomes per the reports in WWC, given the fall in grade-level scores in math and reading since 2015/16 across multiple grades in middle school. It is vital to know if any of these things help kids succeed.
Crazy idea but: what if we built an AI pair programmer that actually pair programmed? That is, sometimes it was the driver and you navigated, pretty much as it is today, but sometimes you drive and it navigates.
I surmise that would help people learn to code better.
LLMs with their current context size suck at navigating in larger codebases. They are better left in the driver seat for now.
Knowledge not earned is not gained.
Well said. I’ve often been able to trick myself into thinking I’ve learned something, especially if it is somewhat intuitive.
But unless I practically apply what I learned, my retention is quite low.
> Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work unaided, pointing to potential overreliance and gaps in foundational knowledge transfer.
This is basically what would be expected. However n=20 is too small. This needs to be replicated with x10 the n.
> Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work unaided, pointing to potential overreliance and gaps in foundational knowledge transfer.
As someone studying CS/ML this is dead on but I don't think the side-effects of this are discussed enough. Frankly, cheating has never been more incentivized and it's breaking the higher education system (at least that's my experience, things might be different at the top tier schools).
Just about every STEM class I've taken has had some kind of curve. Sometimes individual assignments are curved, sometimes the final grade, sometimes the curve isn't a curve but some sort of extra credit. Ideally it should be feasible to score 100% in a class but I think this actually takes a shocking amount of resources. In reality, professors have research or jobs to attend to and same with the students. Ideally there are sections and office hours and the professor is deeply conscious of giving out assignments that faithfully represent what students might be tested on. But often this isn't the case. The school can only afford two hours of TA time a week, the professors have obligations to research and work, the students have the same. And so historically the curve has been there to make up for the discrepancy between ideals and reality. It's there to make sure that great students get the grades that they deserve.
LLMs have turned the curve on its head.
When cheating was hard the curve was largely successful. The great students got great grades, the good students got good grades, those that were struggling usually managed a C+/B-, and those that were checked out or not putting in the time failed. The folks who cheated tended to be the struggling students but, because cheating wasn't that effective, maybe they went from a failing grade to just passing the class. A classic example is sneaking identities into a calculus test. Sure it helps if you don't know the identities but not knowing the identities is a great sign that you didn't practice enough. Without that practice they still tend to do poorly on the test.
But now cheating is easy and, I think it should change the way we look at grades. This semester, not one of my classes is curved because there is always someone who gets a 100%. Coincidentally, that person is never who you would expect. The students who attend every class, ask questions, go to office hours, and do their assignments without LLMs tend to score in B+/A- range on tests and quizzes. The folks who set the curve on those assignments tend to only show up for tests and quizzes and then sit in the far back corners when they do. Just about every test I take now, there's a mad competition for those back desks. Some classes people just dispense with the desk and take a chair to the back of the room.
Every one of the great students I know is murdering themselves to try to stay in the B+/A- range.
A common refrain when people talk about this is "cheaters only cheat themselves" and while I think has historically been mostly true, I think it's bullshit now. Cheating is just too easy, the folks who care are losing the arms race. My most impressive peers are struggling to get past the first round of interviews. Meanwhile, the folks who don't show up to class and casually get perfect scores are also getting perfect scores on the online assessments. Almost all the competent people I know are getting squeezed out of the pipeline before they can compete on level-footing.
We've created a system that massively incentivizes cheating and then invented the ultimate cheating tool. A 4.0 and a good score on an online assessment used to be a great signal that someone was competent. I think these next few years, until universities and hiring teams adapt to LLMs, we're going to start seeing perfect scores as a red flag.
If sitting in the back and cheating guarantees a good grade, that's a shit school, honestly. The school seems to know that people cheat, and how, but nothing is being done. Randomize seating, have a proctor stand in the back of the class, suspend/expel people who are caught cheating.
Ya it drives me crazy. I know someone who scored an 81% on a midterm where a few people scored in the high 90%. The professor told them, that among the people they didn’t suspect of cheating, they got the highest score. No curve, no prosecution of the cheaters.
Look, I agree with the sibling that the school needs to do something about cheating.
Individual instructors should do something about it, even.
The fact that there is no feedback loop causing instructors to do this is a real problem.
If there were ever a stats page showing results in your compilers course were uncorrelated with understanding of compilers on a proctored exit exam you bet people would change or be fired.
So in a way, I blame the poor response on the systematic factors.
FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.
They had opted out of the lectures, believing that they were inefficient or ineffective (or just poorly scheduled). Not everyone learns best in a lecture format. And not everyone is starting with the same level of knowledge of the topic.
Also:
> A 4.0 and a good score on an online assessment used to be a great signal that someone was competent
... this has never been true in my experience, as a student or hiring manager.
> FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.
For many classes this is still the case, and I lump these folks in with the great students. They still care about learning the material.
My experience has been that these students are super common in required undergrad classes and not at all common in the graduate-level electives that I’ve seen this happening in.
> ... this has never been true in my experience, as a student or hiring manager.
Good to know. What’ve you focused on when you’re hiring?
GPA doesn't matter though. As long as you graduate and learn you come out ahead. You'll pass interviews which really matters.
It is notable that so many publications try to salvage "AI" ("need for new pedagogical approaches that integrate AI effectively") rather than ditch "AI" completely.
The world worked perfectly before 2023, there is no need to outsource information retrieval or thinking.
The world worked perfectly before 1982, there is no need for the internet.
(…I actually kind of think this. "Kind of" being the key word.)
God no.
Speaking as someone who communicates primarily through text (high likelihood of Autism), the internet was the first chance a lot of us had to... speak... and be heard.
Why couldn't you write letters instead of texting?
I didn't have your mailing address :P
That’s not a problem that generalizes to the broader population. We don’t really need the internet.
I disagree.
People have a need to be heard and understood. That’s half of what we are doing here posting.
Many (“not disabled”) people don’t fit in with their local peer group / society. The internet gave them a way to connect with other like-minded individuals.
Do I need to give examples? Let’s say: struggling with a rare disease.
But on the other end you have genocide triggered by Facebook. You can't speak if you're dead.
Perhaps some of that violence would have happened anyway. I don't know how it all nets out.
In other words "disabled people can suck it, because I don't care about their lives or experiences"?
We often fall short, but as a society we do try to make sure we're accommodating disabled people when we make big changes in our systems.
Just FTR, I read them as being facetious.
Poe's Law applies.
There are far, far too many people who genuinely think disabled people should just disappear or die for it to be "safe" to be facetious about that without a clear sarcasm indicator.
Screw the broader population I can speak now dammit!!!!!
Not much has changed, only people get diagnosed now. I think GP makes actually a good point that, with all its downsides, there are also net positive upsides to the internet.
There are upsides, but I don't know if it's a net upside. In this particular example, communicating by text: letter writing has existed for millennia and has arguably degraded considerably in this age of instant messaging.
Sorry, I know it's a bit "flavour of the month", but I mentioned it because I have difficulty communicating face to face, which is common amongst a certain group of people, and I figured that mentioning it would help people understand my thinking.
Ah yes, the perfect world we had when governments could get away with anything because the press wasn't enough to expose their atrocities. A beautiful, perfect world, with rubella and close to 50% of the global population living in extreme poverty (compared to today's 10%).
I see this mentality almost exclusively in Americans and/or Anglo people in general, it's incredible... if you're not that, I guess you're just too young or completely isolated from reality, and I wish you the best in the ongoing western collapse.
(... I actually wish you're joking and I didn't catch it, though).
The last sentence in your first paragraph has nothing to do with the current state of the internet, and certainly not with AI. The first sentence? Turns out governments can still get away with pretty much anything, and propaganda is easier than ever.
> propaganda is easier than ever.
It is so much harder now. There are people who are willfully ignorant now, almost proud to be; snooty about it. But it's impossible for governments and institutions to lie like they used to be able to. People are trading primary source documents online within the day.
It's why the popularity of long-ruling institutional parties is dropping everywhere, and why the measures to stop people from communicating and to monitor what they're saying are becoming more and more draconian and desperate.
It's beyond irony that you pose as some tech optimist while also mentioning “western collapse” and then speak about a uniquely American pessimism, in a nation that is presently under the thumb of a government that does not respect the rule of law and actively manipulates capital/big business.
And you cannot simply hand-wave away the massive acceleration of the surveillance state and characterize it as a tool of the “institutional parties”.
Where is this perfect world you’re speaking of? Surely not the one we’re living in…
Why stop there? We could do long division before the calculator and hand write before the typewriter.
I do wonder if the calculator would have been as successful if it regularly delivered wrong answers.
Calculators give wrong answers all the time. The differentiator from AI is that you can trust that a garbage answer from a calculator was caused by bad input, whereas bad AI answers aren't debuggable.
It does if you’re a clumsy operator, and those are not rare.
Yes, but the machine itself is deterministic and logically sound.
>Yes, but the machine itself is deterministic and logically sound.
Because arithmetic itself, by definition, is.
Human language is not. Which is why being able to talk to our computers in natural language (and have them understand us and talk back) now is nothing short of science fiction come true.
Even worse is if it's in the other room and your fingers can't reach the keys. It delivers no answers at all!
My point is, needing to use something with care doesn't prevent it from becoming wildly successful. LLMs are wrong way more often but are also more versatile than a calculator.
> LLMs are wrong way more often but are also more versatile than a calculator.
LLMs are wrong infinitely more than calculators, because calculators are never wrong (unless they're broken).
If you input "1 + 3" into your calculator and get "4", but you actually wanted to know the answer to "1 + 2", the calculator wasn't "wrong". It gave you the answer to the question you asked.
Now you might say "but that's what's happening with LLMs too! It gave you the wrong answer because you didn't ask the question right!" But an LLM isn't an all-seeing oracle. It can only interpolate between points in its training data. And if the correct answer isn't in its training data, then no amount of "using it with care" will produce the correct answer.
Google is successful, and its PageRank algorithm also doesn't deliver correct results all the time.
There's no such thing as a correct result to a search query. It certainly delivered exactly what was asked for, a grep of the web, sorted by number of incoming links.
They also don't use it at all anymore; they barely even care about your search query.
Google is successful, however, because they innovated once, and got enough money together as a result to buy Doubleclick. Combining their one innovation with the ad company they bought enabled them to buy other companies.
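For anyone unfamiliar with the ranking idea being referenced: "sorted by number of incoming links" undersells it slightly, since PageRank weights each incoming link by the rank of the page it comes from. A minimal power-iteration sketch of the classic algorithm (the tiny three-page graph and the 0.85 damping factor are just illustrative assumptions, not anything like Google's actual system):

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank power iteration.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page gets a small baseline share...
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # ...plus an equal split of each linking page's rank.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# "b" is linked to by both "a" and "c", so it ends up ranked highest.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
print(max(ranks, key=ranks.get))  # prints "b"
```

The point of the recursion is that a link from a highly ranked page counts for more than a link from an obscure one, which is what made it harder to game than a raw in-link count.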
My typewriter delivered wrong answers.
Did you learn how to do long division in school? I did, and I wasn't allowed to use calculators on a test until I was in high school and basic math wasn't what was being taught or evaluated.
I also learned long division in school.
I was allowed to use a calculator from middle school onward, when we were being tested on algebra and beyond and not arithmetic.
Some schools have ridiculous policies. Some don’t. Ymmv. I don’t think that’s changed from when I was in school.
I'm a lot more productive than I was in 2023 and I've been coding full time since 2012