Maybe it's spite-driven development, but I'd love to hear about someone who, upon learning that LLMs are suggesting endpoints in their API that don't exist, implements them specifically to respond with a status code[0] of "421: Misdirected Request". Or, for something less snarky and more in keeping with the actual intent of the code, "501: Not Implemented". If the potentially-implied "but it might be, later" of 501 is untenable, I humbly propose this new code: "513: Your Coding Assistant Is Wrong"
[0]: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
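For the spite-driven among us, a minimal sketch of what that could look like, assuming an Express-style Node app (the route names here are hypothetical, and 513 is of course unregistered, so the reason phrase would have to be set by hand):

// Mount a catch-all after the real routes so only hallucinated paths hit it.
const express = require("express");
const app = express();

app.get("/api/real-endpoint", (req, res) => res.json({ ok: true }));

app.all("/api/*", (req, res) => {
  res.statusCode = 501; // or 421, or the proposed 513
  res.statusMessage = "Your Coding Assistant Is Wrong"; // HTTP/1.1 only
  res.json({ error: "This endpoint does not exist; an LLM made it up." });
});

app.listen(3000);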
> "513: Your Coding Assistant Is Wrong"
You made me chuckle. Well played. Great stuff :)
May I, simply, also suggest:
HTTP 407 Hallucination
Meaning: The server understands the request but believes it to be incongruous with reality.
Yes, it should definitely be in the 400 space of HTTP error codes. As 400 -> "You are incorrect" while 500 -> "We messed up".
Reminds me of signs like this: https://www.reddit.com/r/ScarySigns/comments/dhzov1/your_gps...
+1 for "513: Your Coding Assistant Is Wrong"
If we have 418, why not 513?
I humbly request, if you are going to do this, please, please...use the 418 response. It deserves wider adoption :-)
Bit of a pet peeve: 418 is clearly defined as "I am a teapot", not "whatever I want it to mean".
Please do not use it for anything other than its specified purpose, even if it is a joke.
If an LLM can hallucinate an endpoint, then the server is allowed to hallucinate being a teapot :)
Is one being a little precious about one being a teapot!?
Are you a teapot? If you were, maybe you'd be precious about people falsely claiming to be one too!
It's really more about how when I say "I am a teapot", I want people to think "Oh, he's a teapot!" and not "He might be a teapot, or he might be chiding me for misusing llms or he might be signaling that the monkey is out of bananas or [...]"
What would be an appropriate response code for "He might be a teapot, or he might be chiding me for misusing llms or he might be signaling that the monkey is out of bananas or [...]"?
Each of those should have a clear, unique response code. There should be no "maybe it's this, maybe it's that". A real-world example is login forms that tell you something like "Invalid e-mail or password".
Are you joking around with me or is my point just not as obvious as I believed it to be?
Edit: Not sure if that last bit sounds confrontational, please know that it's a genuine question.
I agree, though that means I really should stop returning my 404 - Server Unavailable response, but you'll never have my 500 - OK.
Core identity panic response?
* https://www.youtube.com/watch?v=kLQStcdhAGA
* https://www.goodiesruleok.com/articles.php?id=27
I think it's a good representation of a hallucination.
(on that note, I'm putting the kettle on :)
I like seeing what users are currently viewing the same page, but man the constant jostling of users coming and going made it hard to read the post.
I have this little bookmarklet in my bookmarks bar that I use constantly. It removes all fixed or sticky elements on the page and re-enables y-overflow if it was disabled:
javascript:(function(){document.querySelectorAll("body *").forEach(function(node){["fixed","sticky"].includes(getComputedStyle(node).position)&&node.parentNode.removeChild(node)});var htmlNode=document.documentElement;htmlNode.style.overflow="visible";htmlNode.style.overflowX="visible";htmlNode.style.overflowY="visible";var bodyNode=document.body;bodyNode.style.overflow="visible";bodyNode.style.overflowX="visible";bodyNode.style.overflowY="visible";document.querySelectorAll(".tp-modal-open").forEach(function(node){node.classList.remove("tp-modal-open");});}())
They have been called “dickbars” before [0].
> Kill-sticky, a bookmarklet to remove sticky elements and restore scrolling (174 comments)
— https://news.ycombinator.com/item?id=32998091
[0] https://daringfireball.net/linked/2017/06/27/mcdiarmid-stick...
Huge fan of killsticky and using it everywhere!
Same here. Right-click the page and choose Inspect (or Inspect Element). Click the Console tab, paste this code, and press Enter:

document.getElementById("presence")?.remove();
If you want to know why this is happening in your brain, it's likely a prey/predator identification thing. I would like to think that being so distracted by this just means I have excellent survival instincts :)

https://www.pnas.org/doi/10.1073/pnas.0703913104
https://en.wikipedia.org/wiki/Salience_%28neuroscience%29
uBlock Origin also lets you "zap" elements away. No console fiddling required.
You can also just right-click the element in the inspector and remove the node.
I thought my instructions would work universally, across all desktop browsers. I have also been known to overthink things.
Reminded me so much of a game called Chess Royale that I used to play, the avatars and the flags (screenshot [1]). It was really good too; and then Ubisoft being Ubisoft, they killed it even though the game had bots and could have been made single-player.
[1]: https://game-guide.fr/wp-content/uploads/2020/02/Might-and-M...
isn't this the page that used to have cursors everywhere in the background? I think the distracting design is some intentional running joke at this point
Try "dark mode" foe further trolling.
Great way to remove the jostling users...!
I tried uBlock's element zapper and ended up playing a furious game of whac-a-mole :D
Certainly not built to help those with ADHD in mind.
Same here. I don't have the time or patience to hack the page like the sibling comments suggest. There are more articles on the web than I will ever be able to consume in my lifetime, so I just close the tab and move on when the UX is aggressively bad.
It's hilarious but I literally can't click on their gh or patreon links because of it
natural selection at work
Instant tab close for me. So obnoxious.
The idea is kinda cute, but the implementation is aggressive.
I found Safari’s “hide distracting items” feature was necessary to finish the article.
I ended up using safari remove distracting content, which seemed to work nicely.
It's pretty fun seeing what countries people are from. If you hover, it tells you their city as well!
i literally opened the developer console to delete that element from the page. no surprise somebody who has no idea how to make a readable website is getting bullied by a chatbot.
Maybe if the background color on all pages was a heatmap of the current top line of the page, so that you could see where people were reading and how many were reading, it would be better?
Also, what if it played slow and brooding music when fewer people were reading and epic action adventure music when many people were reading it?
How about if the page mined bitcoin and the first person to enter a page made a percentage higher percentage of the next person’s bitcoin and less of the next one, like a multi-level marketing mining strategy?
That heatmap idea sounds really neat actually.
It's the bottom 20px or so, with a lot of content above it. Move the window down slightly.
The article literally starts with:
"Any person who has used a computer in the past ten years knows that doing meaningless tasks ..."
I guess this is demonstrating another variant of that. Admittedly, not one I'd seen before so +1 for novelty even if -20 for distraction.
I wonder if it's GDPR-compliant.
Why wouldn’t it? It’s anonymous and he probably doesn’t store the data.
That webmaster should ask himself: just because it is so easy to implement, does that mean you SHOULD implement it? I just immediately closed the page.
> We see the same at Instant: for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
Good. Think of all the dev hours that must’ve been wasted by humans who were confused by this too.
If tx.create didn't exist, why would any hours be wasted by this?
> for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
If a function can both insert and update, it should be called "put". Using "update" is misleading.
Upsert?
Let's just do all variations and have the LLM guess right the first time.
Implement all of them, with slightly different edge cases that result in glaringly obvious RCE when two or three of them are misused in place of each other.
(New startup pitch: Our agentic AI scans your access and error logs, and automatically vibe codes new API endpoints for failed API calls and pushes them to production within seconds, all without expensive developers or human intervention! Please form an orderly queue with your termsheets and Angel Investment cheques.)
Crupdate
put implies overwriting instead of updating.
upsert is your insert/update.
update already means overwriting.
semantically PUT is exactly upsert.
upsert is update + create if not exists, which is exactly PUT
any update without overwrite is "append" or "extend" (or something else)
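To make the distinction concrete, here's a tiny illustrative sketch (a hypothetical in-memory store, not Instant's actual API):

// create: insert only; fails if the key already exists (POST-like)
// update: modify only; fails if the key does not exist
// put: upsert - create if missing, overwrite if present (PUT-like)
const store = new Map();

function create(id, value) {
  if (store.has(id)) throw new Error(id + " already exists");
  store.set(id, value);
}

function update(id, value) {
  if (!store.has(id)) throw new Error(id + " not found");
  store.set(id, value);
}

function put(id, value) {
  store.set(id, value); // no existence check: this is the upsert
}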
Sorry, we will reach the heat death of the universe before I alter a single line of code simply because some LLM somewhere extruded incorrect synthetic text. That is so bonkers, I feel offended I even need to point out how bonkers it is.
I don't agree with the thesis of this post. It begs the question of whether we have to do what computers want.
> Millions of people create accounts, confirm emails, ... not because they particularly want to or even need to.
These were design choices made by humans, not computers.
You are so generous to even call this a "thesis" lol. I read that line and closed the page. haha
Only for lack of a better word (:
Premise?
There it is
Recently I had an interesting chat with my team around coding principles of the future.
I think the way people write code will no longer revolve around following SOLID principles, keeping cyclomatic complexity low, or whether your code is readable.
I think future coding principles will be about whether your agentic IDE can index the code well enough to become context-aware, and whether it fits into the context window. They will be about the model you use and the code it can generate. Maintainability will matter less, since code will become disposable as the rate of change increases dramatically. It will be about whether your vibed prompts match the code that's already been generated, to reach some accuracy or generate enough serendipity.
This feels like the beginning of a wonderful friendship between me and the LLMs. I work as a fractional CTO. One of the things that frustrates me is when my clients have various idiosyncratic naming conventions for things, e.g. there's a ”dev” and a ”prod” environment on AWS, but then a ”test” and ”production” environment in Expo. It just needlessly consumes brain cycles, especially when you're working with multiple clients. I guess it's the same for the LLMs, just on a massive scale.
In general I think it’s great whenever some weight / synapse strength bits can be reallocated from idiosyncratic API naming / behavior towards real semantics.
As the old joke goes: there are two hard problems in computer science - cache invalidation, naming things and off-by-one errors
Naming things doesn’t get easier just because you bring an LLM to do it based on an incoherent stochastic process.
Have you asked why those environments have not been renamed to align? As a former CTO I’d see it immediately as a signal of poor communication, poor standards adoption, or both. It’s this low hanging stuff that you can fix relatively easily where you’re actually using that work to make the culture better and make people care more.
Don’t outsource things you should care about a lot. Naming things is something you shouldn’t be hand waving away to a model.
Sure, I can spend my days doing that. But I appreciate the help (from the LLMs). And I think we actually have the same goal function: we want to make naming more compressible, less unexpected. You can call that culture (and it is) but you can also see it as pure information theory.
> Like it or not, we are already serving the machines.
The machines don’t give a shit, it’s the lawyers and bureaucrats you’re serving :)
Better or worse?
In postmodern societies, reality itself is structured by simulation—"codes, models, and signs are the organizing forms of a new social order where simulation rules".
The bureaucratic and legal apparatus you invoke are themselves caught up in this regime. Their procedures, paperwork, and legitimacy rely on referents—the "models" and "simulacra" of governance, law, and knowledge—that no longer point back to any fundamental, stable reality. What you serve, in effect, is the system of signification itself: simulation as reality, or—per Baudrillard—hyperreality, where "all distinctions between the real and the fictional, between a copy and the original, disappear".
"The spectacle is not a collection of images but a social relation among people, mediated by images." (Debord) Our social relations, governance, and even dissent become performances staged for the world's endless mediated feedback loop.
In this age, according to Heidegger, "everything becomes a 'picture', a 'set-up' for calculation, representation, and control." The machine is not just a device or a bureaucratic protocol—it is the mode of disclosure through which the world appears, and your sense of selfhood and agency are increasingly products (and objects) within this technological enframing.
Yada, yada, yada; the Matrix is real.
i.e., you don't know the half of it, compadre.
Is there a general name and framing we could apply to these “AI” that is equally as accurate but sheds all of the human biases associated with the terms?
Like… it’s just a really, really, really good autocomplete and sometimes I find thinking of it that way cleans up my whole mental model for its use.
I like something related to "interns" (artificial interns?) because it keeps the implication that you still always have to double-check, review and verify the work they did.
AInterns?
Does that actually clean up your mental model though? At some number of "reallys" that autocomplete starts to sound like intelligence. Like, what is "taking customer requirements and turning them into working code" if not just really really really really really really really good autocomplete with this mental model?
A lot of people are just doing the job of a really good autocomplete, not being asked to make many, if any, nontrivial decisions in their job.
Taking requirements and making working code is something some models are adequate at. It’s all the stuff around that, which I think holds the value, such as deciding things like when the requirements are wrong.
It's really difficult because many of the task types we use AI for are those that are linguistically tied to concepts of human actions and cognition. Most of our convenient language use implies that AI are thinking people.
Related: https://news.ycombinator.com/item?id=44491071
I had to use the reader mode to be able to read this article
Delete the div called `presence` and the page becomes nicer to read.
I ain't reading all this with the animated live user bar at the bottom.
Didn't realize there were so many Canadians on HN
If it were somehow a human that was consistently and confidently handing out made up programming advice about one's products, would companies still respond by just adding whatever imagined feature and writing a vaguely bemused blog post about it?
Maybe I can start pretending I’m an LLM and see if that gets me a pass when I make silly mistakes or hallucinate in entirely the wrong direction. As long as I look confident doing so.
We don’t talk about PMs here. (/s)
s/PMs/bad managers/
Isn’t this the whole shtick of Mr Martin, author of “Clean Code”?
If that human was giving advice to 90% of your customers you just might.
No, they would confidently assert they need the dumb thing you keep saying.
Yes, but please separate planning from reviewing, let alone real coding.
>Well, now there is a new way to serve our silicon overlords. LLMs started to have opinions on how your API should look
we have code review by LLM. There is no point or a way to argue. Just submit to the wishes of the overlord, resistance is futile.
For some reason this reminds me of the conversation I had with a guy who didn't like the lane keeping assist on his car.
He didn't like that it vibrated the steering wheel when he changed lanes without using the blinker.
I rented a car recently for a trip to Arizona that had lane keeping on by default. The highway I was traveling on was undergoing extensive repair. Not only did the car sound audible alarms with some frequency, since the highway had been rerouted in places using traffic cones, it also constantly tried to veer the car back into “the lane.” Since the lane was in some places just a hole, the consequences would have been bad. I ended up pulling over and fishing through the menus until I found a way to turn it all off.
It appears that there’s a very long tail of exceptional circumstances that must be handled with autonomous driving.
imho lane keep is a misfeature. I own one car where it is impossible to turn off without also turning off lane departure warning (arguably a somewhat useful feature).
Yep, I wouldn't like it either - changing lanes requires increased attention, and now, mid-maneuver, your steering wheel starts vibrating out of the blue.
That isn't an argument about whether to use the blinker; it's about the way the assist is implemented in this case. It doesn't directly help with the blinker - instead it punishes you, conditioning you through stress to instinctively use the blinker next time. Probably a net positive for the driver and for society, thus demonstrating again that forcing individual submission is an effective route to social harmony.
And the blinker is just a very mild use case. LLMs can already, in some cases - and increasingly will be able to - recognize when your behavior isn't legal and/or isn't very moral (say, it hears what you say and sees what you text on the phone and recognizes a drug purchase - pardon the primitive simplicity, it's a caricature for illustration purposes only - and we've already established a tendency of LLMs to rat you out to authorities), and thus an LLM could warn you about your actions, prevent them, or report you to the authorities, probably before you actually commit anything.
Literally just turn on your blinkers, like you should be doing anyway, and lane assist won't trigger. You are just outing yourself as a bad driver.
No, if you need to depart your lane in a hurry you cannot be expected to use the blinker first and the car should not fight you to do so
You are literally too lazy to move a single finger. You are a bad driver. Being "in a hurry" makes no sense either, turning on your blinkers should be ingrained in your muscle memory and take no additional effort.
No, far more often you're breaking traffic laws and increasing the chance of a collision than acting on the off chance of needing such a maneuver to avoid an accident. The societal cost of collisions is worth more than your freedoms. Or you should pay higher premiums for turning those safety features off.
> No you're most often breaking traffic laws and increasing a chance of a collision, than the off chance of needing to make such a maneuver to avoid an accident
For all you know I need to exit my lane in a hurry to avoid a collision. The car doesn't have the same context that the driver has. It only cares about staying between two painted lines, it might not have any idea about a truck coming straight at me going the other direction
> The societal cost of collisions is worth more than your freedoms
If a semi is in my lane barrelling toward me, I'm not obligated to just accept death merely because swerving to avoid it might accidentally endanger someone else.
The fact is that human drivers have a lot more information and awareness than a handful of sensors installed by idiot engineers that think the only bad thing that ever happens when driving is that someone changes lanes without signalling
It vibrates and tries to gently guide you. It will absolutely not overpower you if you are swerving in an emergency. You are talking hypothetical nonsense.
sorry, you're missing my point. I explicitly said that it isn't about the need to use blinkers.
And I say that makes no sense. If you use your blinkers, lane assist doesn't get in the way. So do what you should be doing anyway and use your blinkers.
Cue the people who start conversations with "I'm not racist, but..."
Have fun! I'm going to keep trying to build new things that are more excellent and original than the median training data.
Statistically you’re median and unoriginal, but keep trying.
To those who still believe that a bunch of data loaded into memory - where the data can be anything from a scientific article to a message between two lovers - triggered to produce output from input by a basic for loop, can represent anything like intelligence: I have some bad news for you. Damn, y'all, don't you know git(hub) & huggingface? Of course, the drawback of that is that you are not contributing to AGI. KEK!