My main question: in 90% of cases these are installers. How are you actually verifying the software that you install? In some cases it is signed and verified, but in many cases it just comes down from the same HTTPS server with no additional verification. So are you then diffing the code (which may be compiled) as well?
I'm not saying that running random installers from the internet is a great pattern. Something like installing from your distribution can have better verification mechanisms. But this seems to add very little confidence.
The other thing is... an installer generally only runs once on a single machine, so I'm not sure how useful it is to "show the changes since last run".
This. OP, tools often install their own update mechanisms (e.g. `uv self update`), so this may not be as useful as you think. As an alternative (albeit one that adds potential hosting costs), consider running a small DB - can be as simple as SQLite - with hashes of scripts. You also need to handle legitimate updates from the script's author[s], though. If you can extract versioning from the URL, e.g. GitHub releases, you could include that in the schema.
I made a gist demonstrating a SQLite schema and using it via direct user input: https://gist.github.com/stephanGarland/5ee5281dedc3abcbc57fa...
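To make that concrete, a minimal sketch of the idea with the sqlite3 CLI could look like this (the schema, table names, and file paths are invented for illustration and are not taken from the gist; the URL interpolation is left naive on purpose):

    #!/usr/bin/env bash
    # Illustrative only: track script hashes in SQLite so a change forces re-review.
    set -euo pipefail

    db="$HOME/.script-hashes.db"
    url="$1"
    tmp="$(mktemp)"
    curl -fsSL "$url" -o "$tmp"
    hash="$(sha256sum "$tmp" | cut -d' ' -f1)"   # use 'shasum -a 256' on macOS

    sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS scripts (url TEXT PRIMARY KEY, sha256 TEXT);'
    known="$(sqlite3 "$db" "SELECT sha256 FROM scripts WHERE url='$url';")"

    if [ -z "$known" ]; then
      echo "First time seeing $url: review $tmp, then record its hash with an INSERT."
    elif [ "$known" != "$hash" ]; then
      echo "Hash changed for $url: the script was updated (or tampered with). Re-review $tmp."
    else
      echo "Hash matches the previously reviewed version."
    fi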
You're absolutely right—vet's scope is focused on securing the installer script itself, not the binary it downloads.
The goal is to prevent the installer from being maliciously modified to, for example, skip its own checksum verification or download a binary from a different, malicious URL.
It's one strong link in the chain, but you're right that it's not the whole chain.
> How are you actually verifying the software that you install?
By installing it through a well-audited, cryptographically signed, and community-maintained package list with a solid security history. What?
The bug here isn't that "it's hard to make downloading scripts secure!", it's that people on macs (and a few other communities, but really it's just OS X culture at fault here) insist on developing software with outrageous hackery like this and refuse to demand better from their platform.
Fix that. Don't pretend that linting (!!) shell scripts pulled off the open internet is going to do anything.
Why do you think it's OS X culture and not Rust culture? Popular Rust tools like starship, atuin, and cargo itself ask you to curl an installer. They certainly didn't invent this, but they did re-popularize it.
Most non-Apple Rust users get it via a Linux distro's package manager, or by building from source. And after installation cargo is, if not Debian-quality, reasonably secure against attack (sub-Linux, but better than npm, basically).
While there are surely exceptions, that nonsense about "just run this unauthenticated script URL" is something unique to the Mac experience. And it's horrifying.
By the way, the excellent discussion here got me thinking about the next logical step for vet: supporting private environments.
Running public scripts is great, but what about running deployment scripts from a private GitHub repo or setup scripts from an internal server?
Based on this, I've opened a new feature request to add authentication support to vet, with a roadmap that includes .netrc support, a VET_TOKEN environment variable, and a future goal of integrating with secret managers like HashiCorp Vault by reading tokens from stdin.
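For context, the .netrc piece mostly leans on behaviour curl already has; something like this (the host and token values are placeholders, and VET_TOKEN is just the variable name proposed in the issue):

    # ~/.netrc -- standard credentials file that curl already understands:
    #   machine internal.example.com
    #   login deploy
    #   password s3cret

    # Fetch a script from a private server using those stored credentials:
    curl --netrc -fsSL -o setup.sh https://internal.example.com/setup.sh

    # Or pass a token explicitly, e.g. via an environment variable:
    curl -fsSL -H "Authorization: Bearer $VET_TOKEN" -o setup.sh \
      https://internal.example.com/setup.sh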
If you're interested in that direction, I'd love to get your thoughts on the feature request over on GitHub:
https://github.com/vet-run/vet/issues/4
Thanks again for all the great feedback!
This is an amazing solution. I've wondered about this often (looking at you, `uv`), but in a lot of cases I cave, given that everyone else trusts the code maintainers.
Oh the irony:
I guess you stopped reading there and missed that part:
> Yes, we see the irony! We encourage you to inspect our installer first. That's the whole point of vet. You can read the installer's source code install.sh
It is very trivial to serve different code to someone inspecting the code than when they pipe it to bash. In the very rare case someone inspected it they’d likely do so in a way that was vulnerable to this.
That’s an excellent point, and thank you for raising it. You are 100% correct—relying on users to inspect a URL that could be spoofed with User-Agent trickery is a flaw in the original recommendation. It's a classic threat model that I should have addressed from the start.
Thanks to your feedback, I've just merged a PR to change the recommended installation method in the documentation to the only truly safe one: a two-step "download, then execute the local file" process. This ensures the code a user inspects is the exact same code they run.
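Concretely, that two-step flow looks something like this (the path below is the scripts/install.sh location mentioned elsewhere in the thread; the README has the canonical URL):

    # 1. Download to a local file instead of piping to a shell
    curl -fsSL -o install.sh https://raw.githubusercontent.com/vet-run/vet/main/scripts/install.sh

    # 2. Read the exact bytes you just downloaded
    less install.sh

    # 3. Run that same local file once you're satisfied
    bash install.sh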
I sincerely appreciate you taking the time to share your expertise and hold the project to a higher standard. This is what makes a community great.
Can you show how it works on the page or readme as a video?
Does it open a pager or an editor? How does it show the shellcheck issues?
You're right, the README explains what vet does, but it doesn't do a great job of showing how it feels to use it. I'll definitely create a demo GIF for the page.
To answer your questions directly in the meantime:
- Pager or Editor? It opens a pager (less by default, but it will automatically use the much nicer bat if you have it installed for syntax highlighting). It doesn't open an editor to prevent any accidental modifications.
- ShellCheck Issues: If shellcheck finds issues, it prints its standard, colorful output directly to your terminal before you review the script. It then pauses and asks you if you want to proceed with the review despite the warnings, like this:
==> Running ShellCheck analysis...
In /tmp/tmp.XXXXXX line 7:
echo "Processing file: $filename"
^-- SC2086: Double quote to prevent globbing and word splitting.
==> WARNING: ShellCheck found potential issues.
[?] Continue with review despite issues? [y/N]
Thanks again for the excellent idea!
What if someone peppers their malicious script with `# shellcheck disable=` pragmas?
Great point.
A malicious actor could definitely do that. That’s why vet’s model doesn’t rely solely on ShellCheck—it’s just one layer. The key layer here is the diff. Even if the linter is silenced, the diff reveals any new suspicious # shellcheck disable= lines added to trusted scripts. That change alone is a red flag.
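A rough sketch of what that diff layer amounts to (not vet's actual implementation, just the general shape: cache the last copy a human approved and diff any fresh download against it):

    #!/usr/bin/env bash
    # Sketch: compare a freshly downloaded script against the last approved copy.
    # The cache location and naming are invented for this example.
    set -euo pipefail

    url="$1"
    cache_dir="$HOME/.cache/script-reviews"
    mkdir -p "$cache_dir"
    cached="$cache_dir/$(echo -n "$url" | sha256sum | cut -d' ' -f1)"

    fresh="$(mktemp)"
    curl -fsSL "$url" -o "$fresh"

    if [ -f "$cached" ]; then
        # Any change, including a new '# shellcheck disable=' line, shows up here.
        diff -u "$cached" "$fresh" || echo "Script changed since last approval; review the diff above."
    else
        echo "No previously approved copy; review the whole script."
    fi

    # After a human approves it:
    # cp "$fresh" "$cached"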
Love the idea!
The two biggest hurdles for a security tool like this are LLM non-determinism and the major privacy risk of sending code to a third-party API.
This is exactly why vet relies on ShellCheck—it's deterministic, rules-based, and runs completely offline. It will always give the same, trustworthy output for the same input.
But your vision of smarter analysis is absolutely the right direction to be thinking. I'm excited for a future where fast, local AI models can make that a reality for vet. Great food for thought!
This is a great idea!
One extra feature could be passing the contents of the shell script to an LLM and asking it to surface any security concerns.
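If someone wanted to try that without shipping code to a third-party API, a local model runner would do; the sketch below assumes Ollama with a model already pulled, which is purely illustrative and not anything vet does:

    # Hypothetical: ask a locally running model to review a downloaded script.
    # Assumes `ollama` is installed and a model (here "llama3") has been pulled.
    curl -fsSL -o install.sh https://example.com/install.sh
    ollama run llama3 "You are reviewing a shell installer for risky behaviour \
    (exfiltration, curl-to-shell of further payloads, sudo misuse). \
    Script follows: $(cat install.sh)"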
Hi HN, I'm the creator of `vet`. I've always been a bit nervous about the `curl | bash` pattern, even for trusted projects. It feels like there's a missing safety step. I wanted a tool that would show me a diff if a script changed, run it through `shellcheck`, and ask for my explicit OK before executing. That's why I built `vet`.
The install process itself uses this philosophy - I encourage you to check the installer script before running it!
I'd love to hear your feedback.
The repo is at https://github.com/vet-run/vet
I'm glad to see that I'm not the only person worried about this. It's a pretty glaring bit of attack surface if you ask me. I chuckled when I saw you used nvm as an example in your readme. I've pestered nvm about this sort of thing in the past (https://github.com/nvm-sh/nvm/issues/3349).
I'm a little uncertain about your threat model though. If you've got an SSL-tampering adversary that can serve you a malicious script when you expected the original, don't you think they'd also be sophisticated enough to instead cause the authentic script to subsequently download a malicious payload?
I know that nobody wants to deal with the headaches associated with keeping track of cryptographic hashes for everything you receive over a network (nix is, among other things, a tool for doing this). But I'm afraid it's the only way to actually solve this problem:
1. get remote inputs, check against hashes that were committed to source control
2. make a sandbox that doesn't have internet access
3. do the compute in that sandbox (to ensure it doesn't phone home for a payload which you haven't verified the hash of)
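To make those three steps concrete, here is one shape it can take with plain shell tools (the URL and pinned hash are placeholders; unshare is Linux-only, and a container started with networking disabled does the same job):

    # 1. Fetch the input and check it against a hash committed to source control.
    pinned_sha256="0000000000000000000000000000000000000000000000000000000000000000"  # placeholder
    curl -fsSL -o install.sh https://example.com/install.sh
    echo "$pinned_sha256  install.sh" | sha256sum -c -

    # 2 & 3. Run it with no network access, so it cannot fetch an unverified payload.
    #        (-r maps you to root inside a new user namespace, -n gives it an empty
    #         network namespace; alternatively: docker run --network none ...)
    unshare -rn bash install.sh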
Vet only downloads once, so what do you mean by subsequent download?
Also hashing on inputs is brittle and will break anytime the developer pushes an update. You want to trust their certificate instead.
After looking closer, I think I misunderstood. I thought that after a human reviewed the script, vet would cache something which indicates that that script is trusted--that way it can run in CI without a human involved, and vet is checking that it is indeed the thing the human trusted. Looks like not.
Re: hashes, the whole point is that I want it to break anytime the developer pushes an update, that's my cue to review the update and decide once more whether I want it in my project. The lack of awareness re: what that curl is going to provide is the whole reason people think that `curl | bash` is insecure.
Otherwise there's no commit which indicates the moment we started depending on the new version--nothing to find if we're later driving `git bisect` to figure out when something went wrong. It could supply a malicious payload once, revert back to normal behavior, and you'd have no way to notice.
Also, you end up with developers who have different versions installed based on when they ran the command, there's no association with the codebase. That's a different kind of headache.
> I wanted a tool that would show me a diff if a script changed, run it through `shellcheck`
Why? What exactly do you think "shellcheck" does? When do you think you're diffing and what do you think you are diffing with?
> and ask for my explicit OK before executing.
But to what end? You're not better informed by what the script does with this strategy.
A small shell script like yours I can read in a minute and decide it does nothing for me, but large installers can be hard to decipher since they are balancing bandwidth costs with compatibility, and a lot of legitimate techniques can make this hard to follow without care and imagination.
> The install process itself uses this philosophy - I encourage you to check the installer script before running it!
I don't understand what philosophy you're talking about.
I think you're doing the exact same thing that malicious attackers do, you're just doing it worse:
I mean your script knows about wget, but your server doesn't. Sad. I also think you should be telling people to pull "https://github.com/vet-run/vet/blob/main/scripts/install.sh" instead of trying to be cute, but that's just me.

> I'd love to hear your feedback.
You're getting it: I think your program sucks, but I also like the idea of trying to do something, and I understand you just don't have any idea what to do or what the problem actually is.
So let me teach you a little bash:
    check () {
      echo "> $BASH_COMMAND" >&2
      echo -n "Allow? (yes/no) " >&2
      select c in yes no
      do
        if [ "$c" = "yes" ]
        then break
        elif [ "$c" = "no" ]
        then return 1
        fi
      done
    }
    shopt -s extdebug
    trap check DEBUG

This little scriptlet will wait until bash tries to run something, and ask before proceeding. Simples. Put this in front of an installer (or something else messy) and get step-by-step confirmation of what's going on. Something like this is in the bash manual someplace, or was once upon a time.

In a large script this might annoy people, so if it were me, I would have a whitelist of commands that I think are safe, or maybe a "remember" option that updates that script. I might also have a blacklist for things like sudo.
While I'm on the subject of sudo, a nasty trick bad guys use is to get you to run sudo on something innocuous and then rely on the cached credentials to run a sneaky (silent) sudo in the same session. Running sudo -k before interacting with an unknown program can help tremendously with this.
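In practice that is just one extra command before handing control to anything you haven't read:

    sudo -k                  # invalidate any cached sudo credentials for this session
    ./unknown-installer.sh   # placeholder name; a sneaky embedded 'sudo' now has to prompt instead of riding the cache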
Wow, thank you for taking the time to write such a detailed and in-depth critique.
First, let me address the bugs you found, because you were 100% right. The wget user-agent issue revealed a significant and regrettable flaw in the server-side logic. Thanks to your report, a fix has already been merged and deployed.
The installer also had a conceptual flaw in its security recommendation, as you and others pointed out. The documentation has been updated to recommend a two-step "download, then execute" process and now includes a direct link to the GitHub release asset for maximum transparency—no more "cute" domain magic as the primary method.
Your trap DEBUG suggestion is a really powerful technique, and it highlights a core philosophical difference in how to approach this problem:
Your approach is an "In-Flight Monitor"—it steps through an executing script and asks for permission at each step. It's fantastic for deep, real-time analysis.
vet's approach is a "Pre-Flight Check"—its goal is to let a human review and approve a complete, static snapshot of a script before a single line of it ever executes.
I chose the "pre-flight" path because diffing and shellcheck are central to the idea. They answer the questions: "I trusted this script last month, but has it changed at all since then?" and "Does this static code contain any obvious red flags?"
The trap DEBUG method is powerful, but it can't answer that "what's changed?" question upfront and runs the risk of "prompt fatigue" on large installers, where a user might just start hitting 'y' to get through it.
You've given me a lot to think about, especially on how to better articulate this philosophy. I sincerely appreciate you taking the time to teach and challenge the project. This is the kind of tough, expert feedback that makes open source better, and you've already had a direct, positive impact on it.
The idea is great. Vet will work for people who can read the code displayed. Right now my skills are not high enough. Unsure whether I sit with the majority or the minority of future users.
My 2 cents
I appreciate you finding a problem and trying to build a solution, but I think your solution will not work very well. Shellcheck is not a virus or vulnerability scanner, it’s not designed for the thing you are using it for.
You are absolutely right, and that's a crucial distinction to make. ShellCheck is a linter, not a security scanner.
Its role in vet isn't to find malware, but to act as an automated code quality check. A script full of shellcheck warnings is a red flag, which helps inform the user's final decision to trust it or not. It's one of several signals that vet provides.
Thanks for the important clarification!
This is somewhat undermined by the fact that it doesn't happen automatically when a user does curl | bash. Windows is able to automatically scan files when the user goes to install them.
This looks great and all, but trying to read and digest a multi-hundred-line bash script seems unrealistic. Full send, pipe into bash.
And this is why this exploit mechanism works so well.
Most installers are doing the same basic patterns: checking for dependencies, checking the distro, etc. It’s not hard to figure these out and spot them in different scripts.
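Those recurring patterns are things like the following (a generic sketch of what most installers open with, not any particular project's script):

    # Typical boilerplate near the top of most install.sh scripts:

    # detect the OS / distro
    os="$(uname -s)"
    if [ -r /etc/os-release ]; then . /etc/os-release; distro="$ID"; fi

    # detect the CPU architecture to pick the right binary
    arch="$(uname -m)"

    # check for required tools
    for cmd in curl tar; do
        command -v "$cmd" >/dev/null 2>&1 || { echo "need $cmd" >&2; exit 1; }
    done

    # pick an install prefix, escalating only if necessary
    prefix="${PREFIX:-$HOME/.local/bin}"
    mkdir -p "$prefix"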
Does it work really well? Any major examples?
OK, fair-ish point. You won’t find major examples, because it’s not a CVE if you willingly download and execute malicious code. I hope you can understand the theoretical (but very real) risks of doing this, though.
For me personally, I try to use a distro/platform specific package if it exists, since hopefully that means at least one human has read through some of the code, and probably installed it. If that’s not available, I do download the script to review before executing it (and not re-downloading it to pipe to a shell). I’m sure I wouldn’t catch everything, but I would probably catch odd embedded curl calls and the like.
As far as I know there are zero examples, CVE or not. I have asked several times over the years, and thus far no one has been able to provide an example. It just doesn't happen, because it just doesn't make much sense.
As I already said years ago[1], if you want to hide some nefarious stuff then you'd do it in something like the autoconf soup. The install.sh is just too obvious a place, and hiding in the build machinery is exactly what happened in the real-world xz attack. I can guarantee you very few, if any, packagers are auditing all of that. And even if they did: it's just so easy to miss.
[1]: https://www.arp242.net/curl-to-sh.html