"uv run" seriously needs a sandbox. Running arbitrary code from arbitrary dependencies with 0 version locking provides no guarantees on what you are actually running.
Implementing sandboxes is really hard... but Astral are demonstrable great at solving hard problems. I dream of them one day saying "we've solved sandboxing for Python scripts" ala Deno https://docs.deno.com/runtime/fundamentals/security/
It might be a cool thing for them to provide some kind of container metadata in the `# /// script` block so that e.g. it automatically runs the script in a container.
I took gp’s comment to mean something more like deno. Deno is nice because you can explicitly allow/deny filesystem, network, etc. in an ergonomic way like `—-allow-fs`
So not sure it would necessarily be ergonomically worse. It could even be a new run command `uv srun` or something…
uv run is using virtual envs, that's the de facto standard, and those are sandboxes for python deps. So it already is.
Plus inline deps mean you can pin python versions and 3rd party modules using pyproject.toml syntax in a comment of your script. This is not perfect locking, as it doesn't pin sub dependencies, but it's already more that any other tool out there.
If you want perfect locking, create a project, and use uv lock. You are already in a different category of code.
OP isn't talking about virtual environment style sandboxing, they're talking about sandboxes that prevent arbitrary code from deleting or stealing any information your user account has access to on your computer.
This has been attempted many times with python, and always been a failure because of the dynamism of the language, even by big actors.
The solution, therefor, as always been to use the OS tooling for that. Even the .Net ecosystem eventually went into that direction.
The JS ecosystem is making that mistake right now, and will of course, deprecate this API in 10 years after they realize they can't make it secure either unless they basically reimplement BSD jails entirely.
Docker isn’t a sandbox and shouldn’t be treated like one. Admittedly if I’m going to run untrusted code I’ll run it in Docker, but I’m aware that whatever I’m running could break out. I wouldn’t blindly run some bullshit even in Docker unless I’m 90% sure it’s safe already.
>but I’m aware that whatever I’m running could break out
If you have a working docker escape exploit at hand, that works on unprivileged containers, you can earn some good money. Just saying.
Docker was not created as a sandbox, but people rely on it for security and it is a sandbox at this point. Hell, containerd is one of kuberbetes backends and it absolutely relies on it being a secure sandbox.
Docker's primary purpose is to give applications their own namespaces in which they can run without conflict. It does confine applications to their own root filesystem, own process namespace and so on, but this isn't intended as a security boundary. cgroup escapes happen.
Firecracker and gVisor provide much stronger isolation. Both are battle tested; clouds run millions of multi-tenant workloads on these every day. Docker would simply never even be a candidate for this purpose.
This is an interesting development, especially considering the growing trend of code-sharing platforms. As others have pointed out, this move by GitHub to allow UV to run GitHub Gists blurs the lines between code hosting and execution environments. It's worth noting that this also puts UV in direct competition with other code execution services like Repl. it and Google Colab, both of which have been gaining traction in the developer community. I'm curious to see how UV will differentiate itself in this crowded space.
You know how you can "uv run" python code from a text file using just a URL?
No? Well, you can:
uv run https://pastebin.com/raw/RrEWSA5F
And since yesterday, you can even run a github gist:
uv run https://gist.github.com/charliermarsh/ea9eab7f56b1b3d41e5196...
You can also get the raw text of a Gist by adding .txt:
https://gist.github.com/charliermarsh/ea9eab7f56b1b3d41e5196...
That shows, more or less, what the code does.
Or more generally, pipe your script into stdin.
> print("hi")' | uv run -
> curl https://pastebin.com/raw/RrEWSA5F | uv run -
"uv run" seriously needs a sandbox. Running arbitrary code from arbitrary dependencies with 0 version locking provides no guarantees on what you are actually running.
Implementing sandboxes is really hard... but Astral are demonstrable great at solving hard problems. I dream of them one day saying "we've solved sandboxing for Python scripts" ala Deno https://docs.deno.com/runtime/fundamentals/security/
There are lots of options not native to the tool. Just a few:
devbox on macOS.
distrobox/toolbx on Linux.
Project Bluefin has some really good ideas and concepts about all this: https://docs.projectbluefin.io/bluefin-dx/
You can set dependencies explicitly in the script's header.
https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
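For reference, a script header with pinned dependencies looks something like this (the package name and version here are just an illustration, not from the thread):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests==2.32.3",
# ]
# ///

# uv reads the block above, builds an ephemeral venv with the pinned
# dependency, and runs the script inside it.
import requests

print(requests.get("https://example.com").status_code)
```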
That's the job of Docker or systemd-nspawn. It shouldn't be implemented by every single command.
devcontainer builds on top of it to extend the sandbox further.
It might be a cool thing for them to provide some kind of container metadata in the `# /// script` block so that e.g. it automatically runs the script in a container.
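Something like this, say (the `[tool.container]` table below is purely invented for illustration; no such uv feature exists today):

```
# /// script
# dependencies = ["requests"]
#
# [tool.container]                # hypothetical, not a real uv table
# image = "python:3.12-slim"      # container the script would run in
# network = "none"                # deny network access by default
# mounts = ["./data:ro"]          # read-only mount of the data dir
# ///
```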
Why is it their job to check for security? Sandboxing would make the ergonomics significantly worse for running quick scripts with uv run --script
I took GP's comment to mean something more like Deno. Deno is nice because you can explicitly allow/deny filesystem, network, etc. in an ergonomic way with flags like `--allow-read` and `--allow-net`.
So I'm not sure it would necessarily be ergonomically worse. It could even be a new run command, `uv srun` or something…
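A purely hypothetical sketch of what that could look like (neither `uv srun` nor these flags exist today):

```
# hypothetical permission-scoped runner, modeled on Deno's flags
uv srun --allow-read=./data --allow-net=pypi.org script.py
uv srun --deny-all script.py
```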
But uv isn't a framework; isn't that the difference, i.e. why they wouldn't necessarily think it appropriate to delve into that particular territory?
This is like asking why do web browsers need to sandbox javascript. Giving full permissions to untrusted code is an attacker's dream.
I have seen several Pyodide-in-Deno implementations lately.
Maybe it could be used along with Pyodide?
uv run uses virtual envs, the de facto standard, and those are sandboxes for Python deps. So it already has one.
Plus, inline deps mean you can pin Python versions and third-party modules using pyproject.toml syntax in a comment of your script. This is not perfect locking, as it doesn't pin sub-dependencies, but it's already more than any other tool out there.
If you want perfect locking, create a project and use uv lock. You are already in a different category of code.
OP isn't talking about virtual environment style sandboxing, they're talking about sandboxes that prevent arbitrary code from deleting or stealing any information your user account has access to on your computer.
This has been attempted many times with Python, and it has always failed because of the dynamism of the language, even when big actors tried.
The solution, therefore, has always been to use the OS tooling for that. Even the .NET ecosystem eventually went in that direction.
The JS ecosystem is making that mistake right now, and will, of course, deprecate this API in 10 years after they realize they can't make it secure either unless they basically reimplement BSD jails entirely.
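To illustrate the dynamism problem: even a toy "sandbox" that strips all builtins can be escaped through Python's introspection chain. A minimal sketch:

```python
# Toy "sandbox": evaluate untrusted code with no builtins available.
untrusted = "().__class__.__base__.__subclasses__()"
classes = eval(untrusted, {"__builtins__": {}})

# Despite the empty builtins, the expression walks from a tuple literal
# up to `object` and back down to every class loaded in the interpreter,
# including ones that reach file and OS-level functionality.
print(len(classes) > 0)  # True
```

Plugging holes like this one by one is exactly the game that historical efforts (rexec, restricted mode, pysandbox) lost.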
Deno has had this feature for five years already, since May 2020: https://deno.com/blog/v1
Run it in a Docker container?
Docker isn’t a sandbox and shouldn’t be treated like one. Admittedly if I’m going to run untrusted code I’ll run it in Docker, but I’m aware that whatever I’m running could break out. I wouldn’t blindly run some bullshit even in Docker unless I’m 90% sure it’s safe already.
>but I’m aware that whatever I’m running could break out
If you have a working docker escape exploit at hand, that works on unprivileged containers, you can earn some good money. Just saying.
Docker was not created as a sandbox, but people rely on it for security, and it is a sandbox at this point. Hell, containerd is one of Kubernetes' backends, and it absolutely relies on it being a secure sandbox.
Why is Docker (or extensions thereof) not a sandbox? Granted, it could access the internet, but that's necessary.
Docker's primary purpose is to give applications their own namespaces in which they can run without conflict. It does confine applications to their own root filesystem, own process namespace and so on, but this isn't intended as a security boundary. cgroup escapes happen.
Firecracker and gVisor provide much stronger isolation. Both are battle tested; clouds run millions of multi-tenant workloads on these every day. Docker would simply never even be a candidate for this purpose.
How do you get to 90% sure for code that has any dependencies?
This is an interesting development, especially considering the growing trend of code-sharing platforms. As others have pointed out, this move by GitHub to allow UV to run GitHub Gists blurs the lines between code hosting and execution environments. It's worth noting that this also puts UV in direct competition with other code execution services like Replit and Google Colab, both of which have been gaining traction in the developer community. I'm curious to see how UV will differentiate itself in this crowded space.
Did you even read the article?