I like this idea, but I don't know if Hyper is the best package to go with.
Hyper occupies part of the Rust ecosystem that I think suffers from package
bloat, like much of NPM. For example, currently Hyper requires 52 packages:
Part of this is just crates being broken up more in Rust. For example the `http` crate only contains trait (interface) definitions. They break down like so:
C/C++ suffers from severe wheel reinvention due to a lack of package management, but do you see people cautioning the use of programs written in that language for that reason?
The STL package contains lots of things, but most of them are pretty ugly to use and quite a few of them are quite complicated to use because they are generalized for as many use cases as possible. There are so many authors out there that decide they can do memory management manually, write their own thread or process pools, implement sorting in a different way, rewrite basic algorithms for list operations like mapping, filtering, zipping, reducing...
Simply counting the number of dependencies isn't a great indicator of dependency bloat. There are extremes on both ends: no deps --> I know everything better and reimplemented the world, and thousands of deps --> I put every one-liner in its own package! One should not judge too quickly.
> These complaints are valid, but my argument is that they’re also not NEW, and they’re certainly not unique to Rust ... The only thing new about it is that programmers are exposed to more of the costs of it up-front.
Most of these are maintained as sets of crates under the same project/maintainers. For example, everything starting with futures comes from one repo, everything starting with tokio plus mio (plus some others) are under the tokio-rs GitHub organization, all the windows bindings packages are from the same repo, etc.
Plus some of the dependencies are also dependencies of the standard library (hashbrown, cfg-if, libc).
I've recently taken a different stance on this. The issue isn't the number of dependencies, it's the reliability of said dependencies to be the following:
- Secure. Does this contain malicious code, exploits, or otherwise known bugs that could be fixed but aren't? This of course is hard, and will never be perfect. There are static security scans though, especially in a language like Rust, that should be able to verify what the code does before it's published for consumption via cargo. NPM is trying to do something similar. This isn't foolproof and more sophisticated exploits will always exist, but getting the low-hanging and mid-level fruit should be within reach, which is a net win
- Is it quality? This goes beyond security: does it provide real utility value? One thing we always hear is DRY, which may mean you sometimes consume a lot of dependencies, since the problem space you work in involves a lot of things. Why re-invent every single piece if something exists that you can glue together to start making an impact in your problem domain? I don't think this is an issue, especially if #1 is true
So I don't know, I think it's fine to have a lot of dependencies; the justification for those dependencies is often related to the complexity of the work involved. I'd expect a cURL replacement to have quite a few dependencies, since it's complicated software with lots of edge cases, so, for instance, not re-inventing an HTTP parser is a great idea.
Now if only we as developers all shared this sentiment when it comes to upstreaming contributions too. The more we contribute and share with each other, the more productive we can be.
Of course, sometimes package managers do a terrible job at ensuring some sort of base quality, around security or otherwise, and that's never great. So it's important to be aware of the trade-offs you make when you source your dependencies.
By no means does this excuse developers from not understanding their dependency tree either. It's really the opposite.
I also wonder if Hyper is really the right tool for the job here. When I was looking into HTTP/S crates, I decided against Hyper because it seemed to require bringing in a runtime for HTTPS, and the async nature of Hyper did not seem necessary for my totally synchronous CLI tool. For something like curl, it seems like you would just want the leanest, simplest synchronous HTTP implementation possible.
IMO "how many packages are the dependencies broken into" is a far less useful question than "how many maintainers have commit access to the dependency subtree".
The latter is a better question because:
* It's directly connected to your security posture.
* It's a stable metric across languages with different norms about module size.
A quick check with cargo-geiger shows many hundreds of unsafe invocations in the dependencies of Hyper. I think it’s hard to argue that a Rust HTTP library is irrefutably safer when you’ve thrown out so many of the static guarantees of the language and replaced them with “dude, trust me”.
This quote is interesting: “I’m a bit vague on the details here because it’s not my expertise, but Rust itself can’t even properly clean up its memory and just returns error when it hits such a condition. Clearly something to fix before a libcurl with hyper could claim identical behavior and never to leak memory”.
Basically, there are already a lot of interchangeable backends; they are going to introduce a new one written in Rust, which should improve memory safety, but Rust itself does not clean up after itself when it panics.
I see that Stenberg (bagder) is receiving funding for the work from the ISRG, but I wonder if McArthur (seanmonstar) is, too? It seems like a sizable amount of work on their part, too.
Rust's type system is able to carry a lot of information it can use to verify the memory safety of programs at compile time.
For example, the type system includes a piece called the borrow checker, which is able to guarantee that references are still valid when you use them, which eliminates use-after-free bugs (buffer overflows are prevented separately, by bounds checks on slices and collections).
In a similar vein, the type system includes information about in which ways types may be shared across threads, and by using this information, the compiler can guarantee that there are no data races whatsoever in multi-threaded programs.
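A minimal sketch of what that thread-sharing guarantee buys in practice (illustrative only, not code from any of the crates discussed): sharing a counter across threads compiles only because `Arc<Mutex<i32>>` is `Send + Sync`; handing a bare `&mut i32` to several threads would be rejected at compile time.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. The compiler accepts
// this only because Arc<Mutex<i32>> may be shared across threads; an
// unsynchronized shared mutable reference would not compile.
fn parallel_count(n_threads: usize, increments: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```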
> We’d like to thank Daniel for his willingness to be a leader on this issue. It’s not easy to make such significant changes to how wildly successful software is built, but we’ve come up with a great plan and together we’re going to make one of the most critical pieces of networking software in the world significantly more secure. We think this project can serve as a template for how we might secure more critical software, and we’re excited to learn along the way.
I don't have any data on exploitability, but 19 of the last 22 vulnerabilities (since 2018) have C-induced memory unsafety as a cause: https://curl.haxx.se/docs/security.html
Switching immediately to building with C++, and then migrating incrementally to safe forms in C++, would provide much more value per unit effort. It would also enable engagement by the orders-of-magnitude more available skilled C++ programmers, who could also pick up new skills writing modern, safe C++ to apply in other migrations.
It is not an either/or proposition. Certain, select modules could be recoded in Rust by particularly motivated Rust coders, leaving the huge amount of other code, for which there are too few Rust enthusiasts to work on, to be modernized in C++, and still able to call into the Rust code.
I think you may have misunderstood what the post is saying they're going to do. It is significantly more in line with your suggestion than you seem to think.
I'm unifying Rust's async HTTP implementations H1, H2, H3 and Google's tarpc in Rust~Actix~Torchbear. I just don't have a lot of time now since my house got broken into and I don't have enough money to rent anywhere. It also needs a lot of work on the parsing layer, and the laptops with my notes on them are hard to keep with me as I move around.
I like how the comment referenced in the article with the description "Rust itself can't even properly clean up its own memory" was answered today saying the restriction of unwinding on oom is going away; it's not a fundamental issue, just something that wasn't implemented that way the first time.
I find myself in the need of a "lib_download" a few times, a high level library that:
- support HTTP/HTTPS
- support proxy (for by-passing firewall, censorship, etc, http/https/socks5)
- download one large file in parallel (configurable temporary directory)
- download many small files in parallel (seems too high-level to put in a library, not sure this is a good feature)
- configurable retry (maybe too high-level to put in a library)
- resume download
- good error semantics
- an interface with defined behaviour
- progress report (useful for downloading large files)
I tried using a wrapped (in Rust) version of libcurl, and in the end I decided to just use the curl CLI: read through the man page and pass about 13 arguments to it to make its behaviour defined (to me, to a certain confidence level). I also pinned the curl executable to a specific version to avoid unknown changes.
The end result works, but the process is unnecessarily complicated (invoke the CLI binary, know which arguments to pass, know the meaning of the many error codes), and resuming is not pleasant to use. I guess libcurl is designed that way so that a curl master can tune all the knobs to do what he wants, but for an average library user who just wants to download things, it requires more attention than I'm willing to give.
Used in an interactive context, the issue of defined behaviour is usually overlooked, but when used as a library in a program that runs unattended and is expensive to upgrade/repair, achievable defined behaviour is a must. Testing is not an alternative to it; even experience is not an alternative (experience is time-consuming to get, and not transferable to others).
All package managers need to download packages from the internet, often via HTTP, so it's good to have an easy-to-use, well-defined, capable download library. Many of them use curl (Arch Linux's pacman, the Rust installation script); many use others with varying levels of capability. I think it would be beneficial if we had a good library (in Rust) for downloading things.
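For what it's worth, the "pin the curl CLI and pass explicit flags" approach described above can be wrapped in a few lines of Rust. The flag set here is an illustrative sketch (not the commenter's actual 13 arguments), built from documented curl options:

```rust
// Build a pinned, explicit argument list for the curl CLI, in the spirit
// of the comment above. These are real, documented curl flags, but the
// particular selection is a hypothetical sketch.
fn curl_args(url: &str, output: &str) -> Vec<String> {
    [
        "--fail",             // exit non-zero on HTTP errors instead of saving the error page
        "--location",         // follow redirects
        "--silent",
        "--show-error",       // stay quiet, but still report failures
        "--retry", "3",       // configurable retry
        "--continue-at", "-", // resume a partial download where it left off
        "--output", output,
        url,
    ]
    .iter()
    .map(|s| s.to_string())
    .collect()
}

fn main() {
    let args = curl_args("https://example.com/file.tar.gz", "file.tar.gz");
    // Actually running it would look like (not executed here, needs the network):
    // std::process::Command::new("curl").args(&args).status();
    assert!(args.contains(&"--retry".to_string()));
    assert_eq!(args.last().unwrap(), "https://example.com/file.tar.gz");
}
```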
Certainly, SPARK's GPLv3 license is incompatible with Curl's license. Curl's license is MIT-ish, but adds a prohibition for those who use it from using the author's name to promote their business. That restriction is not compatible with GPLv3. Since the author of curl took the time to write that restriction, I would imagine it is more important to them than SPARK. Since the copyleft licenses are hostile to restricting free speech, it's safe to assume that all copyleft options would be similarly unacceptable.
Rust is dual-licensed Apache-2.0/MIT, and Curl's modified MIT is compatible with MIT, so no such issue would exist for Rust.
(Standard disclaimer, I'm not your lawyer, no citations offered, seek legal counsel.)
I do not know much about Wuffs, but it seems to be completely safe. No arithmetic overflows, no bounds-check failures, no None-unwrapping panics, no memory-allocation-failure panics.
it's an interesting choice. i would have thought that fortifying http client libraries for major languages would be more important, but maybe they've already been hardened and interactive use of curl is a vector.
makes me wonder about other interactive tooling. would be interesting if there were malicious binaries that were benign at runtime but triggered bugs in debuggers and profilers.
I was under the impression that curl worked on more platforms than Rust and LLVM. It will be interesting to see what happens to curl support on those platforms going forward.
As the article indicates, it would be but one of dozens of existing backends, although it'd be one where few alternative backends currently exist (HTTP/1 and HTTP/2).
For instance libcurl can use any of 13 different TLS backends (one of which is already in Rust), or 3 different HTTP/3 backends (one of which is in Rust).
Libcurl supports multiple compile-time backends for HTTP support, encryption, and so forth. This will be no different. Hyper will be just one option among many, as will Rustls.
First, in a philosophical sense: pointers and x86 CPUs are real, ultimately any safe abstraction must be built on unsafe primitives. The ability and need to do that aren't specific to memory unsafety, we do that all over software engineering.
Second, empirically, my experience has been that the design of these abstractions can be safe, but moreover that the cordoning off of unsafe blocks makes 3p auditing for memory unsafety _much_ easier to do. It can be orders of magnitude faster than reviewing an entire C or C++ codebase.
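As a tiny illustration of that cordoning-off pattern (a made-up example, not from hyper): the `unsafe` block is one line, the justification sits right next to it, and callers only ever see a safe API — so an auditor can review the unsafe line in isolation.

```rust
// A safe wrapper over an unsafe primitive. The unsafe surface is a single
// auditable line; the function's public contract is entirely safe.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```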
OCaml has its fair share of unsafe features, with the functions helpfully prefixed by "unsafe_". If you look through the stdlib you'll find dozens of such functions. Plus real world code uses them to do things like avoiding bounds checks. Much as I'm a fan of OCaml, even a pure OCaml implementation could do unsafe things.
A language like OCaml can still have memory unsafety issues introduced by the compiler or standard library. It just makes it much more manageable to effectively audit for and fix such issues. `unsafe` blocks serve the same purpose.
Not only that, but to write anything more than a hello world requires lots of unsafe blocks. There are loads of them everywhere, in all of those crates, incl. the standard library.
The point is that they want further assurances that the "curl https://totally-not-evil.example.com/install.sh" part won't, in certain environments, screw up some pointer arithmetic and write the buffer into executable memory, or cause some other heartbleed-esque bug which can be exploited.
Piping it to "sudo bash" is perfectly acceptable in the eyes of the system. It's doing the instructions the user has asked it to, they've explicitly been configured as sudoers, and usually have been prompted to enter their password.
Historically there was a long period where this didn't do what you expect, which is very bad.
What this looks like it does, and indeed does today (modulo bugs some of which could be prevented using Rust) is:
Ask totally-not-evil.example.com for this install.sh resource and then run that as root as a Bash script. This is no worse than if you were to have totally-not-evil.example.com give you the bash script on a floppy disk or something. If you suspect they might actually be evil, or just incompetent, that's on you either way.
But for some years curl didn't make any effort to confirm it was getting this file from totally-not-evil.example.com. Connect over SSL, ignore all this security stuff, fetch the file. So then it's like you just accepted a floppy disk you got in the mail which says it's "from totally-not-evil.example.com" but might really be from anybody. That's definitely worse. Today you have to specify the --insecure flag to do this if you want to (Hint: You do not want to)
In your example, bash, sudo, linux, your DNS stack, ISP, router, clipboard and keyboard all play a role that is just as essential as curl in that command working the way you (cynically) expect it to.
The bug did pass the type checker. Memory-safe languages also have security issues. The program never run is the most secure; and just as a programmer gains experience, programs get "battle hardened".
> Hyper is a fast and safe HTTP implementation
Well.. Hyper does rely on unsafe blocks (14 at first glance[2]), so I don't know if we can just assume that it's safe. When Sergey Davidoff did their big smoke test of popular Rust HTTP implementations they found a couple of bugs[1] (through Reqwest).
I love the idea of a safer cURL, but I don't think you should take this as a magical answer to all of cURL's problems.
svnpenn|5 years ago
autocfg, bitflags, bytes, cfg-if, fnv, fuchsia-zircon, fuchsia-zircon-sys, futures-channel, futures-core, futures-sink, futures-task, futures-util, h2, hashbrown, http, http-body, httparse, httpdate, indexmap, iovec, itoa, kernel32-sys, lazy_static, libc, log, memchr, mio, miow, net2, pin-project, pin-project-internal, pin-project-lite, pin-utils, proc-macro2, quote, redox_syscall, slab, socket2, syn, tokio, tokio-util, tower-service, tracing, tracing-core, try-lock, unicode-xid, want, winapi, winapi-build, winapi-i686-pc-windows-gnu, winapi-x86_64-pc-windows-gnu, ws2_32-sys
nicoburns|5 years ago
Platform integration: libc, winapi, winapi-build, winapi-i686-pc-windows-gnu, winapi-x86_64-pc-windows-gnu, ws2_32-sys, fuchsia-zircon, fuchsia-zircon-sys, kernel32-sys, redox_syscall
Primitive algorithms: itoa, memchr, unicode-xid
Proc macro / pinning utilities: proc-macro2, autocfg, cfg-if, lazy_static, quote, syn, pin-project, pin-project-internal, pin-project-lite, pin-utils
Data structures: bitflags, bytes, fnv, hashbrown, indexmap, slab
Core Rust asyncio crates: mio, miow, iovec, tokio, tokio-util, futures-channel, futures-core, futures-sink, futures-task, futures-util
Logging: log, tracing, tracing-core
The following are effectively sub-crates of the project: http, http-body, httparse, httpdate, tower-service, h2
Not sure what these are for: net2, socket2, try-lock, want
kingkilr|5 years ago
Looking at the authors and publishers numbers from https://github.com/rust-secure-code/cargo-supply-chain it's clear a lot of these are maintained by the same set of trusted folks.
LockAndLol|5 years ago
steveklabnik|5 years ago
johncolanduoni|5 years ago
no_wizard|5 years ago
nine_k|5 years ago
Either you depend on other's work for that, or you roll your own. Choose your poison.
skohan|5 years ago
danielheath|5 years ago
timdorr|5 years ago
bytes, futures-core, futures-channel, futures-util, http, http-body, httpdate, httparse, h2, itoa, tracingfeatures, pin-project, tower-service, tokio, want
dochtman|5 years ago
sk2020|5 years ago
stjohnswarts|5 years ago
delfinom|5 years ago
[deleted]
sohkamyung|5 years ago
[1] https://daniel.haxx.se/blog/2020/10/09/rust-in-curl-with-hyp...
dang|5 years ago
bluejekyll|5 years ago
So Rust aborts on invalid memory accesses, unwrap on None, etc. It does not abort on memory leaks. I don’t see Rust aborting in that context as much different from a segfault, and it guards against more situations than a segfault is able to do. Additionally, when stack unwinding is enabled (default) aborts can be caught during runtime and handled specially, if that’s necessary.
Edit: I said “aborts” above, I should have said “panics”. The option in Rust is to disable unwinding and instead abort immediately: https://doc.rust-lang.org/edition-guide/rust-2018/error-hand...
That can’t be caught at runtime, to be clear.
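A sketch of the default-unwinding behaviour described above (assuming the default `panic = "unwind"` profile; with `panic = "abort"` in Cargo.toml the process would simply die instead):

```rust
use std::panic;

// With the default "unwind" panic strategy, a panic can be caught at
// runtime with catch_unwind and handled specially.
fn recovers_from_panic() -> bool {
    let result = panic::catch_unwind(|| {
        let v: Vec<i32> = Vec::new();
        v[0] // out-of-bounds index: panics instead of touching invalid memory
    });
    result.is_err()
}

fn main() {
    assert!(recovers_from_panic());
    println!("recovered from the panic");
}
```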
__s|5 years ago
Looks like they've figured out a good way to allow bringing in safety while avoiding the risks any change will bring
txdv|5 years ago
dochtman|5 years ago
https://github.com/fishinabarrel/bounty
steveklabnik|5 years ago
Glad to see it seems to be going well!
faitswulff|5 years ago
hpb42|5 years ago
kingkilr|5 years ago
- The borrow checker enforces mutable XOR shared references.
- The compiler does not allow use of local variables before they're assigned, requires structs to be completely initialized, etc.
- All the built-in data structures perform bounds checks.
- The compiler disallows dereferencing raw pointers except in unsafe blocks.
There's a lot of good things to be said about modern C++, particular smart pointers. However, it's significantly less resilient to common mistakes than Rust is: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
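A small sketch of two of those guarantees (bounds checks and mutable-XOR-shared) in action — my own example, not from the linked post:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // All built-in collections bounds-check: `v[10]` would panic rather
    // than read out-of-bounds memory; `get` is the non-panicking form.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(10), None);

    // Mutable XOR shared: while `r` immutably borrows `v`, a mutating
    // call like `v.push(40)` here would be a compile-time error.
    let r = &v[0];
    assert_eq!(*r, 10);
}
```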
steveklabnik|5 years ago
> But what are they?
The language's semantics are such that by default, you get memory safe code. This is checked at compile time. While many languages are memory safe, they often require a significant amount of runtime checking, with things like a garbage collector. Rust moves the vast majority of these kinds of checks to compile time, and so has the performance profile of C or C++, while still retaining memory safety.
> How does it compare with modern C++?
One way to look at Rust is "modern C++, but enforced, and by default." But that ignores some significant differences. For example, Rust's Box<T> and std::unique_ptr are similar, but the latter can still be null, whereas Rust's can't. C++ cannot be checked statically for memory safety; even if modern C++ helps improve things, it doesn't go as far as Rust does.
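A sketch of the Box<T> point: absence has to be spelled out in the type as `Option<Box<T>>`, so there is no null to forget to check.

```rust
// Box<T> can never be null; "might be absent" must appear in the type,
// unlike std::unique_ptr, which can silently hold nullptr.
fn describe(maybe: Option<Box<i32>>) -> String {
    match maybe {
        Some(b) => format!("value: {}", b),
        None => "empty".to_string(),
    }
}

fn main() {
    assert_eq!(describe(Some(Box::new(7))), "value: 7");
    assert_eq!(describe(None), "empty");
}
```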
aliceryhl|5 years ago
navaati|5 years ago
faitswulff|5 years ago
pjmlp|5 years ago
https://blogs.windows.com/windowsdeveloper/2020/04/30/rust-w...
M2Ys4U|5 years ago
[0] https://github.com/cloudflare/quiche
nindalf|5 years ago
As a user of software, it makes me happy to know that folks are investing in making the nuts and bolts safer and more secure.
pjmlp|5 years ago
Heck, even Checked C would do, if it ever gets fully done.
In any case, looking forward to the results.
mehrdadn|5 years ago
kingkilr|5 years ago
ncmncm|5 years ago
steveklabnik|5 years ago
mitchtbaum|5 years ago
https://github.com/google/tarpc
https://github.com/actix/actix-web/tree/master/actix-http/sr...
https://github.com/hyperium/h2
https://github.com/djc/quinn/tree/main/quinn-h3
https://github.com/speakeasy-engine/torchbear/blob/master/sr...
~
There's beauty in this with the fluency in which complex applications like the coming secure social network, Radiojade, are built. See this example:
!# https://github.com/foundpatternscellar/ping-pong/blob/master...
Curl users, do you really want to stick with Bash's syntax instead of this??
ameixaseca|5 years ago
7kmph|5 years ago
charonn0|5 years ago
The --libcurl command line argument can help translate curl to libcurl.
0: https://ec.haxx.se/libcurl/libcurl--libcurl
johnisgood|5 years ago
floatingatoll|5 years ago
benibela|5 years ago
a-dub|5 years ago
kej|5 years ago
masklinn|5 years ago
dralley|5 years ago
unknown|5 years ago
[deleted]
unknown|5 years ago
[deleted]
Kednicma|5 years ago
[deleted]
kingkilr|5 years ago
rwmj|5 years ago
mehrdadn|5 years ago
nicoburns|5 years ago
johnisgood|5 years ago
cbm-vic-20|5 years ago
erinaceousjones|5 years ago
tialaramex|5 years ago
scrollaway|5 years ago
Glad you feel safer though.
z3t4|5 years ago
benecollyridam|5 years ago
[1] https://web.archive.org/web/20200506212152/https://medium.co... [2] I ran `grep -oR unsafe . | wc -l` after cloning the repo
steveklabnik|5 years ago
Is anyone actually suggesting this?