No questions from me, just some appreciation and thanks for the release. While it is clearly not founded solely on the pure and selfless love of AWS for Rust, it is nevertheless very positive for the language to have good stable ways to work with major platforms. Writing things on AWS in Rust is now a significantly easier sell.
Thanks for all the work on this, looking forward to trying a few new pieces out!
Thanks for showing up and answering questions. Congratulations on the release.
What are the plans for supporting Rust's evolving async ecosystem?
Any particular reason why the public roadmap does not show columns like "Researching" and "We're Working On It", as the other public AWS roadmaps do? See the Containers roadmap for an example: https://github.com/aws/containers-roadmap/projects/1
It would be nice to have fully working examples on GitHub for the most common scenarios across most AWS services. This is something AWS SDKs have historically been inconsistent on. Just a request, not really a question :-)
The blog post mentions support for 300+ services. I have a couple of questions:
1. It would be interesting to see a comparison between the Rust service coverage and other language SDKs that have been around for a while such as Java. Is there such a place to see this comparison?
2. Will the Rust SDK stay up to date with the latest services as they're announced?
I'm very excited to see this announcement. It's been a long time coming.
Are there plans to improve the compilation times? The AWS SDK crates are some of the slowest dependencies in our build, which feels odd for what are basically wrappers around an HTTP client.
What are the differences in the design principles of the AWS Rust SDK compared to AWS SDKs of other languages? In what ways is it special to work best with the Rust ecosystem?
I attended a re:Invent session yesterday on using Rust as a Lambda runtime. The potential performance improvements, especially with limited memory, were quite compelling. I'm looking forward to trying this SDK out with Rust Lambdas.
At my company we’ve written all of our Lambda functions in Rust. It’s a perfect fit with the constraints in Lambda. We did customize the runtime somewhat for our needs but that wasn’t all that complicated.
I realize this is a "how long is a piece of string" question, but I'm wondering what cost benefits you might realistically see from moving lambdas from Python to a faster language like Rust? You pay (partly) for execution time so I guess you should see some savings, but I'm wondering how that works out in practice. Worth it?
Here's a fun answer to that question: Rubygems saved infinity money. That is, they got resource usage down to the point where they could move to the free tier.
This paper is not about lambdas and their typical operations specifically, but it shows that across a variety of tasks, as of 2017, Rust is more environmentally friendly than Python.
Instrument your Python code and gather metrics, or use a profiler. If it is heavily CPU-bound and spends all its time in the Python interpreter, it might benefit from moving to a more efficient language. If it's mostly waiting on IO (e.g. remote services), the difference might be negligible.
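Since Lambda bills by GB-seconds, the potential savings can be sketched with back-of-envelope arithmetic. The numbers below are illustrative, not measurements, and the rate is the x86 on-demand price at the time of writing:

```rust
/// Monthly compute cost in USD for a Lambda function with the given
/// per-invocation duration and memory allocation.
fn monthly_cost(invocations: f64, seconds_per_call: f64, memory_mb: f64) -> f64 {
    // x86 on-demand rate per GB-second at the time of writing.
    const PRICE_PER_GB_SECOND: f64 = 0.0000166667;
    invocations * seconds_per_call * (memory_mb / 1024.0) * PRICE_PER_GB_SECOND
}

fn main() {
    // Made-up workload: 10M invocations/month, Python at 800 ms / 512 MB
    // vs Rust at 50 ms / 128 MB (faster and with a smaller footprint).
    let python = monthly_cost(10_000_000.0, 0.800, 512.0);
    let rust = monthly_cost(10_000_000.0, 0.050, 128.0);
    println!("python: ${python:.2}, rust: ${rust:.2}");
    // prints: python: $66.67, rust: $1.04
}
```

Whether that gap matters obviously depends on scale; for many workloads the Lambda compute line item is small to begin with, which is why profiling first is good advice.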
I'm a Rust beginner, so please excuse any naivete herein: Does this SDK _necessarily_ require an async runtime or is it possible to use it in a traditional sync application using whatever extra facilities (e.g. block_on) which would be required to "normalize" it?
You can use tokio's block_on to sync-ify. You need to instantiate a runtime, but you don't need to run your whole application in it, just the Future.
edit: Tokio can be beefy. You might look at some of the smaller single-threaded runtimes to execute your future in the main application thread if you’re only concerned about serial execution.
I just heard about the AWS CRT at the AWS re:Invent innovation talk on storage.
1. Does the Rust SDK use CRT under the hood? I use the Rust SDK to access S3 and wonder if there are any automatic performance gains?
2. I couldn't find good material on how the AWS CRT works and how it is integrated with the Java or Python S3 connectors. I would appreciate a more technical explanation. Do you have any links that explain this in more depth?
One thing I sorely missed was workers for consuming SQS messages. Ended up having an intern adapt a worker for the old community AWS SDK (rusoto) into this: https://github.com/Landeed/sqs_worker
Also on my dream list of features: gRPC support for Lambda.
Hah, that reminds me of a decade or so ago: there was an entire unofficial Node SDK before the official one came out. The unofficial one still supported a bunch of features the main one lacked for a while.
As with all the other AWS SDKs, the bulk of the code is generated. The JSON service definitions are shared; the effort (one expects) is in adding support for all the different ways in which the JSON indicates that services behave, and in making the result look like it could have been hand-written.
What are some valid reasons why people wouldn't now use these Rust libraries and extend them to their preferred language? Maintaining clients is tedious work and prone to abandonment.
I would expect AWS to provide custom libraries for basically every language. The cost of a few full-time engineers who are experts at any reasonably popular language is probably pocket change compared to how much even a few companies using that language might spend on AWS services.
Not all languages have a great interop story with Rust. Binding through the JNI is especially tricky, for example. Furthermore, when performance isn't important, the need to package and compile Rust code may be an unnecessary hassle.
steveklabnik|2 years ago
* https://andre.arko.net/2018/10/25/parsing-logs-230x-faster-w...
* https://andre.arko.net/2019/01/11/parsing-logs-faster-with-r...
(obviously most people will not realize infinity money)
odyssey7|2 years ago
https://greenlab.di.uminho.pt/wp-content/uploads/2017/09/pap...
rusbus|2 years ago
For S3, there is a meta-layer that intercepts requests to S3 and converts them into ranged GETs and multipart uploads for parallelization. It's quite complex and can also use significantly more memory, but it does allow *much* faster uploads and downloads in some circumstances.
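The splitting itself is the easy part. A sketch of how a ranged-get layer might carve an object into HTTP `Range` header values (a hypothetical helper, not the CRT's actual code; the real thing also tunes part size, bounds concurrency, and retries individual parts):

```rust
/// Split an object of `total_len` bytes into HTTP Range header values of at
/// most `part` bytes each, e.g. "bytes=0-8388607". Each range can then be
/// fetched concurrently and the parts reassembled in order.
fn range_headers(total_len: u64, part: u64) -> Vec<String> {
    let mut out = Vec::new();
    let mut start = 0;
    while start < total_len {
        // Range headers use inclusive byte offsets.
        let end = (start + part - 1).min(total_len - 1);
        out.push(format!("bytes={start}-{end}"));
        start = end + 1;
    }
    out
}

fn main() {
    // A 20 MiB object in 8 MiB parts -> three ranged GETs.
    for r in range_headers(20 * 1024 * 1024, 8 * 1024 * 1024) {
        println!("{r}");
    }
}
```

The extra memory cost mentioned above comes from holding multiple in-flight parts in buffers before they can be written out in order.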
Aeolun|2 years ago
On the plus side, I guess the work lends itself well to parallelization.