scurvy_steve's comments
scurvy_steve | 2 years ago | on: The costs of microservices (2020)
scurvy_steve | 2 years ago | on: Stealing OAuth tokens of Microsoft accounts via open redirect in Harvest App
State is for preventing CSRF, not transferring data. Don't abuse state; it's wrong.
Use your own authorize URL, add an encrypted cookie, and redirect to the real one. Even if the cookie is encrypted, only put some kind of session/cache key in it; don't actually send "info". Read the cookie in the callback, then delete it.
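A minimal sketch of that pattern, using only the Python standard library. All names here (SECRET, CACHE, the example.com URLs) are hypothetical stand-ins: state is a pure CSRF token, the cookie carries only a signed cache key, and the final redirect target lives server-side.

```python
import hashlib
import hmac
import secrets

SECRET = b"app-secret"   # hypothetical server-side signing key
CACHE = {}               # stands in for a real session/cache store

def start_auth(final_uri):
    """Begin the flow: stash the final URI server-side, keyed by an opaque token."""
    state = secrets.token_urlsafe(16)   # CSRF token only, carries no data
    key = secrets.token_urlsafe(16)     # opaque cache key for the cookie
    CACHE[key] = {"state": state, "final_uri": final_uri}
    # Sign the cache key so a tampered cookie is rejected before any lookup.
    mac = hmac.new(SECRET, key.encode(), hashlib.sha256).hexdigest()
    cookie = key + "." + mac
    authorize_url = (
        "https://login.example.com/authorize"
        "?redirect_uri=https://app.example.com/callback"
        "&state=" + state
    )
    return cookie, authorize_url   # caller sets the cookie and redirects

def handle_callback(cookie, returned_state):
    """Callback: verify the cookie, check state, read and delete the entry."""
    key, _, mac = cookie.partition(".")
    expected = hmac.new(SECRET, key.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("tampered cookie")
    entry = CACHE.pop(key, None)   # read once, then delete
    if entry is None or not hmac.compare_digest(entry["state"], returned_state):
        raise ValueError("CSRF check failed")
    return entry["final_uri"]      # safe: came from our own store, not the request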
scurvy_steve | 2 years ago | on: Stealing OAuth tokens of Microsoft accounts via open redirect in Harvest App
Go to some Harvest authorize URL,
that redirects to the Microsoft authorize URL with redirect_uri=registered_uri and state=some_encoded_final_uri,
user enters credentials,
Microsoft redirects to a registered URI,
which reads the state parameter and redirects to the URI encoded in state.
This exploit still redirects to an authorized URI, but that endpoint then reads the state parameter and happily forwards the response/token.
Three mistakes in this: abusing state; not encrypting and validating state if you are going to abuse it; and enabling the implicit grant (even if they needed it, they should have made a second registration with limited uses).
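If you are going to abuse state anyway, the second mistake above is fixable: sign the payload and validate both the signature and the decoded target before redirecting. A sketch under assumed names (SECRET and the app.example.com allowlist are hypothetical):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"oauth-state-key"   # hypothetical server-side signing key

def make_state(final_uri):
    """Encode the final URI into state, with an HMAC so it can't be forged."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"final_uri": final_uri}).encode()
    ).decode()
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + mac

def read_state(state):
    """Validate the signature, then check the target against an allowlist."""
    payload, _, mac = state.partition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("state failed validation")
    data = json.loads(base64.urlsafe_b64decode(payload.encode()))
    uri = data["final_uri"]
    # Signature alone isn't enough: also refuse open redirects.
    if not uri.startswith("https://app.example.com/"):
        raise ValueError("redirect target not allowlisted")
    return uri
```

Either check alone would have stopped this exploit: a forged state fails the HMAC, and a signed-but-external URI fails the allowlist.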
scurvy_steve | 3 years ago | on: The sad history of Unicode printf-style format specifiers in Visual C++ (2019)
scurvy_steve | 4 years ago | on: How to Design Better APIs
I would mostly agree except in one case: streaming data is easier without an envelope. Streaming an array from inside an envelope is usually more code and messier than just getting the metadata out of a header. So if you have something like data-integration endpoints and you expect someone could pull many megs of records, consider no envelope.
scurvy_steve | 4 years ago | on: The WebSocket Handbook
An on-demand button press can start a process that runs for multiple days, and this is expected. A job can do 100k API requests or read/transform/write millions of records from a database; this is also expected. Out-of-memory errors happen often and are expected. It's not our bad code, it's the customer's bad code.
Since jobs run as microservices on isolated machines, this is all fine. A customer (or multiple at once) can set up something badly, run out of resources, and fail or go really slow, and nobody is affected but them.