Insanely awesome feature add (much needed for truly “serverless” application development). The power to scale here without insane infrastructure headache is amazing.
One day some kid is totally going to build a single-person billion dollar company from his mom’s basement.
That already happened, see plentyoffish.com, created by Markus Frind in Vancouver, the first (or first popular anyway) free online dating website. Sold for $575M. No outside funding, and for a long time it was just him and his girlfriend running it, working about 10 hours a week. When he sold it to match.com, they had 75 employees and he was working normal hours.
Wow, very cool! I didn't see the string "mobile code" in this press release, but that's essentially what this is, right? Automatically moving objects to be near the computation that needs it, is a long-standing dream. It's awesome to see that Cloudflare is giving it a try! Plus, the persistence is clever - I'm guessing that makes the semantics of mobility much easier to deal with.
I love the migration to nearby edge nodes, but here's a question to any Cloudflare employees around: Have you given any thought to automatically migrating Durable Objects to end user devices?
That has security implications of course, so if you've dismissed the idea previously because the security issues are too hard to surface to the developer, that's reasonable.
> Have you given any thought to automatically migrating Durable Objects to end user devices?
We don't have any current plans, but... I was the co-founder of Sandstorm.io before going to Cloudflare, and Durable Objects are very much inspired by parts of Sandstorm's design. So yeah, I've absolutely thought about it. ;)
It would definitely have to be an opt-in thing on the developer's part, due to the security considerations as you mention. But I think the possibilities for solving tricky compliance problems are pretty interesting.
Protip: "Compliance" is how you say "privacy" while sounding like a shrewd business person instead of an activist. ;)
>Automatically moving objects to be near the computation that needs it, is a long-standing dream. It's awesome to see that Cloudflare is giving it a try!
I'm not sure I see many real-world applications for this. It seems to sit in the unhappy middle ground between local device storage and central storage. Local storage gives the best performance because you eliminate network issues, but then you have to deal with sync/consistency issues. Central storage & processing eliminates sync/consistency issues but can have poor performance due to network. Workers Durable Objects sits in the middle. You trade consistency complications for performance, but instead of eliminating the network you're shaving some tens of milliseconds off the RTT. It's a level of performance improvement that essentially no one will notice.
To use their examples:
>Shopping cart: An online storefront could track a user's shopping cart in an object. The rest of the storefront could be served as a fully static web site. Cloudflare will automatically host the cart object close to the end user, minimizing latency.
>Game server: A multiplayer game could track the state of a match in an object, hosted on the edge close to the players.
>IoT coordination: Devices within a family's house could coordinate through an object, avoiding the need to talk to distant servers.
>Social feeds: Each user could have a Durable Object that aggregates their subscriptions.
>Comment/chat widgets: A web site that is otherwise static content can add a comment widget or even a live chat widget on individual articles. Each article would use a separate Durable Object to coordinate. This way the origin server can focus on static content only.
The performance benefits for the cart, social feed, and chat are irrelevant. Nobody cares if it takes 50 ms longer for any of those things.
IoT coordination is more promising because you want things to happen instantly. Maybe it's worth it here, but people usually have a device on their local network to coordinate these things.
Game server would definitely be an improvement. But these things are more complex than some JS functions and it would be a large effort to make them work with Durable Objects.
> I'm going to be honest: naming this product was hard, because it's not quite like any other cloud technology that is widely-used today.
On a superficial skim it looks like a tuple space; they were heavily researched in the 80s and 90s. JavaSpaces emerged in the late 90s but never took off.
Scala folks are keen on Actor models (Lightbend have been using the term "Stateful Serverless" for a while now), as are Erlang and Elixir folks.
I guess the key here is "widely-used".
Edit: this sounds even more arrogant than I intended. Sorry. I just feel bad for tuple space researchers (including my Honours supervisor). They laboured mightily in the 80s and 90s and their reward was to be largely ignored by industry.
It sounds fairly Actor-like to me. There's a bunch of different entities; each is a singular entity that lives somewhere and has its own state that only it can directly access. These Actors happen to be mobile in Durable Objects. And they are presented more object-like than actor-like, but that seems like a difference in name more than a difference in nature to me.
Edit: oh, here's @kentonv, capnproto author & Cloudflare employee, elsewhere in this discussion:
> Each object is essentially an Actor in the Actor Model sense. It can send messages (fetches, and responses to fetches) to other objects and regular workers. Incoming requests are not blocked while waiting for previous events to complete.
https://news.ycombinator.com/item?id=24617172
> The read/write limit per second? That's usually the first thing I want to know about my cloud primitives...
Well, this is kind of like asking the throughput of an individual Worker instance. It doesn't really matter, because the system automatically spins up as many as you need, and so the overall throughput is effectively unlimited.
For Durable Objects, applications should aim to make their objects as fine-grained as they reasonably can, so that the limits on one object are not likely to matter. Meanwhile, the total capacity across all objects is effectively unlimited.
Anyway, we don't have numbers for these questions yet. This is an early beta and we still have a lot of low-hanging fruit optimization to do.
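A sketch of what "fine-grained" might mean in practice (the `CARTS` binding name and request shapes are assumptions for illustration, not the exact API): give each logical entity its own object derived from a stable name, so per-object limits apply to one cart rather than the whole application.

```javascript
// Hypothetical routing in a Worker: one Durable Object per shopping cart.
// `env.CARTS` stands in for a Durable Object namespace binding.
async function handleCartRequest(request, env, userId) {
  const id = env.CARTS.idFromName(`cart:${userId}`); // same name -> same object
  const stub = env.CARTS.get(id);
  // Per-object limits now apply per cart; total capacity grows with cart count.
  return stub.fetch(request);
}
```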
Interesting, I didn't see how security works? Is there backpressure on message senders? Any ordering guarantees? Are messages queued so activated objects can reconstruct state? Can passivation warmth be controlled? Can objects support multiple threads? Can objects move? Failover?
> how security works?
Messages can only be sent to Durable Objects from other Workers. To send a message, you must configure the sending Worker with a "Durable Object Namespace Binding". Currently, we only permit workers on the same account to bind to a namespace. Without the binding, there's no way to talk to Durable Objects in that namespace.
> Is there backpressure on message senders?
Currently, the only message type is HTTP (including WebSocket). There is indeed backpressure on the HTTP request/response bodies and WebSocket streams.
In fact, this is exactly why we added streaming flow control to Cap'n Proto: https://capnproto.org/news/2020-04-23-capnproto-0.8.html
We plan to support other formats for messaging in the future.
> Any ordering guarantees?
Since each object is single-threaded, any block of code that doesn't contain an `await` statement is guaranteed to execute atomically. Any put()s to durable storage will be ordered according to when put() was invoked (even though it's an async method that you have to `await`.)
When sending messages to a Durable Object, two messages sent with the same stub will be delivered in order, i.e.:
let stub = OBJECT_NAMESPACE.get(id);
let promise1 = stub.fetch(request1);  // sent first
let promise2 = stub.fetch(request2);  // sent second; delivered after request1
await promise1;
await promise2;
If you have heard of a concept called "E-order" (from capability-based security and the E programming language designed by Mark Miller), we try to follow that wherever possible.
> Are messages queued so activated objects can reconstruct state?
No. The only state that is durable is what you explicitly store using the storage interface that is passed to the object's constructor. We don't attempt to reconstruct live object state. We thought about it, but there's a lot of tricky problems with that... maybe someday.
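To make the "explicit storage" point concrete, here is a minimal sketch of such an object; the class name, key names, and constructor shape are illustrative, and the `state.storage` interface is assumed to follow the get/put API described in this thread:

```javascript
// Sketch of a Durable Object class (names illustrative, not the exact API).
// Only what goes through storage.put() survives a restart; `this.value` is
// plain in-memory state that must be rebuilt from storage on first use.
class Counter {
  constructor(state, env) {
    this.storage = state.storage; // durable storage interface passed in
    this.value = undefined;       // in-memory cache of the durable value
  }

  async fetch(request) {
    if (this.value === undefined) {
      // Reconstruct state explicitly; nothing is restored automatically.
      this.value = (await this.storage.get("value")) || 0;
    }
    this.value += 1;
    await this.storage.put("value", this.value); // explicit persistence
    return new Response(String(this.value));
  }
}
```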
If the machine hosting an object randomly dies mid-request, the client will get an exception thrown from `stub.fetch()` and will have to retry (with a new stub; the existing stub is permanently disconnected per e-order). In capability-based terms, this is CapTP-style, not Ken-style.
> Can passivation warmth be controlled?
Sorry, I don't know what that means.
> Can objects support multiple threads?
No, each object is intentionally single-threaded. It's up to the app to replicate objects if needed, though we might add built-in features to simplify this in the future.
> Can objects move?
This is a big part of the plan -- objects will transparently migrate between datacenters to be close to whatever is talking to them. It's not fully implemented yet, but the pieces are there, we just need to write some more code. This will be done before coming out of beta.
> Failover?
If a machine goes down, we automatically move the object to a different machine. If a colo goes down, we will automatically move to another colo. We still have a little bit of missing code for colo failover -- the data is replicated already, but we haven't fully implemented the live failover just yet. Again, that'll happen before we exit beta.
> Perhaps I'm missing something important, but isn't this quite similar to Orleans grains and other distributed actors?
"Actors" was actually one of the names we used internally for a long time (it's still all over the code), but we eventually decided against it because we found that people familiar with the Actor Model actually expected something a bit different, so it confused them.
But yes, the basic idea is not entirely new. For me, Durable Objects derive from my previous work on Sandstorm.io, which in turn really derives from past work in Capability-based Security (many implementations of which are Actor-oriented). But while the idea is not entirely new, the approach is not very common in web infrastructure today.
(I'm not familiar with Orleans.)
Some wanky theory about computing and the design of programs follows. (Not out of scope considering the philosophical underpinnings of this product and the "edge", etc.)
The chat demo says:
> With the introduction of modules, we're experimenting with allowing text/data blobs to be uploaded and exposed as synthetic modules. We uploaded `chat.html` as a module of type `application/octet-stream`, i.e. just a byte blob. So when we import it as `HTML` here, we get the HTML content as an `ArrayBuffer`[...]
import HTML from "chat.html";
I've thought a lot about this for the work that I've been doing. From an ergonomics standpoint, it's really attractive, and the only other viable alternatives are (a) dynamically reading the asset, or (b) settling on using some wrapper pattern so the original asset can be represented in the host language, e.g.:
export const IMAGE_DATA =
  "iVBORw0KGgoAAAANSUhEUgAAAD8AAAA/..." +
  "..."
export const HTML = `
<!-- totally the HTML I wanted to use -->
`;
... which is much less attractive than the "import" way.
Ultimately I ended up going with something closer to the latter, and there wasn't even any reluctance about it on my part by the time I made the decision—I was pretty enthusiastic after having an insight verging on a minor epiphany.
I'd been conflicted around the same time also about representing "aliens" (cf Bracha) from other languages and integrating with them. I slapped my head after realizing that the entire reason for my uneasiness about the latter "data islands" approach was that I wasn't truly embracing objects, and that these two problems (foreign integration and foreign representation) were very closely related. Usually you don't actually want `HTML`, for example, and focusing on it is missing the forest for the trees. I.e., forget whatever you were planning with your intention to leave it to the caller/importer to define procedures for operating on this inert data. Make it a class that can be instantiated as an object that knows things about itself (e.g. the mimetype) and that you can send messages to, because that's what your program really wants it to be, anyway. Once you're at that point, the "wrapper" approach is much more palatable, because it's really not even a wrapper anymore.
Heh, that is actually a name we considered, and as a name on its own, I like it a lot.
But we also needed a name for the individual instances. We also found that the people who "got" the product were the ones who thought of it in terms of object-oriented programming (an object is an instance of a class). So we ended up gravitating towards "objects".
But I dunno, naming is hard. "Workers State" may in fact have been a better name!
Hey, I'm Greg, the PM working on Durable Objects at Cloudflare. As part of the private beta, we're looking to get feedback on the best way to price Durable Objects so they're accessible for all applications - small or large.
While we're in beta, storage access will be free. As we're thinking about it now, once we're out of beta this wouldn't be included in the base $5/mo plan.
Since there's both a compute component (a Durable Object runs code, like a Worker) and a storage component (for storage operations) to the product, we want the long-term pricing model to mesh those two in a transparent, competitive way.
While we're not finalized on price yet, you can expect that costs for storage will be cheaper than existing services like AWS DynamoDB or Google Cloud Firestore when we move out of beta.
For those in the beta, it's currently free. We are still working out what pricing will look like post-beta. We realized we need to see how people actually use it and get some feedback before we could settle on the right pricing structure... that's what betas are for.
As others said, we’re figuring out pricing during beta but hope to keep it in-line with pricing for Workers KV. And it may be possible for us to get pricing even lower than that.
This is a great question, and explains why we decided that the durable storage API needed to be explicit, rather than automatically serializing the in-memory object. Nothing is stored unless you explicitly use storage.put(key, value).
Since the storage is explicit, it's easy to upgrade the class definition. The in-memory object will be reconstructed and will need to refresh its state from storage.
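A sketch of why explicit storage makes upgrades manageable (the `profile-v1`/`profile-v2` keys and record shapes are invented for illustration): a new version of the class can migrate old records lazily as it reloads its state from storage.

```javascript
// Hypothetical lazy migration inside an upgraded object class.
// The old class version wrote "profile-v1"; the new class reads it once,
// rewrites it in the new shape, and uses "profile-v2" from then on.
async function loadProfile(storage) {
  let profile = await storage.get("profile-v2");
  if (profile === undefined) {
    const old = await storage.get("profile-v1");
    profile = { name: old ? old.name : "", tags: [] }; // new shape adds `tags`
    await storage.put("profile-v2", profile);          // persist migrated record
  }
  return profile;
}
```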
Are updates to Durable Objects guaranteed to be exactly-once? If an update is sent but the connection between client and object is dropped, how is that handled?
Yes, updates are guaranteed to happen exactly once or not at all.
If the connection drops, the Worker will receive an error and can re-establish its connection to the Durable Object. The update may or may not have been successfully persisted by the Durable Object - just like any other remote database operation where the connection drops before you receive the result back.
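A sketch of the client-side handling described above (the helper and its retry policy are assumptions; only `namespace.get(id)` and `stub.fetch()` come from this thread). Because a failed attempt may or may not have been persisted, the object itself would still need to deduplicate, e.g. via an app-chosen idempotency key:

```javascript
// Hypothetical retry wrapper for calls into a Durable Object.
// A stub that has thrown is permanently broken (per e-order), so each
// attempt gets a fresh stub from the namespace.
async function callWithRetry(namespace, id, makeRequest, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    const stub = namespace.get(id); // fresh stub per attempt
    try {
      return await stub.fetch(makeRequest());
    } catch (err) {
      lastError = err; // the update may or may not have been applied
    }
  }
  throw lastError;
}
```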
https://docs.microsoft.com/en-us/azure/azure-functions/durab...
From what I understand these features are a nice way to implement a serverless Actor Model. I was surprised to see no reference to it on the Cloudflare page.
Possibly. We did not base the design (or name) of Durable Objects on any other product we were aware of (except arguably Sandstorm.io, which was my startup before joining Cloudflare). I haven't looked closely at Azure Durable Functions.
We actually did call this product "Actors" internally for a long time, but we found that people who had done previous Actor Model work (e.g. in Erlang) ended up more confused than enlightened by this name, so we ditched it.
Is there only a single instance of the example Counter object globally and as there are no additional await'ed calls between the get and put operations, the atomicity is guaranteed? Is the object then prevented from getting instantiated on any other worker?
Can this result in a deadlock if I access DurableClass(1), then delayed DurableClass(2) in one worker and DurableClass(2) and delayed DurableClass(1) in another worker?
Each object is essentially an Actor in the Actor Model sense. It can send messages (fetches, and responses to fetches) to other objects and regular workers. Incoming requests are not blocked while waiting for previous events to complete.
Hence, a block is only atomic if it contains no "await" statements.
In the counter example, the only thing we "await" (after initialization) is the storage put()s. Technically, then, you could imagine that the put()s could be carried out in the wrong order. But, we're able to guarantee that even though put() is an async method, the actual writes will happen in the order in which put()s were called.
(For those with a background in capability-based security: Our system is based on Cap'n Proto RPC which implements something called E-order, which makes a lot of this possible.)
* Disclaimer: At this very moment, there are some known bugs where put()s could theoretically happen out-of-order, but we'll be fixing that during the beta.
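The "atomic unless you await" rule can be demonstrated with plain JavaScript event-loop behavior (a generic illustration, not Durable Objects code):

```javascript
// Any `await` yields to the event loop, so a read-modify-write that spans
// an await can interleave with another request and lose an update.
let counter = 0;

async function unsafeIncrement() {
  const v = counter;        // read
  await Promise.resolve();  // yield: another event may run here
  counter = v + 1;          // write based on a possibly stale read
}

async function safeIncrement() {
  counter += 1;             // no await: runs to completion atomically
}
```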
The actor model doesn't prevent "semantic" deadlocks that are caused by circular dependencies. It's kinda like reference counting which also doesn't handle cycles. In practice it doesn't matter and when it matters you have already saved enough brain cells that you can think about the tricky parts in isolation.
However, memory corruption via manual memory management and deadlocks via manual locking are commonly caused by simple and innocent programming mistakes, and are basically something one has to live with on a day-to-day basis.
This is awesome, and I'm so excited to read through the chat.mjs code. I might consider trying this out for a project. Does it mean I need to use Cloudflare? I wonder if, in the future, this could become more standard, and one could do something similar on their own infrastructure (maybe such a solution already exists, open sourced somewhere?)
Storage is replicated across a handful of nearby sites. It does add some latency to writes, but that's preferable to Objects being offline or lost in the case of hardware or network failures.
> Is there going to be Jepsen testing for this?
There's no Jepsen testing in the works at the moment, but we'll see if it makes sense in the future.
An object is limited to one thread. How many qps that is depends entirely on what your app does, since the app can run arbitrary code in the request handler...
- Demo: https://edge-chat-demo.cloudflareworkers.com
- A public room (to jointly test): hackernews
- Source: https://github.com/cloudflare/workers-chat-demo
Can the data store only store alphanumeric or can you write blobs? Could a chat app store uploads inside the object?
Signed up for beta invite -- does anyone happen to know whether all interested parties are admitted?
We're keeping access limited at first so we can get experience operating the system. We'll be expanding continually over the next few weeks.