top | item 45680843

valzam | 4 months ago

I assume this is similar to Ray?

cwp | 4 months ago

The code example is very similar to Ray.

Monarch:

  from monarch.actor import Actor, endpoint, this_host

  class Example(Actor):
      @endpoint
      def say_hello(self, txt):
          return f"hello {txt}"

  procs = this_host().spawn_procs({"gpus": 8})
  actors = procs.spawn("actors", Example)
  hello_future = actors.say_hello.call("world")
  hello_future.get()
Ray:

  import ray

  @ray.remote(num_gpus=1)
  class Example:
      def say_hello(self, txt):
          return f"hello {txt}"

  actors = [Example.remote() for _ in range(8)]
  hello_object_refs = [a.say_hello.remote("world") for a in actors]
  ray.get(hello_object_refs)
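
The structural difference the two snippets show (Ray hands you one handle and one object ref per actor, while Monarch gives you a single handle to the whole mesh and one future for a broadcast call) can be mimicked in plain Python. This is an illustrative sketch only; `broadcast_call` and the thread pool are stand-ins, not either library's API:

```python
# Hypothetical sketch (plain Python, no Monarch or Ray) contrasting the
# two call styles: one future per worker vs. one aggregate future.
from concurrent.futures import ThreadPoolExecutor, Future

def say_hello(txt):
    return f"hello {txt}"

pool = ThreadPoolExecutor(max_workers=8)

# Ray-style: a list of per-worker futures, gathered individually.
per_worker = [pool.submit(say_hello, "world") for _ in range(8)]
ray_style = [f.result() for f in per_worker]

# Monarch-style: one broadcast call returns a single future whose
# result aggregates every worker's answer.
def broadcast_call(fn, *args, n=8):
    agg = Future()
    futs = [pool.submit(fn, *args) for _ in range(n)]
    agg.set_result([f.result() for f in futs])
    return agg

monarch_style = broadcast_call(say_hello, "world").result()

assert ray_style == monarch_style == ["hello world"] * 8
```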

lairv | 4 months ago

I'm also curious what's the use case of this over Ray. Tighter integration with PyTorch/tensors abstractions?

porridgeraisin | 4 months ago

That.

Also, it has RDMA. Last I checked, Ray did not support RDMA.

There are probably other differences as well, but the lack of RDMA immediately splits the world into things you can do with Ray and things you cannot.

unnah | 4 months ago

There's also Dask, which can do distributed pandas and numpy operations, etc. However, it was originally developed for traditional HPC systems and has only limited support for GPU computing. https://www.dask.org/
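
For context, a minimal sketch of the distributed numpy-style work mentioned above, assuming Dask is installed (the array shape and chunk sizes here are arbitrary choices for illustration):

```python
import dask.array as da

# Build a chunked array; each 1000x1000 chunk becomes a task the
# scheduler can run on a different thread, process, or worker.
x = da.ones((4000, 4000), chunks=(1000, 1000))

# Operations are lazy: this only builds a task graph, nothing runs yet.
total = (x + x.T).sum()

# .compute() executes the graph (threaded scheduler by default).
print(total.compute())
```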