burmako's comments
burmako | 3 years ago | on: OpenXLA Is Available Now
I'd also like to comment on our (StableHLO's) relationship with related work. StableHLO was a natural choice for the OpenXLA project because a very similar operation set, HLO, powers many of its key components. That said, I would also like to give a shout-out to related opsets in the ML community, including MIL, ONNX, TFLite, TOSA and WebNN.
Bootstrapping from HLO made a lot of sense to get things going, but that's just a starting point. There are many great ideas out there, and we're looking to evolve StableHLO beyond its roots. For example, we want to provide functionality to represent dynamism, quantization and sparsity, and there's so much to learn from related work.
We'd love to collaborate, and from the StableHLO side we can offer production-grade lowerings from TensorFlow, JAX and PyTorch, as well as compatibility with OpenXLA. Some of these connections in the ML ecosystem have already started growing organically, and we're super excited about that.
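To make the JAX connection concrete, here is a minimal sketch of lowering a JAX function to StableHLO. It assumes a recent JAX release where jit-lowered computations expose their IR via `.lower(...).as_text()`, with StableHLO as the default dialect; exact APIs and defaults may differ across versions.

```python
# Minimal sketch: lower a JAX function to StableHLO text.
# Assumes a recent JAX where StableHLO is the default lowering dialect.
import jax
import jax.numpy as jnp

def f(x):
    return jnp.tanh(x) + 1.0

# Trace and lower the function for a concrete input shape/dtype.
lowered = jax.jit(f).lower(jnp.ones((2, 3), dtype=jnp.float32))

# Print the portable StableHLO representation of the program.
print(lowered.as_text())
```

The printed module contains ops like `stablehlo.tanh` and `stablehlo.add`, and that serialized form is what downstream OpenXLA components (and, in principle, other implementations of the opset) consume.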
burmako | 3 years ago | on: OpenXLA Is Available Now
Also, OpenXLA is one of the external organizations in the Coordination section of the working group charter: https://w3c.github.io/machine-learning-charter/charter.html. We're looking forward to collaborating with WebML folks!
In the StableHLO spec, we talk about this in more abstract terms, as the "StableHLO opset", so that we can unambiguously reason about the semantics of StableHLO programs. In practice, however, the StableHLO dialect is currently the primary implementation of the opset.
I wrote "primary implementation" because, for example, there is also ongoing work on adding StableHLO support to the TFLite flatbuffer schema: https://github.com/tensorflow/tensorflow/blob/master/tensorf.... Having an abstract notion of the StableHLO opset gives us a single source of truth that all the implementations must correspond to.