ciaranmn | 4 years ago
At this early stage we'd be delighted with a big improvement in component standardisation/typing, and for now we're focusing on building on existing ways of representing semantic information in order to achieve that. I agree that there's a lot of scope beyond that for improvements.
> How do we design a semantic typing system for real-world data that is amenable to the inevitable need to evolve or amend the schema, or leave off certain fields from a type instance, or tack on a few extra fields? Should our applications eschew canonical types (like schema.org's Person) in favor of a ducktyping approach (I don't care if you give me a schema.org Person, or a blockprotocol.org Person, or a notpachet.org Dog, as long as the blob has a numeric field for the "age" key)?
We may be indulging in cake-and-eat-it-ism, but we think the Block Protocol can allow for both.
The proposal is for the structure of entities to be represented in JSON Schema. Because JSON Schema is just a set of constraints, you can take a blob of data and compute whether it meets those constraints (i.e. structural/duck typing). So if a block has a schema which says that it requires a single field 'age: number', it doesn't matter what class the data belongs to: you can throw it into the block as long as it has that field. We have an API that will accept data objects and respond with blocks whose schemas the data is consistent with (though does not necessarily match exactly).
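To make the duck-typing point concrete, here's a minimal sketch (not the Block Protocol API, and handling only a small slice of JSON Schema) of checking whether an arbitrary blob satisfies a block's constraints. The schema and field names are illustrative:

```python
# A hypothetical block schema: requires a numeric "age" field, nothing more.
PERSON_AGE_SCHEMA = {
    "type": "object",
    "required": ["age"],
    "properties": {"age": {"type": "number"}},
}

def satisfies(data, schema):
    """Structural check: does `data` meet the schema's constraints?
    Extra fields are allowed -- only the declared constraints matter."""
    if schema.get("type") == "object" and not isinstance(data, dict):
        return False
    for key in schema.get("required", []):
        if key not in data:
            return False
    type_map = {"number": (int, float), "string": str, "object": dict}
    for key, sub in schema.get("properties", {}).items():
        if key in data:
            expected = type_map.get(sub.get("type"))
            if expected and not isinstance(data[key], expected):
                return False
    return True

# A schema.org Person, a dog, anything -- if it has a numeric "age", it fits:
print(satisfies({"name": "Ada", "age": 36}, PERSON_AGE_SCHEMA))    # True
print(satisfies({"species": "dog", "age": 4}, PERSON_AGE_SCHEMA))  # True
print(satisfies({"name": "no age here"}, PERSON_AGE_SCHEMA))       # False
```

A real implementation would of course delegate to a full JSON Schema validator rather than hand-rolling the checks.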
This doesn't preclude limiting certain operations/blocks to _only_ data which is linked/belongs to a specific schema/type (i.e. nominal typing), rather than anything that simply satisfies its constraints. There are circumstances in which ignoring the additional fields a type might have is an issue, although for presentational or editing blocks that don't claim to be a 1:1 map onto a specific type, I think it's fine and useful for them to be able to present/edit a subset of fields from various types.
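The nominal alternative described above might be sketched like this, where a block accepts only entities explicitly linked to a given type rather than anything with the right shape. The "entityTypeId" field name is an assumption for illustration, not taken from the proposal:

```python
# Hypothetical sketch of a nominal check: membership of the declared
# type matters, not the entity's shape.
PERSON_TYPE = "https://blockprotocol.org/types/Person"

def accepts_nominal(entity, required_type):
    """Nominal typing: the entity must declare membership of the type."""
    return entity.get("entityTypeId") == required_type

person = {"age": 36, "entityTypeId": PERSON_TYPE}
dog = {"age": 4, "entityTypeId": "https://example.org/types/Dog"}

print(accepts_nominal(person, PERSON_TYPE))  # True: declared Person
print(accepts_nominal(dog, PERSON_TYPE))     # False: right shape, wrong type
```

Both checks can coexist: a block could require nominal membership for some operations while accepting any structurally conforming data for display.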