Simplify abstract data model to be more concrete #855
(Chair hat off.) DID DHT does not use JSON-LD for extensibility for a few reasons:
I believe DID DHT could potentially be adjusted to add processing rules to transform the document to one with a context, and register LD term definitions alongside registered properties. That said, it would be a breaking change. I am curious how other DID Methods leverage the abstract data model, and it would be good to get a sense of the variety of implementations out there before seeing if it's feasible to define a concrete representation. Separately, I am not sure this type of change is permitted, as it might fall under the Class 4 definition:
I believe this could be considered a "new feature," since it introduces new rules for representing DID Documents.
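The "processing rules to transform the document to one with a context" idea above could, in principle, be as simple as a resolver injecting a registered context before handing the document to JSON-LD tooling. A minimal sketch, assuming the rule is a simple context prepend (the document shape and the `add_context` helper are illustrative, not anything from the did:dht spec):

```python
import json

# Hypothetical processing rule: a resolver takes a context-less DID Document
# produced by a method like did:dht and prepends the DID Core @context so
# downstream JSON-LD tooling can consume it.
DID_CORE_CONTEXT = "https://www.w3.org/ns/did/v1"  # the DID Core context IRI

def add_context(document: dict) -> dict:
    """Return `document` with the DID Core @context prepended (no-op if present)."""
    if "@context" in document:
        return document  # already contextualized; nothing to do
    return {"@context": [DID_CORE_CONTEXT], **document}

plain = {"id": "did:dht:example123", "verificationMethod": []}
contextualized = add_context(plain)
print(json.dumps(contextualized, indent=2))
```

The breaking-change concern remains either way: existing did:dht documents in the wild would not carry the context, so the rule would have to live in every conforming resolver.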
I generally agree with the direction of simplifying the specification by removing the abstract data model and replacing it with a concrete one (which can then be converted to different representations such as YAML, CBOR, etc.).
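As an illustration of what a concrete data model buys: conversion to another representation becomes a mechanical encode/decode step over one well-defined tree, rather than a mapping through an abstract model. A sketch using plain JSON from the standard library (the document values are illustrative; a CBOR or YAML codec would slot into the same encode/decode position):

```python
import json

# A minimal DID Document expressed directly as a concrete JSON tree
# (illustrative values, not from any registered DID method).
doc = {
    "id": "did:example:123",
    "verificationMethod": [{
        "id": "did:example:123#key-1",
        "type": "JsonWebKey",
        "controller": "did:example:123",
    }],
}

# With a concrete model, "another representation" is just another codec
# over the same in-memory tree: encode, decode, and compare.
encoded = json.dumps(doc, sort_keys=True)
decoded = json.loads(encoded)
assert decoded == doc  # lossless round trip
```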
I also agree that it is possible to remove the abstract data model in a way that does not affect existing implementation conformance and that we should make an attempt at doing this. To provide a concrete proposal, this would entail:
To be clear, if any of the steps above would result in a conforming DID Method becoming non-conformant, we'd clearly have to figure out how to fix the spec text so that doesn't happen. The goal here is to simplify the specification while not invalidating any currently conforming DID Methods.
@decentralgabe wrote:
Hmm, the DID Core URL is 28 characters; a did:dht one would be maybe two to three times that? Trading 75 characters for no deterministic way to do extensibility doesn't seem like a good trade-off to me.
I don't understand these statements. IOW, the approach ensures that NO terms are defined (except maybe in did:dht, and who knows whether those definitions will conflict with definitions in other DID Methods). It feels like a recipe for guaranteed term conflicts in the future. I also don't understand "all terms have DNS-record mappings ahead of time" -- what does that mean? To be clear, I think
I fully support a normative requirement that the core data model be JSON-LD only, and eliminating the JSON and abstract data models from the next version of the technical recommendation. We've seen substantial confusion caused by this, and there are needless complexity and interoperability problems created by having an abstract data model that is, for the most part, just RDF... sometimes broken RDF. I think the W3C VCWG did the right thing by clarifying that W3C VCs are always JSON-LD, and allowing alternative serializations of digital credentials, such as ISO mDoc, OAuth SD-JWTs, attribute certs, and other formats, to be developed elsewhere. I would recommend that the DID WG take a similar approach. Do JSON-LD based DIDs as well as they can be done at W3C. Do not attempt to define multiple serializations of the data model. Provide concrete resolution guidance based on the JSON-LD ecosystem, such as document loaders, which can handle either URNs or URLs and which are already well supported in JSON-LD tooling. Address the
If people want to do "did like things" in CBOR or YAML, let them do that... but make it clear that DIDs are JSON-LD, just like it's now clear that W3C VCs are JSON-LD.
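A "document loader" in the sense used above is just the hook a JSON-LD processor calls to dereference `@context` IRIs, and it can serve URNs and URLs alike from a pinned local map rather than the network. A self-contained sketch, assuming a dict-shaped return value; the pinned context payloads and the exact loader interface are illustrative, not the API of any particular JSON-LD library:

```python
# A static document loader: resolves @context IRIs (URLs or URNs) from a
# pinned, in-process map instead of fetching over the network. Real JSON-LD
# libraries accept a callback with roughly this shape.
PINNED_CONTEXTS = {
    # Context payloads truncated to a single term each, for illustration.
    "https://www.w3.org/ns/did/v1": {"@context": {"id": "@id"}},
    "urn:example:did-context": {"@context": {"service": "@id"}},
}

def static_loader(iri: str) -> dict:
    """Return the pinned context document for `iri`, URL or URN alike."""
    try:
        return {"documentUrl": iri, "document": PINNED_CONTEXTS[iri]}
    except KeyError:
        raise ValueError(f"refusing to load unpinned context: {iri}")

loaded = static_loader("urn:example:did-context")
```

Pinning contexts this way also sidesteps the availability and tampering concerns of live dereferencing, which is why it is a common deployment pattern in JSON-LD tooling.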
@msporny it gets into the specifics of how did:dht works, and there is more detail here, but the short version is: as a size-saving mechanism the spec leverages a DID Document -> DNS packet mapping, and then, using DNS packet compression, the result is saved on the DHT. We did an analysis of a number of compression formats (plain bytes, JSON, CBOR, a custom binary serialization, and DNS) and found that DNS best balanced the trade-off between efficiency and already-existing software. Without a known mapping (and reverse mapping) between a property in the DID Document and its packet representation we cannot effectively store the record on the DHT, so these must be registered in the spec or a well-known registry to reduce inconsistencies across implementations. The spec itself has a registry for this purpose. Leveraging the existing DID registry is likely the best process: noting properties supported by did:dht linked to their DID registry reference. This is the approach we've taken so far, but we are open to other alternatives while maintaining the goal of saving as many bytes as possible.
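The mapping requirement described above can be sketched as a pair of pure functions: each registered property gets a known record name and a compact key=value encoding, so the reverse mapping is unambiguous. The record names and field abbreviations below are invented for illustration and do not reproduce the actual mapping in the did:dht spec/registry:

```python
# Illustrative DID Document property <-> DNS TXT record mapping: a forward
# function compresses a verification method into (record_name, rdata), and a
# reverse function reconstructs it. Record names and the t=/c= abbreviations
# are hypothetical; the real mapping is defined by the did:dht registry.

def key_to_txt(index: int, method: dict) -> tuple[str, str]:
    """Compress one verification method into a (record_name, rdata) pair."""
    name = f"_k{index}._did"
    rdata = f"t={method['type']};c={method['controller']}"
    return name, rdata

def txt_to_key(name: str, rdata: str) -> dict:
    """Reverse the mapping above back into a verification-method dict."""
    fields = dict(part.split("=", 1) for part in rdata.split(";"))
    return {"type": fields["t"], "controller": fields["c"]}

method = {"type": "JsonWebKey", "controller": "did:dht:example123"}
name, rdata = key_to_txt(0, method)
assert txt_to_key(name, rdata) == method  # round trip is lossless
```

The point of registering the mapping is exactly what the comment says: both functions must be known to every implementation ahead of time, or records written by one implementation cannot be read back by another.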
Ah, I see. I skimmed those sections and haven't tried to put the whole problem in my head to think about it more deeply. My gut reaction is that the "custom Domain-Specific Language for DNS encoding of DID Documents" thing feels a bit fraught, but that's a completely orthogonal issue. Based on what I saw in the spec, however, it feels like it would be fairly trivial for the DID Resolution process for
I would imagine that CBOR-LD applied to a
In any case, with respect to changes to the abstract data model, I would expect that there wouldn't be an issue for
Just to make it clear: this comment is with my W3C staff member's hat put down. TL;DR: my preference is to keep the abstract data model (ADM) as is. I have several reasons:
It has been suggested that the abstract data model in DID Core creates unnecessary complexity and that a more concrete data model should be selected, based on implementation experience over the past two years. This issue is to track the discussion of how that simplification might occur.