Chapter 2 Understanding ODS-RAM and ODS Protocols

This chapter organizes "what should be implemented" for readers who are in a position to implement Open Dataspaces technologies, referring to the design philosophy and architectural paradigm of ODS and the guidelines set out in its reference architecture model, "ODS-RAM (Open Data Spaces Reference Architecture Model)."

2.1 Design Philosophy of Open Dataspaces

Open Dataspaces is not merely a mechanism for sharing or publishing data; its purpose is to enable the management of data distributed across multiple enterprises and organizations while maintaining trust. The following A through E are essential to achieving this. For details on the design philosophy, refer to "Design Philosophy".

A. Distributed Data Management

Open Dataspaces does not assume that data is managed in a centralized manner by any specific party. Data remains under the management of each enterprise's or organization's own domain, and is provided as a product within the necessary scope and under defined conditions. (Reference: Design Philosophy, Chapters 3 and 4)

B. Semantics and Ontology

Open Dataspaces explicitly separates "data structure and values (data model)" from "data meaning (information model)," and constrains domain-derived context through ontology, thereby enabling inference by data users. (Reference: Design Philosophy, Chapter 5)
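
As a loose illustration of this separation, the sketch below uses JSON-LD-style structure, which is one well-known way to attach an ontology context to plain values. The ODS specifications do not prescribe this format, and the vocabulary URLs are hypothetical examples, not ODS endpoints.

```python
import json

# Data model: structure and values only.
data = {"sensor_id": "S-001", "reading": 21.5}

# Information model: an ontology context gives each field its domain meaning.
# The vocabulary URLs below are hypothetical, for illustration only.
context = {
    "@context": {
        "sensor_id": "https://example.org/ontology#SensorIdentifier",
        "reading": "https://example.org/ontology#TemperatureCelsius",
    }
}

# A provider publishes both together; a consumer that understands the
# ontology can infer that "reading" denotes a temperature in Celsius,
# even though the raw value is just a number.
document = {**context, **data}
print(json.dumps(document, indent=2))
```

Because the context travels with the data, a consumer in a different domain can interpret the values without a bilateral agreement on field names.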

C. Data Addressability and Discoverability

Open Dataspaces is designed so that data and ontology endpoints are globally identifiable and discoverable. (Reference: Design Philosophy, Chapter 6)

D. Identity and Usage Control

Open Dataspaces treats as design targets: "who can access data," "whether that party can be trusted," "under what conditions data can be provided and used," and "whether a given transaction can be considered to have been conducted correctly." The asymmetry of trust, security, and rights and obligations relationships is handled explicitly. (Reference: Design Philosophy, Chapter 7)
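
The questions above can be made concrete with a minimal usage-control sketch. The field names are hypothetical, modeled loosely on ODRL-style usage policies rather than on any normative ODS schema.

```python
from dataclasses import dataclass


@dataclass
class UsagePolicy:
    assignee: str      # who may access the data
    action: str        # what that party may do with it
    expires_days: int  # a storage/usage condition set by the provider


def may_use(policy: UsagePolicy, party: str, action: str, age_days: int) -> bool:
    """Return True only if the requesting party, the requested action,
    and the elapsed time all fall within the provider-defined policy."""
    return (
        party == policy.assignee
        and action == policy.action
        and age_days <= policy.expires_days
    )


policy = UsagePolicy(assignee="org:acme", action="read", expires_days=30)
print(may_use(policy, "org:acme", "read", 10))   # party, action, and age all permitted
print(may_use(policy, "org:other", "read", 10))  # rejected: not the authorized party
```

The point of the sketch is that the provider, not the consumer, authors the policy: scope of provision, conditions, and responsibility boundaries are self-determined on the providing side.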

E. Interoperability

In ODS-RAM, "ODS Protocols (ODP)" are defined as the protocols to be relied upon in order to achieve connectivity across different enterprises, organizations, and legal jurisdictions. The design principle is to avoid dependence on any specific vendor, product, or legal framework. (Reference: Design Philosophy, Chapter 3)

2.2 Service Models Assumed in ODS-RAM

ODS-RAM assumes two implementation patterns:

  • Distributed Service Model: an approach in which domain owners themselves build a Self-Serve Data Platform and provide Data Product/Ontology Product based on DPQM.

  • Federated Service Model: an approach in which a managed service provider delivers the basic software stack that constitutes DPQM on behalf of the domain owner, while the domain owner retains responsibility for providing Data/Ontology Product.

It should be noted that even in the federated service model, data and ontology remain the responsibility of the data provider, who exercises usage control over them.

2.3 Components and Functional Overview

This section classifies the components that make up ODS, based on the ODP, into Common Functionalities, Fundamental Protocols, and Complementary Protocols, and provides an overview of what each is designed to enable.

2.3.1 Fundamental Protocols

Fundamental Protocols are specifications providing the core functions required to realize Open Dataspaces; they must be adopted to realize the functions of the corresponding layer or perspective.

  • Common Functionalities:

    • Versioning: A function for managing changes to specifications, configurations, and components, and for maintaining interoperability among participants on a shared set of assumptions

    • Logging: A function that enables comprehensive recording and analysis of system communication status, processing results of each service, and overall monitoring and operational status, based on three categories: communication logs, service logs, and processing logs

    • Monitoring: A function for continuously monitoring and managing system health and performance by tracking the execution environment and service operational status from system logs

    • Notifier: An information notification function for sharing updates and acknowledgment status related to data transactions between data providers and users

  • Usage Control: An interface function that enables data providers to self-determine the scope of provision, storage/usage conditions, and responsibility boundaries (the Usage Control function itself is not included)

  • Data Trust Assessment: An interface function that enables the evaluation and calculation of data integrity and non-tampering (the evaluation and calculation of integrity and non-tampering itself is not included)

  • Data Trustworthiness and Quality Assessment: An interface function that enables the evaluation and calculation of data quality and trustworthiness (the evaluation and calculation of quality and trustworthiness itself is not included)

  • Transaction: A function responsible for endpoint management, process control, and data transfer, serving as the junction point across each layer to enable transactions

  • Identity and Trust: A function for identifying participating parties and providing the trust foundation that ensures data exchange occurs only between legitimate parties and only with authorized access to legitimate resources

  • Metadata Exchange: A function for exchanging metadata related to data location and meaning, enabling data discoverability and semantic interpretation in a distributed environment

  • Discovery and Search: An advanced function for exploration and search based on metadata
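
To show how Metadata Exchange and Discovery and Search fit together, the sketch below searches a catalog of exchanged metadata records. The record fields, endpoints, and matching logic are all hypothetical; the point is that only metadata is exchanged, so the data itself stays at each provider's domain.

```python
# A catalog built from metadata exchanged by providers; entries describe
# where data lives and what it means, never the data values themselves.
catalog = [
    {"endpoint": "https://provider-a.example/products/1",
     "title": "Plant energy usage", "tags": ["energy", "manufacturing"]},
    {"endpoint": "https://provider-b.example/products/7",
     "title": "Fleet telemetry", "tags": ["logistics"]},
]


def discover(catalog: list[dict], keyword: str) -> list[str]:
    """Return endpoints whose metadata mentions the keyword; the data
    itself is never copied into the catalog, only addressed by it."""
    keyword = keyword.lower()
    return [
        record["endpoint"]
        for record in catalog
        if keyword in record["title"].lower()
        or keyword in [tag.lower() for tag in record["tags"]]
    ]


print(discover(catalog, "energy"))
```

A consumer that discovers an endpoint this way would then go through Identity and Trust, Usage Control, and Transaction to actually obtain the data.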

2.3.2 Complementary Protocols

Complementary Protocols are specifications providing supplementary functions required to realize Open Dataspaces; they may be adopted as needed to realize the functions of the corresponding layer or perspective.

  • Heuristic Contracting: An interface function that supports the definition and agreement of terms of use and contractual conditions through third-party electronic contract applications (the contracting function itself is not included)

  • Clearing and Payment: An interface function for recording and reconciling data transaction records through third-party electronic payment applications, and for processing settlement and billing/payment (the settlement and billing/payment function itself is not included)

  • Marketplace: An interface function for forming a venue for data transactions among multiple participants and supporting distribution and transaction expansion (the marketplace function itself is not included)
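
A recurring pattern in the protocols above is that ODS defines only the interface, while the capability itself ("the function itself is not included") is supplied by a third-party application. The sketch below illustrates that pattern for Clearing and Payment; the class and method names are hypothetical and do not come from the ODS specifications.

```python
from abc import ABC, abstractmethod


class ClearingInterface(ABC):
    """ODS-side interface function; the settlement and billing/payment
    logic itself is deliberately not included here."""

    @abstractmethod
    def record_transaction(self, transaction_id: str, amount: float) -> None:
        ...


class ThirdPartyClearing(ClearingInterface):
    """A third-party electronic payment application plugged into the
    interface; this is where the actual settlement logic would live."""

    def __init__(self) -> None:
        self.ledger: dict[str, float] = {}

    def record_transaction(self, transaction_id: str, amount: float) -> None:
        self.ledger[transaction_id] = amount


# The dataspace component depends only on the interface type, so the
# payment application can be swapped without changing the ODS side.
clearing: ClearingInterface = ThirdPartyClearing()
clearing.record_transaction("tx-42", 120.0)
print(clearing.ledger)
```

The same interface-versus-implementation split applies to Heuristic Contracting and Marketplace: the protocol standardizes how the external application is invoked, not what it does internally.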
