3 new data models for the device subject

There are 3 new data models in the device subject, thanks to the contribution of the P2CODE project.

Sense HAT – Raspberry Pi

  • PolarH10. A data model for the Polar H10 heart rate sensor, covering RR intervals, HRV, HR, and ECG.
  • SenseHat. A data model for Sense HAT sensor readings, covering the array of sensing capabilities available for the Raspberry Pi.
  • UWBAnchor. A data model for the Ultra Wideband (UWB) Anchor: an electronic device that detects UWB pulses emitted by UWB Tags and forwards them to the location server for calculating tag positions.

Thanks to the new contributors

Use Smart Data Models as a service: first draft

In the data-models repository you can access the first version of Smart Data Models as a service, thanks to the work done for the Cyclops project.

The files available create a wrapper around the pysmartdatamodels package and also add a service for the online validation of NGSI-LD payloads.

Here are the README contents, which explain this first version:

 

Smart Data Models API and Demo

This project consists of four main components:

  1. A FastAPI-based web service (pysdm_api3.py) that provides access to Smart Data Models functionality
  2. A demo script (demo_script2.py) that demonstrates the API endpoints
  3. A requirements file listing the components' dependencies
  4. A bash script to run everything

API Service (pysdm_api3.py)

A RESTful API that interfaces with the pysmartdatamodels library to provide access to Smart Data Models functionality.

Key Features

  • Payload Validation: Validate JSON payloads against Smart Data Models schemas
  • Data Model Exploration: Browse subjects, data models, and their attributes
  • Search Functionality: Find data models by exact or approximate name matching
  • Context Retrieval: Get @context information for data models
  • If you need other endpoints, please email us at info@smartdatamodels.org

Endpoints

Endpoint | Method | Description
/validate-url | GET | Validate a JSON payload from a URL against Smart Data Models
/subjects | GET | List all available subjects
/datamodels/{subject_name} | GET | List data models for a subject
/datamodels/{subject_name}/{datamodel_name}/attributes | GET | Get attributes of a data model
/datamodels/{subject_name}/{datamodel_name}/example | GET | Get an example payload of a data model
/search/datamodels/{name_pattern}/{likelihood} | GET | Search for data models by approximate name
/datamodels/exact-match/{datamodel_name} | GET | Find a data model by exact name
/subjects/exact-match/{subject_name} | GET | Check if a subject exists by exact name
/datamodels/{datamodel_name}/contexts | GET | Get @context(s) for a data model name

Validation Process

The /validate-url endpoint performs comprehensive validation:

  1. Fetches JSON from the provided URL
  2. Normalizes NGSI-LD payloads to key-values format
  3. Extracts the payload type
  4. Finds all subjects containing this type
  5. Retrieves schemas for each subject
  6. Validates against all schemas
  7. Returns consolidated results
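Steps 2 and 3 of this flow can be sketched as below; the function names and the exact normalization rules are illustrative assumptions for this sketch, not the service's actual code.

```python
import json

def normalize_to_keyvalues(entity):
    """Convert an NGSI-LD normalized entity to key-values form (step 2).

    Sketch: for every attribute object carrying a "value" (Property)
    or an "object" (Relationship), keep only that inner value; plain
    members such as "id" and "type" are left as-is.
    """
    keyvalues = {}
    for key, attr in entity.items():
        if isinstance(attr, dict) and "value" in attr:
            keyvalues[key] = attr["value"]    # Property
        elif isinstance(attr, dict) and "object" in attr:
            keyvalues[key] = attr["object"]   # Relationship
        else:
            keyvalues[key] = attr             # id, type, @context, ...
    return keyvalues

def extract_type(entity):
    """Step 3: the entity type drives the later schema lookup."""
    return entity.get("type")

# Example normalized NGSI-LD payload
entity = {
    "id": "urn:ngsi-ld:WeatherObserved:001",
    "type": "WeatherObserved",
    "temperature": {"type": "Property", "value": 21.5},
}
print(json.dumps(normalize_to_keyvalues(entity)))
print(extract_type(entity))
```

The extracted type is then matched against every subject that contains a data model of that name, and the payload is validated against each retrieved schema.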

Demo Script (demo_script2.py)

A simple interactive script that demonstrates the API endpoints by opening a series of pre-configured URLs in your default web browser.

Features

  • Opens each URL in a new browser tab
  • Pauses between URLs for user input
  • Allows early termination with ‘exit’ command
  • Provides clear progress indicators

Usage

  1. Configure the URLs in the my_web_urls list
  2. Run the script: python demo_script2.py
  3. Follow the on-screen instructions
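The demo loop itself could look like the following sketch; the injectable opener and prompt parameters are assumptions added here so the loop can run without a real browser, and are not part of demo_script2.py.

```python
import webbrowser

def run_demo(urls, opener=webbrowser.open_new_tab, prompt=input):
    """Open each URL in a new browser tab, pausing for input between them.

    Typing 'exit' at a pause stops the demo early. Returns the number
    of URLs actually opened.
    """
    for i, url in enumerate(urls, start=1):
        print(f"[{i}/{len(urls)}] opening {url}")
        opener(url)
        if i < len(urls):
            answer = prompt("Press Enter for the next URL, or type 'exit': ")
            if answer.strip().lower() == "exit":
                print("Demo stopped early.")
                return i
    print("Demo finished.")
    return len(urls)

# Example (using a stand-in opener so no browser is needed):
visited = []
run_demo(["http://localhost:8000/subjects"], opener=visited.append, prompt=lambda _: "")
```

Passing the opener and prompt as parameters also makes the loop easy to test without user interaction.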

The demo includes examples of:

  • Payload validation
  • Subject listing
  • Data model exploration
  • Attribute retrieval
  • Example payloads
  • Search functionality
  • Exact matching
  • Context retrieval

Requirements

  • Python 3.7+
  • FastAPI
  • Uvicorn
  • httpx
  • pydantic
  • jsonschema
  • webbrowser (standard library)

Installation

pip install fastapi uvicorn httpx pydantic jsonschema pysmartdatamodels

Running the API

python pysdm_api3.py

Running the demo

python demo_script2.py

Configuration

Edit the my_web_urls list in demo_script2.py to change which endpoints are demonstrated.

License

Apache 2.0

New script for testing several data models at the same time.

Most of the files of the testing process have been updated, and the source code has been made available:

https://github.com/smart-data-models/data-models/tree/master/test_data_model

There is also a new file:

multiple_tests.py

This file enables you to test all the data models located in an internal subject (the subdirectories of the root one). Currently this option is not available as a form, but if you send us an email at info@smartdatamodels.org we could create a specific form for it.

See here an example of the outcome.

Another tiny improvement on the new testing process (ngsild payloads)

In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.jsonld is a valid NGSI-LD file.

This process helps contributors debug their data models before submitting them officially (there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Tiny improvement on the new testing process

In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.json is a valid NGSIv2 file.

This process helps contributors debug their data models before submitting them officially (there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Improved test method for data models

When you want to contribute a new data model (or an improvement in an existing one) you need to pass a test.

The current process (3rd option in the tools menu) keeps working as before.

But we have drafted a new method because:

– We need to be more explicit about which tests passed and which errors were detected

– We need to improve performance

So you can check the new method in the 4th option of the Tools menu.

Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, because the code has been released; indeed, you can write new tests to be included in the official site. Make a PR on the data-models repo and we will add it eventually. Check this post.

New testing process in progress where you can contribute your code

Current test process for new and extended data models

In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the tools menu on the front page:

Pro: it is currently working

Con: it is mostly implemented in a single file for testing, and the error messages are not very explicit about the errors detected

The new process

1) Every test is an independent file:

2) To test the new data model, it copies the files locally and then runs the tests, which is quicker.

What you can do with basic knowledge of Python (or with a good AI service)

Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository

Instructions

Smart Data Models Validator

This Python script validates the structure and contents of a local directory containing the basic files for an official Smart Data Model, i.e. standard data models and supporting files. It checks the presence and correctness of JSON schemas, examples, and YAML documentation using a set of predefined tests, according to the contribution manual.

🚀 Features

  • Supports both GitHub URLs and local paths
  • Downloads all the required files like schema.json, examples/*.json, ADOPTERS.yaml, and more
  • Runs a series of validation tests and outputs structured JSON results
  • Configuration-driven paths for results and downloads
  • Parallel file downloading for GitHub sources
  • Cleanup of temporary files after execution
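The parallel download of GitHub sources could be sketched with a thread pool as below; download_all and the injectable fetcher are illustrative assumptions for this sketch, not the script's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls, fetcher):
    """Fetch several files concurrently (sketch of the parallel download).

    fetcher(url) -> file contents; in the real script this would be a
    requests.get(...) call against the GitHub raw-file URLs.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        # pool.map preserves input order, so zipping back is safe
        return dict(zip(urls, pool.map(fetcher, urls)))

# Usage with a stand-in fetcher, so no network is needed:
files = download_all(["schema.json", "ADOPTERS.yaml"], fetcher=lambda u: f"<contents of {u}>")
print(files["schema.json"])
```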

🧪 How to Use

📦 Prerequisites

  • Python 3.6 or newer
  • requests library (pip install requests)

📁 Configuration

You need to configure the script by editing the config.json file, which has the following content:

{
  "results_dir": "Put a local directory where the script can write, and it will store the results for the tests",
  "results_dir_help": "Nothing to edit here it is just instructions",
  "download_dir": "Put a local directory where the files being tested can be temporary stored (they are removed by the end of the test)",
  "download_dir_help": "Nothing to edit here it is just instructions"
}
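A filled-in config.json might look like this; the two directory paths are placeholders to adapt to your machine, and the *_help keys are just inline instructions that can stay as-is:

```json
{
  "results_dir": "/tmp/sdm_results",
  "results_dir_help": "Nothing to edit here it is just instructions",
  "download_dir": "/tmp/sdm_downloads",
  "download_dir_help": "Nothing to edit here it is just instructions"
}
```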

Usage

The file master_tests.py can be called:

python3 master_tests.py <repo_url_or_local_path> <email> <only_report_errors> [--published true|false] [--private true|false] [--output output.json]
  • '<repo_url_or_local_path>' is the local path or URL of the repository where the data model is located. Either works, because in both cases the files are copied locally and removed once the tests have finished. Whether you are going to test one file or all of them, this parameter has to be the root of the directory where the files are located. The expected structure is described in the contribution manual; see for example the https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved file structure
  • '<email>' is the email of the user running the test
  • '<only_report_errors>' is a boolean (true or 1) to show only the unsuccessful tests

The file master_tests.py executes all the files in the tests directory, as long as they are included in this line of code:

   test_files = [
            "test_file_exists",
            "test_valid_json",
            "test_yaml_files",
            "test_schema_descriptions",
            "test_schema_metadata",
            "test_string_incorrect",
            "test_valid_keyvalues_examples",
            "test_valid_ngsiv2",
            "test_valid_ngsild",
            "test_duplicated_attributes",
            "test_array_object_structure",
            "test_name_attributes"
        ]

so if you create a new test you need to extend this list with your file name. Bear in mind these points:

  1. The file you create has to contain a function with the same name as the file. For example, the file test_schema_descriptions.py has a function named test_schema_descriptions
  2. Every function returns 3 values:
    • test_name: the description of the test run
    • success: a boolean value indicating whether the overall test has been successful
    • output: all the messages for the issues or successfully passed tests, in JSON format so they are easy to manage
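A new test module following these conventions could look like this minimal sketch; the duplicate-id check (and the stand-in data inside it) is a made-up example of something to test, not one of the official tests.

```python
# test_unique_ids.py -- hypothetical example of a new test module.
# Convention: the function name must match the module name.

def test_unique_ids(repo_path="."):
    """Check a made-up rule, returning the 3 expected values.

    Returns:
        test_name (str): description of the test run
        success (bool): whether the overall test passed
        output (dict): per-issue messages in JSON-friendly form
    """
    test_name = "Checking that example ids are unique"
    seen, issues = set(), []
    # Stand-in data; a real test would read the files under repo_path
    for entity_id in ["urn:ngsi-ld:Demo:1", "urn:ngsi-ld:Demo:1"]:
        if entity_id in seen:
            issues.append(f"duplicated id: {entity_id}")
        seen.add(entity_id)
    success = not issues
    output = {"errors": issues} if issues else {"message": "all ids unique"}
    return test_name, success, output
```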

🔍 Smart Data Models Multi-Data Model Validator

This script automates the validation of multiple data models within a GitHub repository or a local directory by invoking the master_tests.py script on each one.


📦 Overview

  • Automatically lists first-level subdirectories of a specified GitHub folder
  • Executes master_tests.py for each subdirectory
  • Aggregates all validation results into a single timestamped JSON file with the results of the tests
  • Supports filtering to report only errors

🧰 Requirements

  • Python 3.6+
  • Dependencies:
    • requests (for GitHub API)
  • master_tests.py must be available in the same directory and executable

Install required Python packages if needed:

pip install requests

Usage

python3 multiple_tests.py <subject_root> <email> <only_report_errors>
Parameter | Description
subject_root | Full GitHub URL or local path of the directory containing the subfolders to be tested
email | Email used in naming result files and tagging output
only_report_errors | true or false (case-insensitive); limits output to failed tests only

Output

The script creates a JSON file named like test_results_YYYY-MM-DD_HH-MM-SS.json containing all subdirectory test results.

Each entry in the output JSON has:

  • The subdirectory name (datamodel)
  • The result as returned by master_tests.py
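The aggregation into the timestamped results file could be sketched as follows; run_master_tests is a stand-in for the actual call into master_tests.py, not its real interface.

```python
import json
from datetime import datetime

def aggregate_results(subdirs, run_master_tests):
    """Collect one result entry per data model subdirectory."""
    return [{"datamodel": d, "result": run_master_tests(d)} for d in subdirs]

def results_filename(now=None):
    """Build a test_results_YYYY-MM-DD_HH-MM-SS.json name, as described above."""
    now = now or datetime.now()
    return f"test_results_{now.strftime('%Y-%m-%d_%H-%M-%S')}.json"

# Usage with a stand-in runner, so master_tests.py is not needed:
entries = aggregate_results(["WeatherObserved"], run_master_tests=lambda d: {"success": True})
print(json.dumps(entries))
print(results_filename())
```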

Contact

For questions, bug reports, or feedback, please use the Issues tab or contact: 📧 info@smartdatamodels.org

New subject Smart Data Models and new data model Attribute

Eating your own dog food is, for SDM, a demonstration that agile standardization works

We have created a new subject, SmartDataModels, where the structure of the assets of SDM will be released. We have started with the data model Attribute, according to the global database of attributes (currently more than 157,000 attributes, over 100 MB, so it takes a while to download)

 

  • Attribute. Description of the data model Attribute