Updated the test file test_duplicated_attributes

Contributors of new data models can test their data models in their local repositories using the source code of the testing tool, and they can also test them online at:

Home -> tools -> test your data model

The test has been updated to deal with attributes coming from languageMap properties in NGSI-LD.
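For context, languageMap is the value container of an NGSI-LD LanguageProperty, keyed by language codes rather than attribute names, so a duplicate check has to look at keys while the JSON is being parsed. The following minimal sketch shows one way to detect repeated keys in a payload; it is only an illustration, not the actual code of test_duplicated_attributes.

```python
import json

def find_duplicated_keys(raw_json: str) -> list:
    """Return keys that appear more than once inside the same JSON object.

    json.loads() silently keeps only the last occurrence of a repeated key,
    so duplicates have to be caught during parsing, via object_pairs_hook.
    """
    duplicates = []

    def check_pairs(pairs):
        seen = set()
        for key, _value in pairs:
            if key in seen:
                duplicates.append(key)
            seen.add(key)
        return dict(pairs)

    json.loads(raw_json, object_pairs_hook=check_pairs)
    return duplicates

# Example NGSI-LD entity with a LanguageProperty: its languageMap is keyed by
# language codes ("en", "es", ...), not by attribute names.
payload = """{
  "id": "urn:ngsi-ld:Example:001",
  "type": "Example",
  "name": {
    "type": "LanguageProperty",
    "languageMap": {"en": "name in English", "es": "nombre en castellano"}
  }
}"""

print(find_duplicated_keys(payload))  # [] -> no duplicated attributes
```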

New pysmartdatamodels Version 0.8.0.2: Enhanced with More Models and Automated Contributor Tools

We are thrilled to announce a significant new release of our Python package, pysmartdatamodels, designed to empower developers and streamline the contribution process for our community.

This update is packed with new data models and, most notably, powerful new validation and testing services.

What’s New in This Version?

An Expanded Library of Data Models

The heart of the Smart Data Models initiative is our comprehensive library of open-licensed data models. With this new release, we have expanded it further, adding new models across our thirteen domains, from Smart Cities to Smart Energy. This package continues to provide powerful functions to integrate more than 1,000 standardized data models into your projects, digital twins, and data spaces.
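As a quick taste of the package (the import path and function name below follow the package README; verify them against the version you install):

```python
# A minimal sketch; check the pysmartdatamodels README on PyPI for the
# authoritative list of functions and their signatures.
from pysmartdatamodels import pysmartdatamodels as sdm

datamodels = sdm.list_all_datamodels()   # flat list with the 1,000+ data model names
print(len(datamodels), "data models available")
```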

New! Automated Testing for Contributors

To improve the quality and speed of contributions, we are excited to launch a brand new service for our contributors. You can now automatically test your data models before submitting them. This automated validation ensures that new models comply with our standards, making the review and integration process smoother for everyone.

For those who want to integrate this validation into their own workflows, the source code for the testing tool is also available.

Online Validation for Examples

In addition to the new contributor tool, you can also use the online service to validate payloads against existing data model examples.

Get the Latest Version

You can find all the details, explore the functions, and get the latest package from the Python Package Index (PyPI). The updated README file includes comprehensive documentation on all the new features.

➡️ Get the package on PyPI

Our Commitment to Open and Interoperable Data

We are committed to making data interoperability easier for everyone. These updates are a direct result of community feedback and the hard work of our contributors. A huge thank you to everyone who has helped make this possible!

Use smart data models as a service. First draft

In the data-models repository you can access the first version of smart data models as a service. Thanks to the work done for the Cyclops project.

The available files create a wrapper around the pysmartdatamodels package and also add a service for the online validation of NGSI-LD payloads.

Here are the README contents, which explain this first version:

 

Smart Data Models API and Demo

This project consists of the following components:

  1. A FastAPI-based web service (pysdm_api3.py) that provides access to Smart Data Models functionality
  2. A demo script (demo_script2.py) that demonstrates the API endpoints
  3. A requirements file with the dependencies of these components
  4. A bash script to run the service

API Service (pysdm_api3.py)

A RESTful API that interfaces with the pysmartdatamodels library to provide access to Smart Data Models functionality.

Key Features

  • Payload Validation: Validate JSON payloads against Smart Data Models schemas
  • Data Model Exploration: Browse subjects, data models, and their attributes
  • Search Functionality: Find data models by exact or approximate name matching
  • Context Retrieval: Get @context information for data models
  • Other features: if you need something else, please email us at info@smartdatamodels.org

Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /validate-url | GET | Validate a JSON payload from a URL against Smart Data Models |
| /subjects | GET | List all available subjects |
| /datamodels/{subject_name} | GET | List data models for a subject |
| /datamodels/{subject_name}/{datamodel_name}/attributes | GET | Get attributes of a data model |
| /datamodels/{subject_name}/{datamodel_name}/example | GET | Get an example payload of a data model |
| /search/datamodels/{name_pattern}/{likelihood} | GET | Search for data models by approximate name |
| /datamodels/exact-match/{datamodel_name} | GET | Find a data model by exact name |
| /subjects/exact-match/{subject_name} | GET | Check if a subject exists by exact name |
| /datamodels/{datamodel_name}/contexts | GET | Get @context(s) for a data model name |
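Once the API is running (see "Running the API" below), any HTTP client can exercise these endpoints. Here is a minimal sketch using httpx; the address assumes the default uvicorn settings, and the query parameter name for /validate-url is an assumption, so check the interactive docs FastAPI generates at /docs:

```python
import httpx

BASE = "http://127.0.0.1:8000"  # assumed default uvicorn address; adjust to your setup

# List all available subjects (the exact response shape is defined by pysdm_api3.py)
print(httpx.get(f"{BASE}/subjects").json())

# Validate a payload published at a URL; the 'url' parameter name is an assumption,
# confirm it in the OpenAPI docs served at {BASE}/docs
example_url = "https://example.org/my-payload.json"  # hypothetical payload location
print(httpx.get(f"{BASE}/validate-url", params={"url": example_url}).json())
```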

Validation Process

The /validate-url endpoint performs comprehensive validation:

  1. Fetches JSON from the provided URL
  2. Normalizes NGSI-LD payloads to key-values format
  3. Extracts the payload type
  4. Finds all subjects containing this type
  5. Retrieves schemas for each subject
  6. Validates against all schemas
  7. Returns consolidated results
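A condensed sketch of this flow using httpx and jsonschema; it illustrates the steps rather than reproducing pysdm_api3.py, and the schemas_for_type helper that resolves steps 4 and 5 is assumed:

```python
import httpx
import jsonschema

def keyvalues(entity: dict) -> dict:
    """Step 2: flatten an NGSI-LD normalized entity into key-values format."""
    flat = {}
    for key, value in entity.items():
        if isinstance(value, dict) and "value" in value:
            flat[key] = value["value"]            # Property / GeoProperty
        elif isinstance(value, dict) and "object" in value:
            flat[key] = value["object"]           # Relationship
        elif isinstance(value, dict) and "languageMap" in value:
            flat[key] = value["languageMap"]      # LanguageProperty
        else:
            flat[key] = value                     # id, type, @context, plain values
    return flat

def validate_url(url: str, schemas_for_type) -> list:
    """Sketch of the /validate-url flow. schemas_for_type is a hypothetical callable
    returning the candidate schemas for an entity type (steps 4 and 5)."""
    entity = httpx.get(url, timeout=10).json()    # 1. fetch JSON from the URL
    payload = keyvalues(entity)                   # 2. normalize to key-values
    entity_type = payload.get("type")             # 3. extract the payload type
    results = []
    for schema in schemas_for_type(entity_type):  # 4.-5. subjects and their schemas
        try:
            jsonschema.validate(payload, schema)  # 6. validate against each schema
            results.append({"schema": schema.get("$id"), "valid": True})
        except jsonschema.ValidationError as err:
            results.append({"schema": schema.get("$id"), "valid": False,
                            "error": err.message})
    return results                                # 7. consolidated results
```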

Demo Script (demo_script2.py)

A simple interactive script that demonstrates the API endpoints by opening a series of pre-configured URLs in your default web browser.

Features

  • Opens each URL in a new browser tab
  • Pauses between URLs for user input
  • Allows early termination with ‘exit’ command
  • Provides clear progress indicators
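A minimal sketch of such a loop, assuming a pre-configured my_web_urls list like the one in demo_script2.py:

```python
import webbrowser

# Hypothetical list of endpoints to demonstrate; edit it to match your API instance
my_web_urls = [
    "http://127.0.0.1:8000/subjects",
    "http://127.0.0.1:8000/datamodels/dataModel.Weather",
]

for index, url in enumerate(my_web_urls, start=1):
    print(f"[{index}/{len(my_web_urls)}] opening {url}")
    webbrowser.open_new_tab(url)
    answer = input("Press Enter for the next URL, or type 'exit' to stop: ")
    if answer.strip().lower() == "exit":
        break
```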

Usage

  1. Configure the URLs in the my_web_urls list
  2. Run the script: python demo_script2.py
  3. Follow the on-screen instructions

The demo includes examples of:

  • Payload validation
  • Subject listing
  • Data model exploration
  • Attribute retrieval
  • Example payloads
  • Search functionality
  • Exact matching
  • Context retrieval

Requirements

  • Python 3.7+
  • FastAPI
  • Uvicorn
  • httpx
  • pydantic
  • jsonschema
  • webbrowser (standard library)

Installation

pip install fastapi uvicorn httpx pydantic jsonschema

Running the API

python pysdm_api3.py

Running the demo

python demo_script2.py

Configuration

Edit the my_web_urls list in demo_script2.py to change which endpoints are demonstrated.

License

Apache 2.0

New script for testing several data models at the same time.

Most of the files of the testing process have been updated, and the source code is now available:

https://github.com/smart-data-models/data-models/tree/master/test_data_model

There is also a new file:

multiple_tests.py

This file enables you to test all the data models located in an internal subject (the subdirectories of the root one). Currently this option is not available as a web form, but if you send us an email at info@smartdatamodels.org we could create a specific form for that.

See here an example of the outcome.
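To give an idea of what multiple_tests.py automates, here is a hypothetical loop that walks the data model subdirectories of a subject and runs the single-model test on each one; test_one_datamodel and its arguments are placeholders, not the real interface, which lives in the repository:

```python
from pathlib import Path

def test_one_datamodel(datamodel_dir: Path) -> bool:
    """Placeholder for the single data model test shipped in test_data_model."""
    # ... run the individual checks on schema.json, the examples, etc. ...
    return True

def test_subject(subject_dir: str) -> dict:
    """Run the single-model test on every data model subdirectory of a subject."""
    results = {}
    for datamodel_dir in sorted(Path(subject_dir).iterdir()):
        if datamodel_dir.is_dir() and (datamodel_dir / "schema.json").exists():
            results[datamodel_dir.name] = test_one_datamodel(datamodel_dir)
    return results

print(test_subject("./dataModel.Weather"))  # hypothetical local clone of a subject
```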

Two new data models TimeSeries and MachineTool

There are two new data models: MachineTool in the OPCUA subject and TimeSeries in the AAS subject.

Thanks to Manfredi Pistone from Engineering for these contributions.


  • MachineTool. MachineTool is a mechanical device which is fixed (i.e. not mobile) and powered (typically by electricity and compressed air), typically used to process workpieces by selective removal/addition of material or mechanical deformation.
  • TimeSeries. Time Series can represent raw data, but can also represent main characteristics, textual descriptions or events in a concise way.

Another tiny improvement on the new testing process (NGSI-LD payloads)

In the new testing process (4th option in the tools menu), a new test is now available that checks whether example-normalized.jsonld is a valid NGSI-LD file.
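As a rough idea of what such a check involves (the real test in the repository is more thorough), a few structural rules already catch most problems: an NGSI-LD entity needs an id, a type, an @context, and attributes in normalized form.

```python
def looks_like_ngsi_ld(entity: dict) -> list:
    """Return a list of problems; an empty list means the basic checks pass.
    Illustrative sketch only, not the actual test from the repository."""
    problems = []
    # Smart Data Models examples use urn:ngsi-ld:... identifiers (the spec requires a URI)
    if not str(entity.get("id", "")).startswith("urn:"):
        problems.append("missing or non-URN 'id'")
    if "type" not in entity:
        problems.append("missing 'type'")
    if "@context" not in entity:
        problems.append("missing '@context'")
    # Every other top-level member should be an attribute in normalized form
    for name, value in entity.items():
        if name in ("id", "type", "@context"):
            continue
        if not (isinstance(value, dict) and value.get("type") in
                ("Property", "GeoProperty", "Relationship", "LanguageProperty")):
            problems.append(f"attribute '{name}' is not in NGSI-LD normalized form")
    return problems
```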

This process helps contributors debug their data models before submitting them officially (where there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Tiny improvement on the new testing process

In the new testing process (4th option in the tools menu), a new test is now available that checks whether example-normalized.json is a valid NGSIv2 file.

This process helps contributors debug their data models before submitting them officially (where there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Improved test method for data models

When you want to contribute a new data model (or an improvement in an existing one) you need to pass a test.

The current process (3rd option in tools menu) keeps on working as it was.

But we have drafted a new method because

– We need to be more explicit about the tests passed and the errors

– We need to improve the performance

So you can check the new method in the 4th option of the Tools menu

Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, because the code is being released. You can also write new tests to be included on the official site: make a PR on the data-models repo and we will add it eventually. Check this post.

New testing process in progress where you can contribute your code

Current test process for new and extended data models

In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the tools menu on the front page:

Pro: it is currently working

Con: it is mostly implemented in a single file, and the error messages are not very explicit about the errors detected

The new process

1) Every test is an independent file.

2) To test a new data model, the files are copied locally and then the tests are run, which is quicker.
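To illustrate the modular idea, this is a hypothetical shape for one independent test file; the real files and their exact interfaces are in the test_data_model directory of the data-models repository:

```python
# Hypothetical shape of one independent test file (e.g. test_schema_descriptions.py);
# each test inspects a locally copied data model and reports an explicit result.
import json
from pathlib import Path

def run_test(datamodel_dir: str) -> dict:
    """Check that every top-level property in schema.json has a description
    (illustrative only; the real tests are richer)."""
    schema = json.loads((Path(datamodel_dir) / "schema.json").read_text())
    missing = [name for name, prop in schema.get("properties", {}).items()
               if isinstance(prop, dict) and "description" not in prop]
    return {
        "test": "schema descriptions",
        "passed": not missing,
        "errors": [f"property '{name}' has no description" for name in missing],
    }
```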

What can you do with basic knowledge of Python (or with a good AI service)?

Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository.

Instructions

Updated all data models to the last version of json schema

NOTE: We made the changes yesterday, 17-9. Unfortunately we made a mistake, so we have to revert all these changes, redo them properly and push again. It will be ready this Friday, if not earlier.

NOTE 2: It is already updated. It's Wednesday, 15:30. Hopefully this time we made no errors.

The single source of truth of the data models is the JSON schema (the schema.json file). This JSON schema has a ‘$schema’ tag indicating the meta-schema it is compliant with.

Now all data models have been updated to the latest one, “https://json-schema.org/draft/2020-12/schema”.
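For instance, the ‘$schema’ tag of an updated schema.json now declares the 2020-12 meta-schema, and the jsonschema Python library selects the matching validator from it; a quick local check (the file path is just an example):

```python
import json
import jsonschema  # assumes jsonschema >= 4, which knows the 2020-12 dialect

with open("schema.json") as f:      # e.g. a locally cloned data model schema
    schema = json.load(f)

print(schema["$schema"])  # -> https://json-schema.org/draft/2020-12/schema

# The library picks the validator class matching the declared meta-schema
validator_cls = jsonschema.validators.validator_for(schema)
validator_cls.check_schema(schema)  # raises SchemaError if the schema itself is invalid
print(validator_cls.__name__)       # Draft202012Validator
```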

Therefore, some errors reported by validators due to the obsolete previous value have disappeared.

Thanks to the GitHub user Elliopardad for their contribution and to the JSON Schema community for its support.

As we announced earlier, we are one of the projects listed in its global landscape of projects.