Tiny improvement to the new testing process

In the new testing process (4th option in the Tools menu), a new test is now available that checks whether example-normalized.json is a valid NGSIv2 file.
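For illustration, here is a minimal sketch of the kind of check this test performs; the function and the exact rules below are ours, not the official code (see the repo for the real test):

```python
import json

# Minimal sketch (not the official test): a normalized NGSIv2 entity must
# have string "id" and "type" fields, and every other top-level key must be
# an attribute object carrying a "value" key.
def looks_like_ngsiv2_normalized(path):
    errors = []
    with open(path, encoding="utf-8") as f:
        entity = json.load(f)
    if not isinstance(entity.get("id"), str):
        errors.append("missing or non-string 'id'")
    if not isinstance(entity.get("type"), str):
        errors.append("missing or non-string 'type'")
    for name, attribute in entity.items():
        if name in ("id", "type"):
            continue
        if not isinstance(attribute, dict) or "value" not in attribute:
            errors.append(f"attribute '{name}' is not a normalized object with 'value'")
    return errors

print(looks_like_ngsiv2_normalized("example-normalized.json") or "valid")
```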

This process helps contributors debug their data models before submitting them officially (where there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve a test or create a new one, just open a PR on the repo.

Improved test method for data models

When you want to contribute a new data model (or an improvement to an existing one), it needs to pass a test.

The current process (3rd option in the Tools menu) keeps working as before.

But we have drafted a new method because:

– We need to be more explicit about which tests passed and which errors were found

– We need to improve performance

So you can check the new method in the 4th option of the Tools menu.

Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, since the code is being released. Indeed, you can write new tests to be included on the official site: make a PR on the data-models repo and we will add it eventually. Check this post.
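As an illustration of that modularity (the module and function names here are hypothetical, not the actual API of the released code), each test could live in its own file and be cherry-picked from your own system:

```python
# Hypothetical sketch: one file per test, each exposing a uniform run()
# entry point, so external systems can select and run only what they need.
from importlib import import_module

SELECTED_TESTS = ["test_schema_descriptions", "test_valid_examples"]  # invented names

def run_selected(model_path):
    results = {}
    for name in SELECTED_TESTS:
        module = import_module(name)               # each test is an independent file
        success, message = module.run(model_path)  # uniform signature (assumed)
        results[name] = {"success": success, "message": message}
    return results

print(run_selected("./dataModel.Weather/WeatherObserved"))
```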

New testing process in progress where you can contribute your code

Current test process for new and extended data models

In order to approve a new data model, a test needs to be passed. It can be accessed in the 3rd option in the Tools menu on the front page:

Pro: it is currently working

Con: it is mostly implemented in a single file, and the error messages are not very explicit about the errors detected

The new process

1) Every test is an independent file.

2) To test a new data model, the files are first copied locally and then the tests are run, which is quicker (see the sketch below).
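A sketch of that second point, assuming a raw-file URL layout like the one GitHub serves (the exact file list and paths are illustrative):

```python
# Sketch of "copy locally, then test": fetch the data model files once and
# let every independent test read the local copies, instead of downloading
# the same files over the network for each test.
import pathlib
import urllib.request

FILES = ["schema.json", "examples/example.json", "examples/example-normalized.json"]

def copy_model_locally(base_raw_url, workdir="./_model_under_test"):
    target = pathlib.Path(workdir)
    for relative in FILES:
        destination = target / relative
        destination.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(f"{base_raw_url}/{relative}", destination)
    return target

local_copy = copy_model_locally(
    "https://raw.githubusercontent.com/smart-data-models/dataModel.Weather/master/WeatherObserved"
)
# ... every test now reads from local_copy, so only the first step hits the network
```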

What can you do with basic knowledge of Python (or with a good AI service)

Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository

Instructions

New subject Smart Data Models and new data model Attribute

Eating your own dog food is, for SDM, a demonstration that agile standardization works

We have created a new subject, SmartDataModels, where the structure of the assets of SDM will be released. We have started with the data model Attribute, based on the global database of attributes (more than 157,000 currently; >100 MB, so it takes a while to download)


  • Attribute. Description of the data model Attribute

Updated all data models to the latest version of JSON Schema

NOTE: We made the changes yesterday, 17-9. Unfortunately, we made a mistake and now we have to revert all these changes, redo them properly, and push. They will be ready this Friday, if not earlier.

NOTE 2: It is already updated. It's Wednesday, 15:30. Hopefully this time we made no errors.

The single source of truth of the data models is the JSON schema (file schema.json). This JSON schema has a tag ‘$schema’ indicating the meta-schema the schema is compliant with.

Now all data models have been updated to the latest one, “https://json-schema.org/draft/2020-12/schema”.

Therefore, some errors reported by validators due to the obsolete previous value no longer appear.
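As a quick check, the Python jsonschema library can confirm the declared meta-schema and validate a schema against it (file paths here are illustrative, and schemas with remote $ref pointers may need extra resolver configuration):

```python
# Sketch: inspect the '$schema' tag and check a schema.json against the
# draft 2020-12 meta-schema (pip install jsonschema).
import json
from jsonschema import Draft202012Validator

with open("schema.json", encoding="utf-8") as f:
    schema = json.load(f)

print(schema.get("$schema"))  # expected: https://json-schema.org/draft/2020-12/schema

# Raises SchemaError if schema.json does not conform to the meta-schema
Draft202012Validator.check_schema(schema)

with open("examples/example.json", encoding="utf-8") as f:
    example = json.load(f)

# Raises ValidationError if the example does not match the schema
Draft202012Validator(schema).validate(example)
print("example validates against schema.json")
```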

Thanks to the GitHub user Elliopardad for their contribution, and to the JSON Schema community for its support.

As we announced earlier, we are one of the projects listed in its global landscape of projects.

pydantic export now available

The /code/ directory in every data model (see the image with one example) now has a new draft export: the pydantic export.

Pydantic is a Python library that provides data validation and settings management using Python type annotations, allowing you to define data models that enforce type constraints and validate data automatically.
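For instance, a tiny model of our own (an illustrative example, not an actual SDM export) shows the kind of validation these exports enable:

```python
# Illustrative pydantic model (not a generated SDM export): the type
# annotations drive automatic coercion and validation of incoming data.
from typing import Optional
from pydantic import BaseModel, ValidationError

class WeatherObserved(BaseModel):
    id: str
    type: str = "WeatherObserved"
    temperature: Optional[float] = None
    relativeHumidity: Optional[float] = None

entity = WeatherObserved(id="urn:ngsi-ld:WeatherObserved:001", temperature="21.5")
print(entity.temperature)  # 21.5 -- the string was coerced to a float

try:
    WeatherObserved(id="urn:ngsi-ld:WeatherObserved:002", temperature="warm")
except ValidationError as error:
    print(error)  # "warm" cannot be parsed as a float
```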

Now most (if not all) data models include such an export for you to use freely. Mind that it is a first version and errors could happen (you are welcome to report any error or just make a suggestion).