Improved test method for data models

When you want to contribute a new data model (or an improvement to an existing one), it needs to pass a test.

The current process (the 3rd option in the Tools menu) keeps working as before.

But we have drafted a new method because:

– We need to be more explicit about the tests passed and the errors detected

– We need to improve performance

You can check the new method in the 4th option of the Tools menu.

Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system (the code is being released), or you can write new tests to be included in the official site. Make a PR on the data-models repo and we will add it eventually. Check this post.

New testing process in progress where you can contribute your code

Current test process for new and extended data models

In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the Tools menu on the front page:

Pro: it currently works.

Con: it is mostly implemented in a single file, and its error messages are not very explicit about the errors detected.

The new process

1) Every test is an independent file.

2) To test a data model, it copies the files locally and then runs the tests, which is quicker.

What you can do with basic knowledge of Python (or with a good AI service)

Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository.

Instructions

This directory contains the decentralized method for testing new and existing data models.

The file master_tests.py executes all the files in the tests directory, as long as they are included in this line of code:

test_files = ["test_valid_json", "test_file_exists", "test_schema_descriptions", "test_schema_metadata", "test_duplicated_attributes"]

so if you create a new test you need to extend this line with your file. Bear in mind these points:

  1. The file you create must contain a function with the same name as the file. For example, the file test_schema_descriptions.py has a function named test_schema_descriptions.
  2. Every function returns 3 values: test_name, success, and output. test_name is the description of the test run; success is a boolean indicating whether the overall test was successful; output contains all the messages for the issues (or the successfully passed tests) in JSON format, to be easily manageable.
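Following those points, a minimal new test file could look like this (a hypothetical tests/test_example_json.py; how master_tests.py passes the repository path to each test is an assumption in this sketch):

```python
import json
import os

# Hypothetical tests/test_example_json.py following the conventions above:
# the function name matches the file name, and it returns
# (test_name, success, output).
def test_example_json(repo_path="."):
    test_name = "Checking that examples/example.json exists and is valid JSON"
    path = os.path.join(repo_path, "examples", "example.json")
    if not os.path.exists(path):
        return test_name, False, {"error": f"{path} not found"}
    try:
        with open(path) as f:
            json.load(f)
        return test_name, True, {"result": "example.json is valid JSON"}
    except json.JSONDecodeError as err:
        return test_name, False, {"error": str(err)}
```

The JSON output makes it easy for master_tests.py (or any other caller) to aggregate results across all tests.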

The file master_tests.py is invoked this way: 'python3 master_tests.py <repo_url_or_local_path> <email> <only_report_errors>'. It expects all the tests to be in the tests subdirectory (as in the repo).

  • '<repo_url_or_local_path>' is the local path or URL of the repository where the data model is located. Either works, because in any case the files are copied locally and removed once the tests have finished. Whether you are going to test one file or all of them, the parameter has to be the root of the directory where the files are located. The expected structure is described in the contribution manual; see for example the file structure of https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved
  • '<email>' is the email of the user running the test
  • '<only_report_errors>' is a boolean (true or 1) to report only the unsuccessful tests

What can you contribute? Lots of tests. Just a few ideas:

  1. Test that the notes.yaml file is a valid yaml file
  2. Test that the ADOPTERS.yaml file is a valid yaml file
  3. Test that the schema validates the files example.json and example.jsonld
  4. Test the file example-normalized.json is a valid NGSIv2 file
  5. Test the file example-normalized.jsonld is a valid NGSI-LD file
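For instance, idea 1 could be sketched like this (a hypothetical tests/test_notes_yaml.py; it uses PyYAML here, while the repository tooling depends on ruamel.yaml, so treat the library choice as an assumption):

```python
import os

import yaml  # PyYAML; an assumption -- the repo tooling uses ruamel.yaml

def test_notes_yaml(repo_path="."):
    """Check that notes.yaml exists and parses as valid YAML, following the
    (test_name, success, output) convention of the tests directory."""
    test_name = "Checking that notes.yaml is a valid YAML file"
    path = os.path.join(repo_path, "notes.yaml")
    if not os.path.exists(path):
        return test_name, False, {"error": "notes.yaml not found"}
    try:
        with open(path) as f:
            yaml.safe_load(f)
        return test_name, True, {"result": "notes.yaml parsed successfully"}
    except yaml.YAMLError as err:
        return test_name, False, {"error": str(err)}
```

The ADOPTERS.yaml check (idea 2) would be nearly identical, just pointing at a different file.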

All data models updated to the latest version of JSON Schema

NOTE: We made the changes yesterday (17-9). Unfortunately we made a mistake, and now we have to revert all these changes, redo them properly, and push. They will be ready this Friday, if not earlier.

NOTE 2: It is already updated (Wednesday, 15:30). Hopefully this time we made no errors.

The single source of truth of the data models is the JSON schema (the file schema.json). This JSON schema has a ‘$schema’ tag indicating the meta-schema it complies with.

Now all data models have been updated to the latest one, “https://json-schema.org/draft/2020-12/schema”.

Therefore, some errors raised by validators due to the obsolete previous value have disappeared.
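Validators pick the right draft from that tag. A quick sketch with the Python jsonschema package (version 4 or later, which supports draft 2020-12):

```python
from jsonschema import validators

# A minimal schema declaring the 2020-12 meta-schema, as the updated
# schema.json files now do.
schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {"temperature": {"type": "number"}},
}

# validator_for() selects the validator class matching the "$schema" tag.
validator_cls = validators.validator_for(schema)
validator_cls.check_schema(schema)  # raises SchemaError on an invalid schema

validator = validator_cls(schema)
print(validator.is_valid({"temperature": 21.5}))  # True
print(validator.is_valid({"temperature": "hot"}))  # False
```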

Thanks to the GitHub user Elliopardad for their contribution, and to the JSON Schema community for its support.

As we announced earlier, we are one of the projects listed in its global landscape of projects.

pydantic export now available

The /code/ directory in every data model (see the image with one example) now has a new draft export: the pydantic export.

Pydantic is a Python library that provides data validation and settings management using Python type annotations, allowing you to define data models that enforce type constraints and validate data automatically.

Most (if not all) data models now have this export for you to use freely. Mind that it is a first version and there could be errors (you are welcome to report any error or just make a suggestion).
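As an illustration of what such an export enables (a hand-written, simplified model in the spirit of the generated files, not the actual export of any data model):

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError

# Simplified, hypothetical model; the real generated exports are richer.
class WeatherObserved(BaseModel):
    id: str
    type: str = "WeatherObserved"
    temperature: Optional[float] = Field(
        None, description="Air temperature observed, in degrees Celsius"
    )

# Valid data is parsed and type-checked automatically.
obs = WeatherObserved(id="urn:ngsi-ld:WeatherObserved:001", temperature=21.5)

# Invalid data raises a ValidationError with an explicit message.
try:
    WeatherObserved(id="urn:ngsi-ld:WeatherObserved:002", temperature="warm")
except ValidationError as err:
    print(err)
```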

The Smart Data Models Initiative Embraces JSON Schema as the Core Component for Interoperable Smart Solutions

The Smart Data Models (SDM) initiative, led by FIWARE Foundation in collaboration with IUDX, TM Forum, and OASC, has firmly established JSON Schema as the core component and single source of truth for creating exports in YAML, SQL, and soon RDF. This strategic move aligns the SDM initiative with the growing JSON schema community, enabling a wider adoption of this powerful data modeling standard.

The SDM initiative is an open collaboration aiming to promote the adoption of a reference architecture and compatible common data models across various sectors, starting with Smart Cities. By leveraging JSON schema as the foundation, the initiative ensures that the data models developed are not only technically robust but also interoperable with a wide range of semantic and linked data initiatives.

“The adoption of JSON schema as the core component of the Smart Data Models initiative is a significant step forward in our mission to enable interoperable smart solutions,” said Alberto Abella (Data Modeling Expert, FIWARE Foundation). “This collaboration with the JSON schema community will further strengthen the initiative and drive the widespread adoption of these common data models.”

In addition to the JSON schema-based data models, the SDM initiative also creates comprehensive specifications in eight languages, including English, French, German, Spanish, Italian, Korean, Chinese, and Japanese. This multilingual approach ensures that the data models are accessible and usable by a global audience, fostering international collaboration and knowledge sharing.

“The alignment of the Smart Data Models initiative with the JSON Schema community is a testament to the power and versatility of this data modeling standard,” said Benjamin Granados (Community Development Senior Manager – Open Technologies, JSON Schema Community, Postman). “We are excited to work closely with the SDM team to further enhance the adoption and integration of JSON schema across various smart applications and services.”

The Smart Data Models initiative welcomes contributions from the public. The data models are licensed under a royalty-free, open-source model, permitting free use, modification, and sharing. This collaborative approach fosters innovation and the creation of interoperable smart solutions, which can be replicated and scaled across various sectors and regions.

For more information about the Smart Data Models initiative and its adoption of JSON schema, please visit the official website at https://smartdatamodels.org or follow the initiative on X @smartdatamodels or on LinkedIn.

Version 0.8 of the pysmartdatamodels package

Due to the new file configuration of the pysmartdatamodels package, it is no longer required to use the from clause (as it initially was).

Therefore, importing the package in Python is now simply:

import pysmartdatamodels as sdm

Accordingly, the code examples in all data models are being changed, including a comment on this version change.

This update of the code examples will be announced soon.

Public tender clause document

Some of the users of the Smart Data Models are public entities. Those entities are willing to use Smart Data Models in the provisioning of their IT systems.

They can do so because SDM are openly licensed models that depend not on any software maker but on public standards, and the license of the data models allows them to customize the models and to share the modifications, only requiring attribution of the authors.

Here you can see and comment on a draft document with some examples of the technical clauses for public tenders (currently only in Spanish and English).

It is open for comments and suggestions.

This document arose as a consequence of a webinar held last May 16th with the Spanish Network of Smart Cities (RECI).

Examples of code associated with every data model

To make the Smart Data Models easier to use, every repository in GitHub now has a new directory named ‘code’ that contains Python code using pysmartdatamodels for the architecture in the image below.

The idea is code that fills several attributes of the data model with suitable values and inserts them into a context broker installed by default.

The code for installing an instance of the context broker is also included as comments in the header of the file.

The code is generated automatically (like most of what we do).

You can see an example of this code here

https://github.com/smart-data-models/dataModel.Transportation/blob/master/EVChargingStation/code/code_for_using_dataModel.Transportation_EVChargingStation.py
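A heavily simplified sketch of that pattern (the entity attributes and the broker endpoint, an Orion instance on localhost:1026, are assumptions for illustration; the real generated files are richer):

```python
import json
import urllib.request

# Build an NGSIv2 entity with a few attributes filled with suitable values.
entity = {
    "id": "urn:ngsi-ld:EVChargingStation:001",
    "type": "EVChargingStation",
    "socketNumber": {"type": "Number", "value": 2},
    "status": {"type": "Text", "value": "working"},
}

# Insert it into a context broker (assumed to be Orion at localhost:1026).
req = urllib.request.Request(
    "http://localhost:1026/v2/entities",
    data=json.dumps(entity).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment when a broker is running
```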

Of course, there are many things that could be improved and extended. Let us know at this mail account or through the usual support channels.

In the future, if there is interest, we could also create examples in other languages. Please let us know if you are interested in this possibility.

New version 0.7.1 of the pysmartdatamodels Python package

The changes in this new version are:

– Including new function validate_dcat_ap_distribution_sdm
– Updating the comments of most of the functions
– Some code improvements by jilin.he@fiware.org
– Included a new directory with templates for the creation of a data model. Not used yet, but in the next version they will be used for the creation of local data models. Available in the my_subject directory
– Fixing the missing dependency of ruamel.yaml package

It also includes an updated version of all data models (you can also get this by running the function sdm.update_data() with the old versions).

The source code for the new version 0.7.1 is here at the data-models repository

pysmartdatamodels updated to 0.7

The new version does not provide new functionality, but an indication, including drafted code, of what is missing or in progress, so that the package can grow according to your needs.

The source code for the new version 0.7.0 is here at the data-models repository

There are 4 new drafted functions, with headings describing their inputs and outputs and some recommendations for development.

1) validate_payload(datamodel, subject, payload)
2) create_QR_code(datamodel, subject)
3) include_local_datamodel(schema, subject, datamodel, contributors (optional), adopters (optional), notes(optional))
4) submit_datamodel(subject, datamodel, contributors (optional), adopters (optional), notes(optional), example_payload, notes_context, public_repository, credentials)

We will be glad to receive code or questions implementing these functions, and we will credit the authors.