Two new data models TimeSeries and MachineTool

There are two new data models: MachineTool in the OPCUA subject and TimeSeries in the AAS subject.

Thanks to Manfredi Pistone from Engineering for the contributions


  • MachineTool. MachineTool is a mechanical device which is fixed (i.e. not mobile) and powered (typically by electricity and compressed air), used to process workpieces by selective removal/addition of material or mechanical deformation.
  • TimeSeries. Time Series can represent raw data, but can also represent main characteristics, textual descriptions or events in a concise way.

Another tiny improvement on the new testing process (NGSI-LD payloads)

In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.jsonld file is a valid NGSI-LD file.
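For illustration only, a minimal sketch of this kind of structural check could look like the code below. The function name check_ngsild_normalized is hypothetical, it only covers the most common attribute types, and the real test in the repo may apply further rules.

import json

def check_ngsild_normalized(path):
    # Illustrative structural check on a normalized NGSI-LD entity (not the official test)
    with open(path, "r", encoding="utf-8") as f:
        entity = json.load(f)
    issues = []
    for key in ("id", "type"):
        if key not in entity:
            issues.append("missing mandatory '" + key + "' field")
    for name, attr in entity.items():
        if name in ("id", "type", "@context"):
            continue
        if not isinstance(attr, dict) or attr.get("type") not in ("Property", "Relationship", "GeoProperty"):
            issues.append("attribute '" + name + "' is not in normalized NGSI-LD form")
    return len(issues) == 0, issues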

This process helps contributors to debug their data models before submitting them officially (there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Tiny improvement on the new testing process

In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.json file is a valid NGSIv2 file.
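Similarly, a minimal illustration of an NGSIv2 structural check could look like the sketch below (again, this is not the official test and the name check_ngsiv2_normalized is hypothetical).

import json

def check_ngsiv2_normalized(path):
    # Illustrative structural check on a normalized NGSIv2 entity (not the official test)
    with open(path, "r", encoding="utf-8") as f:
        entity = json.load(f)
    issues = []
    for key in ("id", "type"):
        if not isinstance(entity.get(key), str):
            issues.append("missing or non-string '" + key + "' field")
    for name, attr in entity.items():
        if name in ("id", "type"):
            continue
        if not isinstance(attr, dict) or "value" not in attr:
            issues.append("attribute '" + name + "' should be an object with a 'value' member")
    return len(issues) == 0, issues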

This process helps contributors to debug their data models before submitting them officially (there will be further tests before final approval).

The source code for the test is available at the repo.

Remember that if you want to improve / create a new test, just create a PR on the repo.

Improved test method for data models

When you want to contribute a new data model (or an improvement to an existing one) you need to pass a test.

The current process (3rd option in the tools menu) keeps working as before.

But we have drafted a new method because:

– We need to be more explicit about the tests passed and the errors

– We need to improve the performance

So you can check the new method in the 4th option of the Tools menu

Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, since the code is being released. You can also write new tests to be included in the official site: make a PR on the data-models repo and we will add it eventually. Check this post.

New testing process in progress where you can contribute your code

Current test process for new and extended data models

In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the tools menu on the front page:

Pro: it is currently working

Con: it is mostly implemented in a single file, and the error messages are not very explicit about the errors detected

The new process

1) Every test is an independent file.

2) To test the new data model, it copies the files locally and then runs the tests, which is quicker.

What can you do with basic knowledge of Python (or with a good AI service)?

Here you can see the current files available in the test_data_model subdirectory of the GitHub repository data-models.

Instructions

This directory contains the decentralized method to test new and existing data models

The file master_tests.py executes all the files in the tests directory as long as they are included in this line of code:

test_files = ["test_valid_json", "test_file_exists", "test_schema_descriptions", "test_schema_metadata", "test_duplicated_attributes"]

So if you create a new test you need to extend this line with your file (as shown in the sketch after the list below). Bear in mind these points:

  1. The file you create has to contain a function with the same name as the file. For example, the file test_schema_descriptions.py has a function named test_schema_descriptions.
  2. Every function returns 3 values: test_name, success and output. test_name is the description of the test run, success is a boolean value indicating whether the overall test has been successful, and output contains all the messages for the issues or successfully passed tests in a JSON format so that they are easily manageable.
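As a sketch, a new test would follow the pattern below. The file name test_my_new_check and its parameter list are hypothetical; mirror the signature used by the existing tests in the tests directory.

# tests/test_my_new_check.py  (hypothetical name)
def test_my_new_check(repo_path):
    # repo_path is an assumed parameter; check the existing tests for the exact signature
    test_name = "Description of what this test checks"
    success = True
    output = []
    # ... run the checks here, appending messages to output and setting success = False on failure ...
    return test_name, success, output

The line in master_tests.py would then be extended like this:

test_files = ["test_valid_json", "test_file_exists", "test_schema_descriptions", "test_schema_metadata", "test_duplicated_attributes", "test_my_new_check"]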

The file master_tests.py is invoked this way: 'python3 master_tests.py <repo_url_or_local_path> <email> <only_report_errors>'. It expects all the tests to be in the tests subdirectory (as in the repo).

  • '<repo_url_or_local_path>' is the local path or URL of the repository where the data model is located. Either works, because in any case the files are copied locally and removed once the tests have finished. Whether you are going to test one file or all of them, this parameter has to be the root of the directory where the files are located. The expected structure is described in the contribution manual; see for example the file structure of https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved
  • '<email>' is the email of the user running the test
  • '<only_report_errors>' is a boolean (true or 1) to show only the unsuccessful tests
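For example (assuming the parameter order above; the email is a placeholder and the repository URL is the WeatherObserved example mentioned earlier):

python3 master_tests.py https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved your.email@example.com true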

What can you contribute? Lots of tests. Here are just a few ideas:

  1. Test that the notes.yaml file is a valid yaml file
  2. Test that the ADOPTERS.yaml file is a valid yaml file
  3. Test that the schema validates the files example.json and example.jsonld
  4. Test the file example-normalized.json is a valid NGSIv2 file
  5. Test the file example-normalized.jsonld is a valid NGSI-LD file
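As an illustration of the first idea, a minimal implementation could look like the sketch below. It assumes the three-value return contract described above and the PyYAML library; the function signature and path handling are simplified assumptions, not the official test.

# tests/test_valid_yaml.py  (hypothetical implementation of idea 1; signature is assumed)
import os
import yaml  # requires the PyYAML package

def test_valid_yaml(repo_path):
    test_name = "Checking that notes.yaml is a valid YAML file"
    success = True
    output = []
    notes_path = os.path.join(repo_path, "notes.yaml")
    if not os.path.isfile(notes_path):
        return test_name, False, [{"error": "notes.yaml not found in " + repo_path}]
    try:
        with open(notes_path, "r", encoding="utf-8") as f:
            yaml.safe_load(f)
        output.append({"ok": "notes.yaml parsed successfully"})
    except yaml.YAMLError as err:
        success = False
        output.append({"error": "notes.yaml is not valid YAML: " + str(err)})
    return test_name, success, output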

New subject Smart Data Models and new data model Attribute

Eating your own dog food is, for SDM, a demonstration that agile standardization works.

We have created a new subject, SmartDataModels, where the structure of the assets of SDM will be released. We have started with the data model Attribute, according to the global database of attributes (more than 157,000 currently; it is >100 MB, so it takes a while to download).

 

  • Attribute. Description of the data model Attribute