Current test process for new and extended data models
In order to approve a new data model, a test needs to be passed. It can be accessed through the 3rd option in the tools menu on the front page:
Pro: it is currently working
Con: it is mostly implemented in a single file, and the error messages are not very explicit about the issues detected
The new process
1) Every test is an independent file.
2) To test a new data model, the files are first copied locally and then the tests are run, which is quicker (a minimal sketch of this idea follows).
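As an illustration only, here is a minimal sketch of the "copy locally, run, then clean up" idea, assuming the data model is available at a local path. The helper name is hypothetical and this is not the actual master_tests.py code.

# Hypothetical helper illustrating the "copy locally, run, then clean up" idea.
# It is not the actual master_tests.py implementation.
import shutil
import tempfile
from pathlib import Path

def copy_data_model_locally(source_path):
    """Copy the data model files into a temporary directory and return its path."""
    temp_dir = tempfile.mkdtemp(prefix="sdm_test_")
    for item in Path(source_path).iterdir():
        if item.is_file():
            shutil.copy(item, temp_dir)
    return temp_dir

# ... run the tests against temp_dir ...
# shutil.rmtree(temp_dir)  # the local copy is removed once the tests have finished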
What you can do with basic knowledge of Python (or with a good AI service)
Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository
Instructions
This directory contains the decentralized method to test new and existing data models
The file master_tests.py executes all the files in the tests directory as long as they are included in this line of code
test_files = ["test_valid_json", "test_file_exists", "test_schema_descriptions", "test_schema_metadata", "test_duplicated_attributes"]
so if you create a new test, you need to extend this line with your file name. Bear in mind these points:
- The file you create has to contain a function with the same name as the file. For example, the file test_schema_descriptions.py has a function named test_schema_descriptions.
- Every function returns 3 values: test_name, success and output. test_name is the description of the test run, success is a boolean indicating whether the overall test has been successful, and output contains all the messages for the issues found or the tests passed, in JSON format so they are easy to manage. A sketch of such a test file is shown after this list.
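For illustration, a minimal sketch of what one of these test files could look like. The function signature and the structure of output are assumptions; the real test_valid_json.py may differ.

# tests/test_valid_json.py - illustrative sketch, not the real implementation
import json
import os

def test_valid_json(repo_path, only_report_errors=False):
    """Check that every .json file in the data model directory parses as valid JSON."""
    test_name = "Checking that the JSON files are valid"
    success = True
    output = []
    for file_name in os.listdir(repo_path):
        if not file_name.endswith(".json"):
            continue
        try:
            with open(os.path.join(repo_path, file_name), "r", encoding="utf-8") as f:
                json.load(f)
            if not only_report_errors:
                output.append({"file": file_name, "result": "valid JSON"})
        except json.JSONDecodeError as error:
            success = False
            output.append({"file": file_name, "error": str(error)})
    return test_name, success, output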
The file master_tests.py is invoked this way
'python3 master_tests.py <repo_url_or_local_path> <email> <only_report_errors>'. It expects all the tests to be in the subdirectory tests (as in the repo).
- '<repo_url_or_local_path>' is the local path or URL of the repository where the data model is located. Either works, because in both cases the files are copied locally and removed once the tests have finished. Whether you are going to test one file or all of them, this parameter has to point to the root of the directory where the files are located. The expected structure is described in the contribution manual, for example https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved

- '<email>' is the email of the user running the test
- '<only_report_errors>' is a boolean (true or 1) to show only the unsuccessful tests
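To make the flow clearer, here is an assumed sketch of how master_tests.py could import and run each entry of test_files and combine the results. The argument handling and the test function signature are guesses, not the real code.

# Illustrative sketch of the dispatch loop (assumed, not the real master_tests.py)
import importlib
import json
import sys

test_files = ["test_valid_json", "test_file_exists", "test_schema_descriptions",
              "test_schema_metadata", "test_duplicated_attributes"]

def run_all_tests(repo_path, only_report_errors):
    results = []
    for module_name in test_files:
        module = importlib.import_module(f"tests.{module_name}")
        # each module exposes a function with the same name as the file
        test_function = getattr(module, module_name)
        test_name, success, output = test_function(repo_path, only_report_errors)
        if only_report_errors and success:
            continue
        results.append({"test": test_name, "success": success, "output": output})
    return results

if __name__ == "__main__":
    repo_url_or_local_path = sys.argv[1]
    email = sys.argv[2]  # email of the user running the test (see the parameter list above)
    only_report_errors = sys.argv[3].lower() in ("true", "1")
    print(json.dumps(run_all_tests(repo_url_or_local_path, only_report_errors), indent=2))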
What you can contribute
Lots of tests. Here are just a few ideas:
- Test that the notes.yaml file is a valid YAML file
- Test that the ADOPTERS.yaml file is a valid YAML file
- Test that the schema validates the files example.json and example.jsonld
- Test that the file example-normalized.json is a valid NGSIv2 file
- Test that the file example-normalized.jsonld is a valid NGSI-LD file
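As a starting point, here is a hedged sketch of how the first of these ideas could be implemented, following the conventions above (the function name matches the file name and it returns test_name, success and output). The file name, function name and PyYAML usage are assumptions, not part of the repository yet.

# tests/test_notes_yaml.py - hypothetical contributed test, not part of the repo yet
import os
import yaml  # PyYAML

def test_notes_yaml(repo_path, only_report_errors=False):
    """Check that notes.yaml exists in the data model directory and is valid YAML."""
    test_name = "Checking that notes.yaml is a valid YAML file"
    notes_path = os.path.join(repo_path, "notes.yaml")
    if not os.path.isfile(notes_path):
        return test_name, False, [{"file": "notes.yaml", "error": "file not found"}]
    try:
        with open(notes_path, "r", encoding="utf-8") as f:
            yaml.safe_load(f)
    except yaml.YAMLError as error:
        return test_name, False, [{"file": "notes.yaml", "error": str(error)}]
    output = [] if only_report_errors else [{"file": "notes.yaml", "result": "valid YAML"}]
    return test_name, True, output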