This file enables you to test all the data models located in an internal subject (the subdirectories of the root one). Currently this option is not available as a form, but you can request it by sending an email to info@smartdatamodels.org
In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.jsonld is a valid NGSI-LD file.
This process helps contributors to debug their data models before submitting them officially (where there will be new tests before final approval)
The source code for the test is available at the repo.
Remember that if you want to improve / create a new test, just create a PR on the repo.
In the new testing process (4th option in the tools menu), a new test is now available that checks whether the example-normalized.json is a valid NGSIv2 file.
This process helps contributors to debug their data models before submitting them officially (where there will be new tests before final approval)
The source code for the test is available at the repo.
Remember that if you want to improve / create a new test, just create a PR on the repo.
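To give an idea of what such a validity check involves, here is a minimal sketch that verifies the basic structure of a normalized NGSIv2 example. This is illustrative only and is not the project's actual test code, which is far more thorough:

import json

# Rough structural check of a normalized NGSIv2 example.
# Illustrative sketch only: the official test lives in the data-models repo.
def looks_like_normalized_ngsiv2(path):
    with open(path) as f:
        entity = json.load(f)
    issues = []
    # A normalized entity carries 'id' and 'type' as plain strings...
    for field in ("id", "type"):
        if not isinstance(entity.get(field), str):
            issues.append(f"missing or invalid '{field}'")
    # ...and every other attribute as an object with at least a 'value'
    for name, attr in entity.items():
        if name in ("id", "type"):
            continue
        if not isinstance(attr, dict) or "value" not in attr:
            issues.append(f"attribute '{name}' is not in normalized form")
    return issues

print(looks_like_normalized_ngsiv2("examples/example-normalized.json"))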
Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, since the code is released; or indeed you can write new tests to be included in the official site. Make a PR on the data-models repo and we will add it eventually. Check this post.
Current test process for new and extended data models
In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the tools menu on the front page:
Pro: it is currently working
Con: it is mostly implemented in a single file, and its error messages are not very explicit about the errors detected
The new process
1) Every test is an independent file:
2) To test the new data model, it copies the files locally and then runs the tests, which is quicker.
What can you do with basic knowledge of Python (or with a good AI service)
Here you can see the current files available in the test_data_model subdirectory of the data-models GitHub repository
Instructions
Smart Data Models Validator
This Python script validates the structure and contents of a directory (a local folder or a remote repository) containing the basic files for an official Smart Data Model, i.e. the data model and its supporting files. It checks the presence and correctness of the JSON schema, examples, and YAML documentation using a set of predefined tests according to the contribution manual.
🚀 Features
Supports both GitHub URLs and local paths
Downloads all the required files like schema.json, examples/*.json, ADOPTERS.yaml, and more
Runs a series of validation tests and outputs structured JSON results
Configuration-driven paths for results and downloads
Parallel file downloading for GitHub sources
Cleanup of temporary files after execution
🧪 How to Use
📦 Prerequisites
Python 3.6 or newer
requests library (pip install requests)
📁 Configuration
Configure the script by editing the config.json file, which has the following structure:
{
"results_dir": "Put a local directory where the script can write, and it will store the results for the tests",
"results_dir_help": "Nothing to edit here it is just instructions",
"download_dir": "Put a local directory where the files being tested can be temporary stored (they are removed by the end of the test)",
"download_dir_help": "Nothing to edit here it is just instructions"
}
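For illustration, a filled-in config.json (with placeholder paths of my choosing) could look like this; the two *_help entries can be left as they are:

{
  "results_dir": "/tmp/sdm_results",
  "results_dir_help": "Nothing to edit here, it is just instructions",
  "download_dir": "/tmp/sdm_downloads",
  "download_dir_help": "Nothing to edit here, it is just instructions"
}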
'<repo_url_or_local_path>' is the local path or URL of the repository where the data model is located. Either works, because in any case the files are copied locally and removed once the tests have finished. Whether you are going to test one file or all of them, the parameter of the function has to be the root of the directory where the files are located. The expected structure is described in the contribution manual. For example https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved
'<email>' is the email of the user running the test
'<only_report_errors>' is a boolean (true or 1) to report only the unsuccessful tests
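Assuming the parameters are passed in the order just described (this ordering is my assumption; check the repository README for the exact syntax), a typical run would look like:

python3 master_tests.py https://github.com/smart-data-models/dataModel.Weather/tree/master/WeatherObserved user@example.com true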
The file master_tests.py executes all the files in the tests directory, as long as they are included in this line of code
so if you create a new test you need to extend this line with your file. Bear in mind these points:
The file you create has to contain a function with the same name as the file. For example, the file test_schema_descriptions.py has a function named test_schema_descriptions.
Every function returns 3 values:
test_name: the description of the test run.
success: a boolean value indicating whether the overall test has been successful.
output: all the messages for the issues or successfully passed checks, in JSON format so they are easy to manage.
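Putting these conventions together, a new test module would look roughly like the sketch below. Only the naming rule and the three return values come from the description above; the function signature and the placeholder check are assumptions:

# test_my_new_check.py -- the function must share the file's name
def test_my_new_check(repo_path, options=None):
    test_name = "Description of what this test verifies"
    success = True   # overall result of the test
    output = []      # JSON-friendly messages, one per check

    # Placeholder check: a real test would inspect files under repo_path
    output.append({"status": "ok", "message": "placeholder check passed"})

    return test_name, success, output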
🔍 Smart Data Models Multi-Data Model Validator
This script automates the validation of multiple data models within a GitHub repository or a local directory by invoking the master_tests.py script on each one.
📦 Overview
Automatically lists first-level subdirectories of a specified GitHub folder
Executes master_tests.py for each subdirectory
Aggregates all validation results into a single timestamped JSON file
Supports filtering to report only errors
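In outline, the loop does something like the sketch below. This is written under my assumptions about the argument order of master_tests.py and is not the validator's actual code:

import json
import subprocess
from datetime import datetime
from pathlib import Path

root = Path("/path/to/local/subject")  # one subdirectory per data model
results = {}

# Run master_tests.py on every first-level subdirectory
for subdir in sorted(p for p in root.iterdir() if p.is_dir()):
    run = subprocess.run(
        ["python3", "master_tests.py", str(subdir), "user@example.com", "true"],
        stdout=subprocess.PIPE, universal_newlines=True,
    )
    results[subdir.name] = run.stdout

# Aggregate everything into a single timestamped JSON file
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
Path(f"results_{stamp}.json").write_text(json.dumps(results, indent=2))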
🧰 Requirements
Python 3.6+
Dependencies:
requests (for GitHub API)
master_tests.py must be available in the same directory and executable
NOTE: We made the changes yesterday (17-9). Unfortunately we made a mistake, and now we have to revert all these changes, redo them properly, and push again. They will be ready this Friday, if not earlier.
NOTE2: It is already updated. It's Wednesday, 15:30. Hopefully this time we made no errors.
The single source of truth for the data models is the JSON schema (the file schema.json). This JSON schema has a '$schema' tag indicating the meta-schema the schema is compliant with.
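For instance, the top of a schema.json declares the meta-schema like this (the exact URI depends on the draft each data model uses), followed by the rest of the schema:

{
  "$schema": "http://json-schema.org/schema#"
}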
The directory /code/ (see the image with one example) in every data model now has a new draft export: the Pydantic export.
Pydantic is a Python library that provides data validation and settings management using Python type annotations, allowing you to define data models that enforce type constraints and validate data automatically.
Now in most (if not all) data models you have such an export, free to use. Mind that it is a first version and errors could happen (you are welcome to report any error or just make a suggestion).
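To give a flavor, here is a minimal hand-written model in the Pydantic style. The field names are illustrative only; the real exports under /code/ cover the full schema of each data model:

from typing import Optional
from pydantic import BaseModel, Field

class WeatherObserved(BaseModel):
    # Illustrative subset of fields; a generated export covers the full schema
    id: str = Field(description="Unique identifier of the entity")
    type: str = Field(default="WeatherObserved", description="Entity type")
    temperature: Optional[float] = Field(None, description="Air temperature")
    dateObserved: Optional[str] = Field(None, description="Date of the observation")

# Validation happens automatically on instantiation:
entity = WeatherObserved(id="urn:ngsi-ld:WeatherObserved:001", temperature=21.5)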