There is a new data model in the subject of Environment: CarbonFootprint. It is a data model to represent the carbon footprint in CO2 equivalents.
In the data-models repository you can access the first version of Smart Data Models as a service, thanks to the work done for the Cyclops project.
The files available provide a wrapper around the pysmartdatamodels package and also add a service for the online validation of NGSI-LD payloads.
Here are the README contents explaining this first version.
This project consists of two main components:
- pysdm_api3.py: a RESTful API that provides access to Smart Data Models functionality
- demo_script2.py: a script that demonstrates the API endpoints

pysdm_api3.py
A RESTful API that interfaces with the pysmartdatamodels library to provide access to Smart Data Models functionality.
| Endpoint | Method | Description |
|---|---|---|
| /validate-url | GET | Validate a JSON payload from a URL against Smart Data Models |
| /subjects | GET | List all available subjects |
| /datamodels/{subject_name} | GET | List data models for a subject |
| /datamodels/{subject_name}/{datamodel_name}/attributes | GET | Get attributes of a data model |
| /datamodels/{subject_name}/{datamodel_name}/example | GET | Get an example payload of a data model |
| /search/datamodels/{name_pattern}/{likelihood} | GET | Search for data models by approximate name |
| /datamodels/exact-match/{datamodel_name} | GET | Find a data model by exact name |
| /subjects/exact-match/{subject_name} | GET | Check if a subject exists by exact name |
| /datamodels/{datamodel_name}/contexts | GET | Get @context(s) for a data model name |
The /validate-url endpoint performs comprehensive validation of the submitted payload.
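Assuming the API is served locally by uvicorn on port 8000 (an assumption; the README does not state the default address), the endpoint URLs above can be assembled like this. The `url` query-parameter name for /validate-url is also an assumption for the sketch:

```python
# Sketch of building request URLs for the API above. The base address and the
# "url" query-parameter name are assumptions, not documented values.
from urllib.parse import urlencode

BASE = "http://localhost:8000"  # assumed default uvicorn address

def validate_url_endpoint(payload_url: str) -> str:
    """Build the GET URL for /validate-url with the payload URL as a query parameter."""
    return f"{BASE}/validate-url?{urlencode({'url': payload_url})}"

def datamodel_example_endpoint(subject: str, datamodel: str) -> str:
    """Build the GET URL for a data model's example payload."""
    return f"{BASE}/datamodels/{subject}/{datamodel}/example"

# The resulting strings can then be fetched with any HTTP client,
# e.g. urllib.request.urlopen(validate_url_endpoint("https://.../payload.json"))
```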
demo_script2.py
A simple interactive script that demonstrates the API endpoints by opening a series of pre-configured URLs (the my_web_urls list) in your default web browser. Run it with:
python demo_script2.py
The demo includes examples of the endpoints listed above.
Install the required packages, then start the API and run the demo:

pip install fastapi uvicorn httpx pydantic jsonschema
python pysdm_api3.py
python demo_script2.py
Edit the my_web_urls list in demo_script2.py to change which endpoints are demonstrated.
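As a sketch of that pattern (the URLs below are illustrative placeholders, not the actual contents of my_web_urls):

```python
# Minimal sketch of the demo pattern: a my_web_urls-style list opened in the
# default browser. The specific URLs are assumptions for illustration.
import webbrowser

my_web_urls = [
    "http://localhost:8000/subjects",
    "http://localhost:8000/datamodels/dataModel.Environment",
    "http://localhost:8000/datamodels/dataModel.Environment/CarbonFootprint/example",
]

def open_demo_urls(urls=my_web_urls):
    """Open each pre-configured endpoint in the default web browser."""
    for url in urls:
        webbrowser.open(url)

if __name__ == "__main__":
    open_demo_urls()
```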
License: Apache 2.0
Most of the files of the testing process have been updated, and the source code is now available:
https://github.com/smart-data-models/data-models/tree/master/test_data_model
There is also a new file.
This file enables you to test all the data models located in an internal subject (subdirectories of the root one). This option is not yet available as a form, but if you send us an email at info@smartdatamodels.org
we can create a specific form for it.
See an example of the outcome here.
In the new testing process (4th option in the Tools menu), a new test is now available that checks whether the example-normalized.jsonld is a valid NGSI-LD file.
This process helps contributors debug their data models before submitting them officially (there will be further tests before final approval).
The source code for the test is available at the repo.
Remember that if you want to improve / create a new test, just create a PR on the repo.
In the new testing process (4th option in the Tools menu), a new test is now available that checks whether the example-normalized.json is a valid NGSIv2 file.
This process helps contributors debug their data models before submitting them officially (there will be further tests before final approval).
The source code for the test is available at the repo.
Remember that if you want to improve / create a new test, just create a PR on the repo.
When you want to contribute a new data model (or an improvement in an existing one) you need to pass a test.
The current process (3rd option in tools menu) keeps on working as it was.
But we have drafted a new method because:
– We need to be more explicit about the tests passed and the errors
– We need to improve the performance
So you can check the new method in the 4th option of the Tools menu
Besides this, the tests are very modular, so if you are a Python programmer you can use them in your own system, since the code has been released; or indeed you can write new tests to be included in the official site. Make a PR on the data-models repo and we will add it eventually. Check this post.
Current test process for new and extended data models
In order to approve a new data model, a test needs to be passed. It can be accessed via the 3rd option in the tools menu on the front page:
Pro: it is currently working.
Con: it is mostly implemented in a single file, and error messages are not very explicit about the errors detected.
The new process
1) Every test is an independent file.
2) To test a new data model, it copies the files locally and then runs the tests, which is quicker.
What can you do with basic knowledge of python (or with a good AI service)
Here you can see the current files available in the github repository data-models subdirectory test_data_model
Instructions
This Python script validates the structure and contents of a local directory containing the basic files for an official Smart Data Model, i.e. standard data models and supporting files. It checks the presence and correctness of JSON schemas, examples, and YAML documentation using a set of predefined tests according to the contribution manual.
- Files checked: schema.json, examples/*.json, ADOPTERS.yaml, and more
- Requires the requests library (pip install requests)

You need to configure the script by editing the config.json file. This is its content:
{
"results_dir": "Put a local directory where the script can write, and it will store the results for the tests",
"results_dir_help": "Nothing to edit here it is just instructions",
"download_dir": "Put a local directory where the files being tested can be temporary stored (they are removed by the end of the test)",
"download_dir_help": "Nothing to edit here it is just instructions"
}
The file master_tests.py can be called:
python3 master_tests.py <directory_root> <email> <only_report_errors> [--published true|false] [--private true|false] [--output output.json]
The file master_tests.py executes all the files in the tests directory, as long as they are included in this list in the code:
test_files = [
"test_file_exists",
"test_valid_json",
"test_yaml_files",
"test_schema_descriptions",
"test_schema_metadata",
"test_string_incorrect",
"test_valid_keyvalues_examples",
"test_valid_ngsiv2",
"test_valid_ngsild",
"test_duplicated_attributes",
"test_array_object_structure",
"test_name_attributes"
]
so if you create a new test, you need to extend this list with your file name.
This script automates the validation of multiple data models within a GitHub repository or a local directory by invoking the master_tests.py script on each one.

- It runs master_tests.py for each subdirectory
- It requires requests (for the GitHub API)
- master_tests.py must be available in the same directory and executable

Install required Python packages if needed:
pip install requests
python3 multiple_tests.py <subject_root> <email> <only_report_errors>
| Parameter | Description |
|---|---|
| subject_root | Full GitHub URL or local directory containing the subfolders to be tested |
| email | Email used in naming result files and tagging output |
| only_report_errors | true or false (case-insensitive); limits output to failed tests only |
The script creates a JSON file named like test_results_YYYY-MM-DD_HH-MM-SS.json containing all subdirectory test results.
Each entry in the output JSON contains the test results for one subdirectory.
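Since timestamps in that file-name format sort lexicographically, picking the most recent results file can be sketched as follows (the pattern is taken from the naming scheme above; the helper itself is not part of the repository):

```python
# Sketch: find the newest results file by the timestamp embedded in its name,
# following the test_results_YYYY-MM-DD_HH-MM-SS.json pattern described above.
import re

PATTERN = re.compile(r"test_results_(\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})\.json$")

def latest_results_file(filenames):
    """Return the filename with the most recent embedded timestamp, or None."""
    stamped = [(m.group(1), name) for name in filenames
               if (m := PATTERN.search(name))]
    # Timestamps in this format sort correctly as plain strings.
    return max(stamped)[1] if stamped else None
```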
For questions, bug reports, or feedback, please use the Issues tab or contact: 📧 info@smartdatamodels.org
NOTE: We made the changes yesterday, 17-9. Unfortunately we made a mistake and now we have to revert all these changes, redo them properly, and push. They will be ready this Friday, if not earlier.
NOTE 2: It is already updated. It's Wednesday, 15:30. Hopefully this time we made no errors.
The single-source-of-truth of the data models is the json schema (file schema.json). This json schema has a tag ‘$schema’ indicating the meta schema the schema is compliant with.
Now all data models have been updated to the latest one, “https://json-schema.org/draft/2020-12/schema”.
Therefore some errors reported by validators due to the obsolete previous value have been removed.
Thanks to the user Elliopardad on GitHub for their contribution and to the JSON Schema community for its support.
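For reference, the top of a compliant schema.json now declares the new meta schema; the $id shown here is illustrative, not taken from a specific data model:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://smart-data-models.github.io/dataModel.Example/Example/schema.json",
  "type": "object"
}
```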
As we announced earlier, we are one of the projects listed in the JSON Schema community's global landscape of projects.
The directory /code/ (see image with one example) in every data model now has a new draft export: the Pydantic export.
Pydantic is a Python library that provides data validation and settings management using Python type annotations, allowing you to define data models that enforce type constraints and validate data automatically.
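As a minimal illustration of that idea (this is a toy model invented for the example, not one of the generated SDM exports):

```python
# Toy Pydantic model: type annotations drive automatic validation.
# The Sensor class and its fields are invented for illustration only.
from pydantic import BaseModel, ValidationError

class Sensor(BaseModel):
    id: str
    value: float

# Valid data is accepted and coerced to the annotated types.
ok = Sensor(id="urn:ngsi-ld:Sensor:001", value=21.5)

# Invalid data raises a ValidationError instead of passing silently.
try:
    Sensor(id="urn:ngsi-ld:Sensor:002", value="not a number")
    rejected = False
except ValidationError:
    rejected = True
```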
Now in most (if not all) data models you have such an export to use freely. Mind that it is a first version and errors could happen (you are welcome to report any error or just make a suggestion).
The Smart Data Models (SDM) initiative, led by FIWARE Foundation in collaboration with IUDX, TM Forum, and OASC, has firmly established JSON Schema as the core component and single source of truth for creating exports in YAML, SQL, and soon RDF. This strategic move aligns the SDM initiative with the growing JSON schema community, enabling a wider adoption of this powerful data modeling standard.
The SDM initiative is an open collaboration aiming to promote the adoption of a reference architecture and compatible common data models across various sectors, starting with Smart Cities. By leveraging JSON schema as the foundation, the initiative ensures that the data models developed are not only technically robust but also interoperable with a wide range of semantic and linked data initiatives.
“The adoption of JSON schema as the core component of the Smart Data Models initiative is a significant step forward in our mission to enable interoperable smart solutions,” said Alberto Abella (Data Modeling Expert, FIWARE Foundation). “This collaboration with the JSON schema community will further strengthen the initiative and drive the widespread adoption of these common data models.”
In addition to the JSON schema-based data models, the SDM initiative also creates comprehensive specifications in eight languages, including English, French, German, Spanish, Italian, Korean, Chinese, and Japanese. This multilingual approach ensures that the data models are accessible and usable by a global audience, fostering international collaboration and knowledge sharing.
“The alignment of the Smart Data Models initiative with the JSON Schema community is a testament to the power and versatility of this data modeling standard,” said Benjamin Granados (Community Development Senior Manager – Open Technologies, JSON Schema Community, Postman). “We are excited to work closely with the SDM team to further enhance the adoption and integration of JSON schema across various smart applications and services.”
The Smart Data Models initiative welcomes contributions from the public. The data models are licensed under a royalty-free, open-source model, permitting free use, modification, and sharing. This collaborative approach fosters innovation and the creation of interoperable smart solutions, which can be replicated and scaled across various sectors and regions.
For more information about the Smart Data Models initiative and its adoption of JSON Schema, please visit the official website at https://smartdatamodels.org or follow the initiative on X @smartdatamodels or on LinkedIn.