The data models, available for Python developers: pysmartdatamodels 0.5.40 published (beta version).

The Python package pysmartdatamodels now provides 13 functions for integrating the data models (more than 800) into external systems and applications. It is a beta version. One function, update_data(), updates the data models to the latest version whenever it is run (including adding the new data models). The code is available in the utils directory.
This Python package includes all the data models and several functions (listed below) for use in your developments.

If you want to stay updated on this package, you can join this mailing list (announcements are sent only when something relevant happens). We would love to get your feedback at

There are several online tools to manage and create the data models, generate examples, or adapt them to existing ontologies. See the Tools menu option at the home site.

Functions available include:

1- List all data models. Function list_all_datamodels()
2- List all subjects. Function list_all_subjects()
3- List the data models of a subject. Function datamodels_subject(subject)
4- List description of an attribute. Function description_attribute(subject, datamodel, attribute)
5- List data-type of an attribute. Function datatype_attribute(subject, datamodel, attribute)
6- Give reference model for an attribute. Function model_attribute(subject, datamodel, attribute)
7- Give reference units for an attribute. Function units_attribute(subject, datamodel, attribute)
8- List the attributes of a data model. Function attributes_datamodel(subject, datamodel)
9- List the NGSI type (Property, Relationship, or GeoProperty) of an attribute. Function ngsi_datatype_attribute(subject, datamodel, attribute)
10- Print a data model's attributes separated by a separator. Function print_datamodel(subject, datamodel, separator, meta_attributes)
11- Returns the link to the repository of a subject. Function subject_repolink(subject)
12- Returns the links to the repositories of a data model name. Function datamodel_repolink(datamodel)
13- Update the official data model list or the database of attributes from the source. Function update_data()


Next steps:

1.- Create proper documentation

2.- Function to allow submission of improvements (e.g. missing recommended units or models) and comments on the different data models. Currently, you can do this by locating your data model in the GitHub repo and making your PR or raising your issues there.
3.- Function to submit a new data model to an incubation repository. Currently, this is done manually in the incubated repository. By filling in this form you are granted permission to contribute new data models.
4.- Include new functions, like searching for the subject of a data model, or others that you can suggest to us at

### Some example code

```python
from pysmartdatamodels import pysmartdatamodels as sdm

subject = "dataModel.Weather"
dataModel = "WeatherForecast"
attribute = "precipitation"

print(sdm.description_attribute(subject, dataModel, attribute))
print(sdm.datatype_attribute(subject, dataModel, attribute))
print(sdm.model_attribute(subject, dataModel, attribute))
print(sdm.units_attribute(subject, dataModel, attribute))
print(sdm.attributes_datamodel(subject, dataModel))
print(sdm.print_datamodel(subject, dataModel, ",", ["property", "type", "dataModel", "repoName", "description", "typeNGSI", "modelTags", "format", "units", "model"]))
```
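Conceptually, most of these lookup functions reduce to filtering the attributes database by subject, data model, and attribute name. Here is a self-contained sketch of that idea; the sample records and the `lookup()` helper are illustrative only, not the package's actual implementation:

```python
# Illustrative sketch: how a lookup like description_attribute() can be
# implemented over an attributes database. The sample records are made up.
SAMPLE_DB = [
    {"repoName": "dataModel.Weather", "dataModel": "WeatherForecast",
     "property": "precipitation", "description": "Amount of water rain expected",
     "type": "number", "units": "l/m2"},
    {"repoName": "dataModel.Weather", "dataModel": "WeatherForecast",
     "property": "temperature", "description": "Temperature of the item",
     "type": "number", "units": "celsius"},
]

def lookup(subject, datamodel, attribute, field):
    """Return one field of a matching attribute, or None if not found."""
    for record in SAMPLE_DB:
        if (record["repoName"] == subject
                and record["dataModel"] == datamodel
                and record["property"] == attribute):
            return record.get(field)
    return None

print(lookup("dataModel.Weather", "WeatherForecast", "precipitation", "units"))
# prints "l/m2"
```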

Release of the script for subjects' @context consolidation.

The utils directory of the data models repository compiles some scripts we use internally.

A script is now available for consolidating the @contexts from several subjects.

It is the script behind the menu option Home -> Tools -> Subjects' @context merger.

The help text of the script:

```
# This file takes several @contexts and merges them, creating two files:
# context.jsonld, with the elements successfully merged,
# and conflicts.json, which shows those attributes clashing.
# Clashes in the conflicts file have to be solved manually.
# INPUT PARAMETERS (merge_subjects_context.json, outputToFile)
# parameter merge_subjects_context.json
#   It is the full path to a file where the paths to the @contexts are located.
#   See an example of the file below.
#   If not provided, it merges all subjects in the Smart Data Models program.
#   {
#     "dataModel.Weather": "",
#     "dataModel.Battery": "",
#     "dataModel.Building": "",
#     "dataModel.Device": ""
#   }
# parameter outputToFile
#   When True it outputs two files, conflicts.json and context.jsonld.
#   conflicts.json stores the conflicts in the names of attributes (to be solved manually).
#   context.jsonld has the attribute names and the Smart Data Models local IRIs.
```
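The merge logic the help text describes can be sketched in a few lines. This is a minimal illustration, not the actual script: identical term definitions merge cleanly, while diverging definitions are moved to a conflicts map that would have to be solved manually. The @context fragments below are hypothetical.

```python
# Sketch: merge several JSON-LD @context term maps, separating clean merges
# from clashing terms (as context.jsonld vs. conflicts.json would).
def merge_contexts(contexts):
    merged, conflicts = {}, {}
    for ctx in contexts:
        for term, iri in ctx.items():
            if term in conflicts:
                if iri not in conflicts[term]:
                    conflicts[term].append(iri)
            elif term not in merged:
                merged[term] = iri
            elif merged[term] != iri:
                # Clashing definitions: move the term to the conflicts map.
                conflicts[term] = [merged.pop(term), iri]
    return merged, conflicts

# Hypothetical @context fragments for two subjects
weather = {"temperature": "https://smartdatamodels.org/dataModel.Weather/temperature"}
device = {"temperature": "https://smartdatamodels.org/dataModel.Device/temperature",
          "batteryLevel": "https://smartdatamodels.org/dataModel.Device/batteryLevel"}

merged, conflicts = merge_contexts([weather, device])
print(sorted(merged))     # ['batteryLevel']
print(sorted(conflicts))  # ['temperature']
```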

MAS: Manifesto for Agile Standardization

The Manifesto for Agile Standardization (MAS) describes the 7 principles that we apply to the Smart Data Models program.

0. Don't just standardize, be agile and standardize
1. Do not reinvent the wheel
2. Normalize real cases
3. Be open
4. Don't be overly specific
5. Flat not Deep
6. Sustainability is key

If you want to read the complete explanation of the agile standardization manifesto, a one-page document is located at the root of our data models repository.


Export of the full database of attributes

At Home -> Search -> JSON export of attributes database of Smart Data Models you can find the full database of attributes (more than 18,000; see the statistics page) as an array of JSON objects.

Fields for each attribute

_id: identifier of the item

property: the name of the attribute

dataModel: the data model this attribute is present in

repoName: the subject this data model belongs to

description: the description of the attribute

typeNGSI: whether it is a Property, GeoProperty, or Relationship

modelTags: inherited from the data model tags

license: link to the license for the data model

schemaVersion: version of the data model

type: data type

model: when available the reference model for the attribute

units: when available the recommended units for the attribute

format: the format of the attribute, e.g. date, time, date-time, or URI
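Since the export is an array of JSON objects with the fields above, it can be filtered with a few lines of standard Python. A sketch, using a small inline sample instead of the real downloaded file (the records below are made up for illustration):

```python
import json

# Tiny inline stand-in for the exported attributes database.
sample_export = """[
  {"property": "precipitation", "dataModel": "WeatherForecast",
   "repoName": "dataModel.Weather", "typeNGSI": "Property",
   "type": "number", "units": "l/m2"},
  {"property": "refDevice", "dataModel": "WeatherForecast",
   "repoName": "dataModel.Weather", "typeNGSI": "Relationship",
   "type": "string"}
]"""

attributes = json.loads(sample_export)

# Example query: all Relationship attributes of a given data model.
relationships = [a["property"] for a in attributes
                 if a["dataModel"] == "WeatherForecast"
                 and a["typeNGSI"] == "Relationship"]
print(relationships)  # ['refDevice']
```

With the real export, replace `sample_export` with the contents of the downloaded file.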

Help to early contributors

If you have approached the Smart Data Models Program (SDM) for the first time and want to become a contributor, there are some technical concepts that you need to know about the elements compiled at SDM.

Once you have checked this presentation, you may want to review the contribution manual.


Using Smart Data Models has its rewards

Smart Data Models are openly licensed. This means that nobody asks you for anything to use them. Indeed, it is possible to modify the models and share them.

Anyhow, if you are interested in getting some dissemination because you are using them, check the new menu entry (I am already using it!) with a form in which you can be listed as an adopter and get, in return, dissemination on LinkedIn, Twitter, and the Smart Data Models blog.

More flexible live support

We are going to replace the live support session on Mondays with a more flexible calendar service where you can book a 30-minute slot to get support on your data model design or use.

It is always available in the upper part of the page.

(Screenshot: an almost empty calendar)

New list of incubated projects

When creating a new data model, it is good to collaborate. In the Smart Data Models Program you can announce that you are in the process of creating a new data model by pressing the green button on the front page.

It will take you to a form where you can announce the main data and include a public email (mandatory) so you can be contacted. After a manual review, SDM will grant your GitHub user (mandatory) access to work on the incubated repository.

Once approved, your project will be published in the official list of data models. You can access this list by clicking the link just below the green button.

Besides this, the creation of a new data model is now simpler than ever. You only need a key-values payload of your new data model, and most of the job is done by the service available at Menu -> Draft a data model -> Generate your schema out of your payload.
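Conceptually, generating a schema from a key-values payload means inferring a JSON type for each field. The sketch below is an illustration of that idea only, not the service's actual algorithm, and the sample payload is invented:

```python
import json

def infer_schema(payload):
    """Infer a draft JSON schema from a flat key-values payload (sketch)."""
    # Exact-type lookup, so bool maps to "boolean" rather than "integer".
    type_map = {str: "string", bool: "boolean", int: "integer",
                float: "number", list: "array", dict: "object"}
    properties = {key: {"type": type_map.get(type(value), "string")}
                  for key, value in payload.items()}
    return {"$schema": "http://json-schema.org/schema#",
            "type": "object",
            "properties": properties}

payload = {"name": "sensor-01", "temperature": 21.5, "active": True}
print(json.dumps(infer_schema(payload), indent=2))
```

A real schema generator would also recurse into nested objects and arrays and attach descriptions; this only covers the flat case.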

NOTE: Apologies to the contributors who sent their last projects (a deactivated plugin prevented their projects from being published publicly; now it is fixed).

Survey. What ontologies are you using, if any?

    Smart Data Models Program is willing to ease the use of existing and adopted ontologies.

    Q1: Do you use semantic ontologies for your data?

    If you do not know, it is quite likely that the answer is no.

    Q2: Do you use them with NGSI-LD?

    Q3: What ontologies are you using (or planning to use)?

    Q4: Are you a member of the Smart Data Models organizations?

    Member of FIWARE Foundation
    Member of TMForum
    Member of IUDX
    Member of OASC
    No membership