Generate a Physical Data Model Using the API
This article describes how to generate the PDM using the API. If you cannot or do not want to use the API, you can work in the LDM Modeler instead. The LDM Modeler scans your database and saves the generated PDM under the corresponding data source. You can also choose to generate the LDM at the same time. The quality of the LDM, and therefore the usefulness of your analyses, depends on the state of your database.
To generate the PDM using the API, do the following:
Scan the Relational Model of Your Database
To scan the relational model in your database, submit a POST request to `/api/v1/actions/dataSources/<data-source-id>/scan`, where `<data-source-id>` is the ID of the data source that corresponds to the database you want to scan.
By default, all the tables, views, and columns in your database are scanned, and a JSON file with a declarative definition of the PDM is generated. To generate a more accurate PDM, especially when your database contains a large number of tables/views, you can do the following:
- Change the tables, views, and columns so that the database contains only the entities that have to be included in the PDM.
- Fine-tune the API request to narrow down the scope of the database objects to scan:
  - Define what to scan: tables, views, or both.
  - Define a prefix to scan only the tables/views whose names start with this prefix (see the variation after the example below).
The JSON file with the PDM definition also contains a top-level section called `warnings`. For each skipped table/view/column, this section provides an explanation of why it was skipped.
You can also prepare the tables/views for use in complex analytical scenarios.
Example: Scanning the `demo-ds` data source that represents the pre-installed PostgreSQL database with the sample data prepared in the GoodData.CN Community Edition image

The PDM definition will be saved to the `pdm.json` file.
```bash
curl $ENDPOINT/api/v1/actions/dataSources/demo-ds/scan \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X POST \
  -d '{"separator": "__", "scanTables": false, "scanViews": true}' \
  | jq . > pdm.json
```
```powershell
Invoke-RestMethod -Method Post -Uri "$ENDPOINT/api/v1/actions/dataSources/demo-ds/scan" `
  -ContentType 'application/json' `
  -Headers @{
    'Accept' = 'application/json'
    'Authorization' = 'Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz'
  } `
  -Body '{"separator": "__", "scanTables": false, "scanViews": true}' | ConvertTo-Json -Depth 20 > pdm.json
```
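To narrow the scan down further, for example to views whose names start with `v_`, you can add a prefix to the request body. The following is a sketch only; the `viewPrefix` (and the analogous `tablePrefix`) field name is an assumption, so verify it against the API reference of your GoodData.CN version.

```bash
# Sketch: scan only the views whose names start with "v_".
# The "viewPrefix" field name is an assumption -- verify it against the API
# reference of your GoodData.CN version.
curl $ENDPOINT/api/v1/actions/dataSources/demo-ds/scan \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X POST \
  -d '{"separator": "__", "scanTables": false, "scanViews": true, "viewPrefix": "v_"}' \
  | jq . > pdm.json
```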
Store the PDM under the Corresponding Data Source
To store the generated PDM under the corresponding data source, submit a PUT request to `/api/v1/layout/dataSources/<data-source-id>/physicalModel`, where `<data-source-id>` is the ID of the data source that corresponds to the scanned database. In the API request, specify the JSON file with the PDM definition.
Example: Storing the PDM generated from the `demo-ds` data source
```bash
curl $ENDPOINT/api/v1/layout/dataSources/demo-ds/physicalModel \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -H "Content-Type: application/json" \
  -X PUT -d @pdm.json
```
```powershell
Invoke-RestMethod -Method Put -Uri "$ENDPOINT/api/v1/layout/dataSources/demo-ds/physicalModel" `
  -ContentType 'application/json' `
  -Headers @{ 'Authorization' = 'Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz' } `
  -InFile pdm.json
```
Once you have stored the PDM, build the LDM.
- If your database is prepared for building analytics and does not contain complex analytical scenarios, generate the LDM automatically from the stored PDM, either using the API (see the sketch after this list) or in the LDM Modeler.
- Otherwise, create the LDM manually in the LDM Modeler.
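As a sketch of the automatic path, the GoodData.CN API exposes a `generateLogicalModel` action on the data source; the request body options and the `demo` workspace ID below are assumptions, so check the API reference for your version.

```bash
# Sketch: generate a declarative LDM from the stored PDM, then store it in a
# workspace. The request body and the "demo" workspace ID are assumptions --
# verify them against your GoodData.CN API reference.
curl $ENDPOINT/api/v1/actions/dataSources/demo-ds/generateLogicalModel \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X POST \
  -d '{"separator": "__"}' \
  | jq . > ldm.json

curl $ENDPOINT/api/v1/layout/workspaces/demo/logicalModel \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X PUT -d @ldm.json
```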
Advanced Use Cases
Multiple Datasets Mapped to the Same Table
This is typically needed when one table represents multiple logical entities, and each entity should be represented by a separate dataset in the LDM.
While the LDM Modeler supports mapping multiple datasets to the same table, publishing an LDM with such a mapping fails.
To avoid this issue, create multiple views on top of the table and map each dataset to a separate view (a sketch follows the example below).
For example, you have two tables, `users` and `tickets`.

- The `users` table contains ticket creators and assignees.
- The `tickets` table contains the `assignee_id` and `creator_id` columns.

To avoid mapping multiple datasets to the `users` table, do the following:

- In the database, create two views on top of the `users` table: `v_users_assignees` and `v_users_creators`.
- In the LDM, create three datasets: `assignees`, `creators`, and `tickets`.
- Map the `tickets` dataset to the `tickets` table.
- Map the `assignees` dataset to the `v_users_assignees` view.
- Map the `creators` dataset to the `v_users_creators` view.
- Create a relationship from the `assignees` dataset to the `tickets` dataset using the `assignee_id` column as a primary key in the `assignees` dataset.
- Create a relationship from the `creators` dataset to the `tickets` dataset using the `creator_id` column as a primary key in the `creators` dataset.
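For reference, the views from the first step could be as simple as the following sketch, which assumes a PostgreSQL database reachable through a hypothetical `$DATABASE_URL` connection string; each view exposes the same `users` data under a separate database object.

```bash
# Sketch: create two views over the users table so that each LDM dataset maps
# to a separate database object. Assumes PostgreSQL and psql; $DATABASE_URL is
# a placeholder for your own connection details.
psql "$DATABASE_URL" <<'SQL'
CREATE VIEW v_users_assignees AS SELECT * FROM users;
CREATE VIEW v_users_creators  AS SELECT * FROM users;
SQL
```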
No Single-column Primary Key in Tables
While the LDM Modeler supports setting multi-column primary keys in datasets, publishing an LDM with multi-column primary keys fails.
To avoid this issue, create one-attribute primary keys in your database (see the sketch after this list):
- Concatenate the table columns that comprise the primary key in the table.
- Calculate a hash value for the concatenated columns.
- Use the hash value as a one-attribute primary key.
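A minimal sketch of this approach, assuming a PostgreSQL database and a hypothetical `order_items` table keyed by `(order_id, line_no)`; the view exposes an `order_item_id` hash column that a dataset can then use as its one-attribute primary key.

```bash
# Sketch: derive a single-column key by hashing the concatenated key columns.
# The order_items table and its (order_id, line_no) key are illustrative only;
# $DATABASE_URL is a placeholder for your own connection details.
psql "$DATABASE_URL" <<'SQL'
CREATE VIEW v_order_items AS
SELECT
  md5(order_id::text || '|' || line_no::text) AS order_item_id, -- one-attribute key
  t.*
FROM order_items t;
SQL
```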
Multiple PDMs Generated from the Same Data Source
If you need to prepare different PDMs from the same data source to serve various business needs, we recommend that you create multiple data sources from the same database and generate a PDM for each data source separately.
To narrow down the scope of the database to scan, you can create views with different prefixes and, for each data source, scan only the views with a specific prefix. You can also define different credentials/privileges or a different schema for each data source.
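For example, assuming two data sources `demo-ds-sales` and `demo-ds-finance` registered against the same database, and views prefixed `sales_` and `fin_`, the two scans could look like the following sketch (the `viewPrefix` field name is again an assumption to verify against your version's API reference).

```bash
# Sketch: scan the same database through two data sources, each limited to
# views with a different prefix. The data source IDs, the prefixes, and the
# "viewPrefix" field are assumptions for illustration.
curl $ENDPOINT/api/v1/actions/dataSources/demo-ds-sales/scan \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X POST -d '{"separator": "__", "scanTables": false, "scanViews": true, "viewPrefix": "sales_"}' \
  | jq . > pdm_sales.json

curl $ENDPOINT/api/v1/actions/dataSources/demo-ds-finance/scan \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -H "Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmFkbWluMTIz" \
  -X POST -d '{"separator": "__", "scanTables": false, "scanViews": true, "viewPrefix": "fin_"}' \
  | jq . > pdm_finance.json
```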