In a previous blog post (linked below), I discussed how to store tables generated with INFO.xxx DAX functions in the Fabric Lakehouse.
Storing a result of INFO.XXX DAX Query in Fabric Lakehouse: Using It as a Source Table (jihwankimcscp.wixsite.com)
In this post, I would like to share how I learned to understand and write DAX queries with INFO.xxx functions to create tables containing information about the semantic model. The tables created here will eventually be imported into Power BI to generate a "Documenting Power BI Semantic Models" report.
To start, I determine what information to gather from the semantic model. The list below outlines the details I want to display in the "Documenting Power BI Semantic Models" report, along with the corresponding INFO.xxx DAX functions to use (a quick example of running one of these functions on its own follows the list).
INFO.TABLES( )
semantic model name
table name
table description
table is hidden
INFO.COLUMNS( )
column name
column description
column is hidden
column is unique
column is key
INFO.REFRESHPOLICIES( )
table name
table source expression
table rolling window period
table incremental period
INFO.MEASURES( )
display table name
display folder name
measure name
measure description
measure expression
measure is hidden
INFO.RELATIONSHIPS( )
table from
column from
from cardinality
table to
column to
to cardinality
cross-filter direction (single or bi-directional)
is active
INFO.CALCDEPENDENCY( )
object type
object name
object expression
referenced object type
referenced object name
referenced object expression
INFO.EXPRESSIONS( )
parameter name
parameter description
parameter expression
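Before shaping any of this into documentation tables, each of these functions can be evaluated on its own in the DAX query view to inspect the raw rows and column names it returns. For example, this one-line query lists every measure in the model together with all the columns the function exposes.

EVALUATE
    INFO.MEASURES()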
Additional INFO.xxx DAX functions can provide valuable information about the semantic model; more of them are described in the link below.
Below, I share how I create two tables that provide information on tables, columns, and refresh policies.
You can create more tables with additional information using similar techniques.
DAX query for tables and refresh information
EVALUATE
// Table-level properties from INFO.TABLES(); the semantic model name is added as a constant column
VAR _tableinfo =
    SELECTCOLUMNS(
        INFO.TABLES(),
        "semantic_model_name", "sales_return",
        "table_id", [ID],
        "table_name", [Name],
        "table_description", [Description],
        "table_is_hidden", [IsHidden]
    )
// Incremental-refresh settings from INFO.REFRESHPOLICIES(), keyed by table_id
VAR _refreshinfo =
    SELECTCOLUMNS(
        INFO.REFRESHPOLICIES(),
        "table_id", [TableID],
        "table_source_expression", [SourceExpression],
        "table_rolling_window_period", [RollingWindowPeriods],
        "table_incremental_period", [IncrementalPeriods]
    )
RETURN
    // Left join keeps every table; refresh-policy columns stay blank where no policy exists
    NATURALLEFTOUTERJOIN( _tableinfo, _refreshinfo )
The above process results in a table containing the following columns.
semantic_model_name
table_id
table_name
table_description
table_is_hidden
table_source_expression
table_rolling_window_period
table_incremental_period
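Since INFO.TABLES() also returns hidden rows (auto-generated date tables, for example), you may want to filter the result before returning it. Below is a minimal sketch of the idea on a trimmed-down version of the query.

EVALUATE
VAR _tableinfo =
    SELECTCOLUMNS(
        INFO.TABLES(),
        "table_name", [Name],
        "table_is_hidden", [IsHidden]
    )
RETURN
    // Keep only visible tables in the documentation output
    FILTER( _tableinfo, [table_is_hidden] = FALSE() )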
DAX query for tables and columns information
EVALUATE
// Table-level properties, reused as the left side of the join
VAR _tableinfo =
    SELECTCOLUMNS(
        INFO.TABLES(),
        "semantic_model_name", "sales_return",
        "table_id", [ID],
        "table_name", [Name],
        "table_description", [Description],
        "table_is_hidden", [IsHidden]
    )
// Column-level properties from INFO.COLUMNS(), keyed by table_id
VAR _columninfo =
    SELECTCOLUMNS(
        INFO.COLUMNS(),
        "table_id", [TableID],
        "column_id", [ID],
        "column_name", [ExplicitName],
        "column_description", [Description],
        "column_is_hidden", [IsHidden],
        "column_is_unique", [IsUnique],
        "column_is_key", [IsKey],
        "column_modified_time", [ModifiedTime],
        "column_structure_modified_time", [StructureModifiedTime]
    )
RETURN
    // Left join keeps every table and attaches its columns
    NATURALLEFTOUTERJOIN( _tableinfo, _columninfo )
The above process results in a table containing the following columns.
semantic_model_name
table_id
table_name
table_description
table_is_hidden
column_id
column_name
column_description
column_is_hidden
column_is_unique
column_is_key
column_modified_time
column_structure_modified_time
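The same join pattern extends to the measure details listed at the beginning of this post. The sketch below joins INFO.MEASURES() to INFO.TABLES() to resolve the display table name; column names such as [DisplayFolder] and [Expression] are my assumptions based on what INFO.MEASURES() returns in my model, so verify them by evaluating the function on its own first.

EVALUATE
VAR _tableinfo =
    SELECTCOLUMNS(
        INFO.TABLES(),
        "table_id", [ID],
        "display_table_name", [Name]
    )
VAR _measureinfo =
    SELECTCOLUMNS(
        INFO.MEASURES(),
        "table_id", [TableID],
        "measure_name", [Name],
        "measure_description", [Description],
        "measure_expression", [Expression],
        "measure_is_hidden", [IsHidden],
        "display_folder_name", [DisplayFolder]
    )
RETURN
    // Keep every measure and attach the name of its home table
    NATURALLEFTOUTERJOIN( _measureinfo, _tableinfo )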
To store the results of these DAX queries in the Fabric Lakehouse, refer to the previous blog post mentioned at the beginning. Alternatively, you can paste the DAX query between the triple quotes in the PySpark code in a Fabric Notebook, as shown below.
# Tables and columns information
# Import the necessary libraries (semantic link is available in Fabric notebooks)
from sempy import fabric
import pandas as pd

# Run the DAX query against the semantic model; paste the query between the triple quotes
result = fabric.evaluate_dax(
    workspace="incremental_refresh",
    dataset="incremental_refresh_sales_return",
    dax_string="""
    """
)

# Convert the result to a Spark DataFrame (spark is predefined in Fabric notebooks)
df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table
table_name = "testing_documentation"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)
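Because the write uses mode("overwrite"), each run of the notebook replaces the contents of the Delta table, so re-running it after the semantic model changes refreshes the documentation tables in the Lakehouse.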
To summarize, I learned how powerful the INFO.xxx DAX functions are for documenting Power BI semantic models. By leveraging these functions, I can gather essential information about tables, columns, measures, calculation dependencies, relationships, parameters, and refresh policies directly from the semantic model. Storing the results of these DAX queries in the Fabric Lakehouse proves to be a highly effective method: it not only facilitates the creation of comprehensive documentation but, with a scheduled refresh configured, also keeps the documentation up to date with any changes made to the model.
Creating documentation may not always be the most thrilling task, but I hope this blog post makes it enjoyable and impactful, especially when documenting Power BI semantic models.