Jihwan Kim

Documenting DAX Measure Dependencies in a Power BI Semantic Model Using Fabric Notebook

In this post, I'll share what I learned about documenting the dependencies of DAX measures in a Power BI semantic model using a Fabric notebook.


From Chris Webb's BI Blog post, "Documenting Dependencies Between DAX Calculations", I learned how to write a query that returns information about measure dependencies in a Power BI semantic model.

select * from $system.discover_calc_dependency

Recently, with the introduction of DAX query view in Power BI Desktop, I learned that I can run the above query directly in DAX query view to see this information, as shown below.



After seeing this result, I wanted to materialize the output of this query as an actual table in the source (such as a Fabric Lakehouse) and then import it into Power BI to create a visualization documenting the measure dependencies.


I could get a visualization from AppSource and add it to Power BI Desktop to create upstream/downstream visuals. Alternatively, I could transform this table into a more Power-BI-developer-friendly structure that focuses on the dependency paths of measures, and visualize it in a matrix or table visual.
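As a rough sketch of that reshaping idea (using a small hand-made sample instead of real DMV output, and assuming the DMV's standard column names such as OBJECT_TYPE, OBJECT, REFERENCED_OBJECT_TYPE and REFERENCED_OBJECT), the measure-to-measure dependency pairs could be extracted with pandas like this:

```python
import pandas as pd

# Hand-made sample mimicking a few rows of $System.DISCOVER_CALC_DEPENDENCY
rows = pd.DataFrame({
    "OBJECT_TYPE": ["MEASURE", "MEASURE", "MEASURE"],
    "OBJECT": ["Sales Amount", "Sales YTD", "Sales YTD"],
    "REFERENCED_OBJECT_TYPE": ["COLUMN", "MEASURE", "COLUMN"],
    "REFERENCED_OBJECT": ["Amount", "Sales Amount", "Date"],
})

# Keep only measure-to-measure dependencies: these rows are the edges
# of the upstream/downstream graph to visualize.
edges = rows.loc[
    (rows["OBJECT_TYPE"] == "MEASURE")
    & (rows["REFERENCED_OBJECT_TYPE"] == "MEASURE"),
    ["OBJECT", "REFERENCED_OBJECT"],
].rename(columns={"OBJECT": "Measure", "REFERENCED_OBJECT": "Depends On"})

print(edges)
```

Here only the "Sales YTD" → "Sales Amount" row survives the filter, which is exactly the kind of pair a matrix visual of measure dependencies would show.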


Nevertheless, the first step was to create a table from this query and store it in the Fabric Lakehouse.


After discovering some intriguing capabilities, such as writing DAX queries in a Fabric Notebook, I was able to write some simple code in the notebook, as demonstrated below. This allowed me to create a dataframe and ingest it into the Lakehouse.

Since I am still relatively new to writing Python, the code provided below might not be the most optimized. I welcome any suggestions for improvements or more efficient versions.


Here are the steps to create a measures_dependencies table in Lakehouse.


In the same workspace where the semantic model is located, create a new Lakehouse.


Open a new notebook.


Rename the notebook.


Write the code as demonstrated below, and run it.


# Import the libraries needed to query the semantic model
from sempy import fabric
import pandas as pd

# Run the DMV query against the semantic model
result = fabric.evaluate_dax(
    workspace="incremental_refresh",
    dataset="incremental_refresh_sales_return",
    dax_string="""
    SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY
    """,
)

# Convert the result to a pandas DataFrame, then to a Spark DataFrame
df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table in the Lakehouse
table_name = "measures_dependencies"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)


Once the code has run successfully, click the refresh button to make sure the newly created table is visible.


I can also check within the Lakehouse that the table is now ready to be imported into Power BI Desktop.


Creating the actual table in this way is the key first step: it makes importing into Power BI Desktop straightforward, and from there I can start visualizing the measure dependencies in a Power BI report.
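To turn the direct dependencies into full dependency paths for a matrix or table visual, one possible approach (a minimal sketch with hypothetical sample data, not real model output) is a small recursive walk over the measure-to-measure pairs:

```python
# Hypothetical measure-to-measure edges: measure -> measures it references
edges = {
    "Profit %": ["Profit", "Sales Amount"],
    "Profit": ["Sales Amount", "Total Cost"],
}

def upstream(measure, seen=None):
    """Return every measure that `measure` depends on, directly or indirectly."""
    seen = set() if seen is None else seen
    for dep in edges.get(measure, []):
        if dep not in seen:
            seen.add(dep)
            upstream(dep, seen)
    return seen

print(sorted(upstream("Profit %")))  # → ['Profit', 'Sales Amount', 'Total Cost']
```

Flattening the result of this walk into rows (one row per measure/upstream-measure pair) gives a table that drops straight into a Power BI matrix visual.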


To summarize,

in this post I described how I learned to document measure dependencies in a Power BI semantic model. I created a table in the Lakehouse containing the measure dependencies, enabling seamless import into Power BI Desktop.

This streamlines the process of documenting measure dependencies.


I hope this helps you have fun documenting your Power BI semantic models.
