Jihwan Kim

Storing the Result of an INFO.XXX DAX Query in Fabric Lakehouse: Using It as a Source Table

In this post, I'd like to share how I learned to extract information from a semantic model, create a table from that data, and store it in Fabric Lakehouse. The stored table then serves as a dynamic source for a Power BI report, enabling automatic generation of documentation for the semantic model.


One effective method of obtaining information from a semantic model is to use INFO DAX functions within the DAX Query View in Power BI Desktop.



As outlined in the link above, there are numerous valuable INFO DAX functions. I will demonstrate how I used some of these functions to create tables in Fabric Lakehouse and then used those tables as the source of my Power BI report.


In the demonstration, I use the following INFO DAX functions:


INFO.TABLES()

INFO.COLUMNS()

INFO.REFRESHPOLICIES()

INFO.RELATIONSHIPS()

INFO.MEASURES()


Additionally, I use the following DMV query to obtain information about measure dependencies and create a table in Fabric Lakehouse (refer to the previous blog post: Link).


SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY




When you write INFO.TABLES() in the DAX Query View in Power BI Desktop, the result appears as follows:
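
For reference, every DAX query starts with EVALUATE, so the complete query behind that result is simply:

EVALUATE
	INFO.TABLES()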


The results of other queries are shown below.



Since each query returns a table, I combined several of these DAX queries into summarized tables in DAX Query View, as shown below.


(1) Information on Tables and Columns with Refresh Policy Details in the Semantic Model.

EVALUATE
	VAR _table =
	NATURALLEFTOUTERJOIN(
		SELECTCOLUMNS(
			INFO.TABLES(),
			"TableID", [ID],
			"TableName", [Name]
		),
		SELECTCOLUMNS(
			INFO.REFRESHPOLICIES(),
			[TableID],
			[RollingWindowGranularity],
			[RollingWindowPeriods],
			[IncrementalPeriods]
		)
	)
	VAR _column = SELECTCOLUMNS(
		INFO.COLUMNS(),
		[TableID],
		"ColumnName", [ExplicitName],
		"ColumnID", [ID],
		"ColumnDescription", [Description],
		[IsHidden],
		[IsUnique],
		[IsKey],
		[ModifiedTime]
	)
	RETURN
		FILTER(
			NATURALLEFTOUTERJOIN(
				_table,
				_column
			),
			NOT (CONTAINSSTRING(
				[ColumnName],
				"rownumber-"
			))
		)

(2) Information on Relationships Between FromTableColumn and ToTableColumn in the Semantic Model.

EVALUATE
	ADDCOLUMNS(
		SELECTCOLUMNS(
			INFO.RELATIONSHIPS(),
			[IsActive],
			[FromTableID],
			[FromColumnID],
			[FromCardinality],
			[ToTableID],
			[ToColumnID],
			[ToCardinality]
		),
		"FromTableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([FromTableID])
			),
			[TableName]
		),
		"FromColumnName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.COLUMNS(),
					[TableID],
					"ColumnName", [ExplicitName],
					"ColumnID", [ID]
				),
				[ColumnID] = EARLIER([FromColumnID])
			),
			[ColumnName]
		),
		"ToTableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([ToTableID])
			),
			[TableName]
		),
		"ToColumnName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.COLUMNS(),
					[TableID],
					"ColumnName", [ExplicitName],
					"ColumnID", [ID]
				),
				[ColumnID] = EARLIER([ToColumnID])
			),
			[ColumnName]
		)
	)

(3) Information on Measures in the Semantic Model.

EVALUATE
	ADDCOLUMNS(
		SELECTCOLUMNS(
			INFO.MEASURES(),
			[TableID],
			"MeasureName", [Name],
			[Description],
			"DAX", [Expression],
			[DetailRowsDefinitionID],
			[ModifiedTime]
		),
		"TableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([TableID])
			),
			[TableName]
		)
	)


Together with the query below,

SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY

all of these queries can be integrated into a Fabric Notebook to generate Delta tables and store them in Fabric Lakehouse.


I author PySpark code as shown below, which creates four Delta tables in the Lakehouse.

# Tables and columns information
# Import the necessary libraries
from sempy import fabric
import pandas as pd

result = fabric.evaluate_dax(
    workspace = "incremental_refresh", dataset = "incremental_refresh_sales_return",
    dax_string=
    """
EVALUATE
	VAR _table =
	NATURALLEFTOUTERJOIN(
		SELECTCOLUMNS(
			INFO.TABLES(),
			"TableID", [ID],
			"TableName", [Name]
		),
		SELECTCOLUMNS(
			INFO.REFRESHPOLICIES(),
			[TableID],
			[RollingWindowGranularity],
			[RollingWindowPeriods],
			[IncrementalPeriods]
		)
	)
	VAR _column = SELECTCOLUMNS(
		INFO.COLUMNS(),
		[TableID],
		"ColumnName", [ExplicitName],
		"ColumnID", [ID],
		"ColumnDescription", [Description],
		[IsHidden],
		[IsUnique],
		[IsKey],
		[ModifiedTime]
	)
	RETURN
		FILTER(
			NATURALLEFTOUTERJOIN(
				_table,
				_column
			),
			NOT (CONTAINSSTRING(
				[ColumnName],
				"rownumber-"
			))
		)
    """
    )

# evaluate_dax returns a pandas-compatible DataFrame; convert it to a Spark DataFrame
df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table
table_name = "tables_and_columns"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)

# Relationships information
from sempy import fabric
import pandas as pd

result = fabric.evaluate_dax(
    workspace = "incremental_refresh", dataset = "incremental_refresh_sales_return",
    dax_string=
    """
EVALUATE
	ADDCOLUMNS(
		SELECTCOLUMNS(
			INFO.RELATIONSHIPS(),
			[IsActive],
			[FromTableID],
			[FromColumnID],
			[FromCardinality],
			[ToTableID],
			[ToColumnID],
			[ToCardinality]
		),
		"FromTableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([FromTableID])
			),
			[TableName]
		),
		"FromColumnName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.COLUMNS(),
					[TableID],
					"ColumnName", [ExplicitName],
					"ColumnID", [ID]
				),
				[ColumnID] = EARLIER([FromColumnID])
			),
			[ColumnName]
		),
		"ToTableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([ToTableID])
			),
			[TableName]
		),
		"ToColumnName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.COLUMNS(),
					[TableID],
					"ColumnName", [ExplicitName],
					"ColumnID", [ID]
				),
				[ColumnID] = EARLIER([ToColumnID])
			),
			[ColumnName]
		)
	)
    """
    )

df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table
table_name = "relationship"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)

# Measures information
from sempy import fabric
import pandas as pd

result = fabric.evaluate_dax(
    workspace = "incremental_refresh", dataset = "incremental_refresh_sales_return",
    dax_string=
    """
    EVALUATE
	ADDCOLUMNS(
		SELECTCOLUMNS(
			INFO.MEASURES(),
			"MeasureID", [ID],
			[TableID],
			"MeasureName", [Name],
			[Description],
			"DAX", [Expression],
			[DetailRowsDefinitionID],
			[ModifiedTime]
		),
		"TableName", MAXX(
			FILTER(
				SELECTCOLUMNS(
					INFO.TABLES(),
					"TableID", [ID],
					"TableName", [Name]
				),
				[TableID] = EARLIER([TableID])
			),
			[TableName]
		)
	)
    """
    )

df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table
table_name = "measures_expression"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)

# Measure dependencies information
from sempy import fabric
import pandas as pd

result = fabric.evaluate_dax(
    workspace = "incremental_refresh", dataset = "incremental_refresh_sales_return",
    dax_string=
    """
    SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY
    """
    )

df = pd.DataFrame(result)
spark_df = spark.createDataFrame(df)

# Load the data into a Delta table
table_name = "measures_dependencies"  # Replace with your desired table name
spark_df.write.format("delta").mode("overwrite").saveAsTable(table_name)


I import these tables from Fabric Lakehouse, perform the necessary transformations, construct a data model, and develop a Power BI report that documents the semantic model.

Clearly, the visualizations above are not the final iteration of documentation; however, they represent the initial steps toward establishing comprehensive documentation for the semantic model.


For my next learning step, I want to go beyond using just four or five INFO DAX functions: I plan to dig deeper into the full set of INFO DAX functions and use them to build more detailed Power BI documentation. By learning and applying more of these functions, I can document my data model better and explain its details more clearly.



In summary, I described how to use several INFO DAX functions to build better documentation for Power BI semantic models: querying model metadata, storing it in Fabric Lakehouse, and reporting on it in Power BI. By learning and applying more INFO DAX functions, I can understand my data models better and make the documentation more detailed, which makes the documentation process more useful and helps me get more value from my semantic models.



I hope this makes documenting Power BI Semantic Models more enjoyable.







