In this post, I'd like to share how I've been learning to use Fabric notebooks to write DAX queries, enabling me to extract the most critical metrics from multiple semantic models and consolidate them into a single, aggregated semantic model.
Situation
Managing complex Power BI ecosystems can be daunting, especially when dealing with multiple semantic models and reports. In my scenario, I manage 15 Power BI semantic models and 20 reports, each containing a diverse range of metrics and KPIs. Here's what makes this even more challenging:
- Each semantic model contains more than seven dimension tables describing distinct business processes, unified by common dimensions such as Date and Branch.
- Although each model has over 20 metrics, only 2-3 per model are critical for top management.
- Retrieving insights from these metrics across all models is inefficient, requiring significant effort to harmonize data for executive reporting.
Improvement Points
- Streamlining Metrics: With so many metrics spread across semantic models, a mechanism is needed to aggregate the key ones into a concise, actionable format.
- Unified Results: Consolidating top-priority metrics scattered across 15 different semantic models is challenging with current Power BI solutions. I believe this might be addressed by the Metric Sets feature in Fabric, which I plan to explore in more depth.
- Efficient Data Preparation: Writing DAX queries directly in Power BI works, but it doesn't scale well when preparing aggregated tables for executive reporting.
How Fabric Notebook Helps
Fabric notebooks bridge the gap between raw DAX queries and structured, reproducible data outputs. Here's how it helps:
- Scalability: Fabric notebooks allow you to write code to interact with Power BI semantic models, execute DAX queries, and transform data at scale.
- Automation: Once written, notebook scripts can be reused or adjusted across multiple semantic models, reducing manual repetition.
- Integration with Lakehouse: Outputs from Fabric notebooks can be loaded into Lakehouse as Delta tables, providing a foundation for further aggregation and visualization.
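As a small illustration of the first point, the sempy library (Semantic Link, preinstalled in Fabric notebooks) can enumerate a workspace's semantic models before any DAX is written. InsightsWorkspace is the example workspace name I use throughout this post, and the exact metadata columns returned may vary:

from sempy import fabric

# List the semantic models (datasets) available in the workspace
datasets = fabric.list_datasets(workspace="InsightsWorkspace")

# Inspect the metadata to decide which models to query with evaluate_dax
print(datasets.head())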
To-Be Situation
After implementing this approach, the workflow and outcomes are significantly enhanced:
1. Aggregated Metric Tables: Metrics for each semantic model are now stored in a new fact table in Fabric Lakehouse, with granularity limited to Date and Branch.
2. Unified Reporting: By combining outputs from all semantic models into a single table, I can provide clear, actionable insights for top management.
3. Simplified Querying: Instead of dealing with numerous models, reporting teams can query the unified table with ease.
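To make that last point concrete, here is a hedged sketch of what such a query could look like from a notebook. Master_Metrics is the illustrative table name I build in the Scalability section below, and the branch and metric filters are placeholder values:

# Pull two top-management metrics for one branch from the unified table
# (Master_Metrics is built later in this post; filter values are placeholders)
top_metrics = spark.sql("""
    SELECT DateID, BranchID, MetricName, MetricValue
    FROM Master_Metrics
    WHERE BranchID = 42
      AND MetricIndex IN (1001, 1002)
    ORDER BY DateID DESC
""")
top_metrics.show()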
Solution
Here’s a simplified implementation of my process for handling a single semantic model. It demonstrates how to write a reusable DAX query and load the results into Fabric Lakehouse. Keep in mind that this is a work in progress; as I continue to learn, there may be better ways to structure code in Fabric notebooks.
Sample DAX Query Execution in Fabric Notebook
from sempy import fabric
import pandas as pd

# Execute the DAX query against the semantic model
result = fabric.evaluate_dax(
    workspace="InsightsWorkspace",
    dataset="SemanticModel_V1",
    dax_string="""
    EVALUATE
    VAR MetricOne =
        SUMMARIZECOLUMNS (
            'Branch'[BranchID],
            'Date'[DateID],
            FILTER ( 'Date', 'Date'[Date] >= TODAY() - 365 ),
            "MetricIndex", 1001,
            "MetricName", "Metric One",
            "MetricValue", FORMAT ( [MetricOneMeasure], "#0.00" )
        )
    VAR MetricTwo =
        SUMMARIZECOLUMNS (
            'Branch'[BranchID],
            'Date'[DateID],
            FILTER ( 'Date', 'Date'[Date] >= TODAY() - 365 ),
            "MetricIndex", 1002,
            "MetricName", "Metric Two",
            "MetricValue", FORMAT ( [MetricTwoMeasure], "#0.00" )
        )
    RETURN
        UNION ( MetricOne, MetricTwo )
    """
)

# Convert the result to a pandas DataFrame
result_df = pd.DataFrame(result)

# Load the DataFrame into Spark and save it to the Lakehouse as a Delta table
result_spark_df = spark.createDataFrame(result_df)
result_spark_df.write.format("delta").mode("overwrite").saveAsTable("SemanticModel_Metrics")
With 15 semantic models to manage, writing repetitive code for each in Fabric notebooks is not practical. To streamline the process, I can use a parameterized approach, enabling a single, reusable script to dynamically handle all 15 semantic models based on input parameters. This is something I’ll dive into in my next blog post! 😅
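As a rough preview of that next post, here is a minimal sketch of what the parameterized approach could look like. Everything here is illustrative: the model names, metric indexes, and measure references would come from my actual configuration, and the grain matches the query above.

import pandas as pd
from sempy import fabric

# Hypothetical configuration: for each semantic model, the metrics to extract
# as (MetricIndex, MetricName, measure reference). All names are illustrative.
MODEL_CONFIG = {
    "SemanticModel_V1": [
        (1001, "Metric One", "[MetricOneMeasure]"),
        (1002, "Metric Two", "[MetricTwoMeasure]"),
    ],
    "SemanticModel_V2": [
        (2001, "Metric Three", "[MetricThreeMeasure]"),
    ],
}

# Template for one metric block; the BranchID/DateID grain matches the query above
BLOCK = """
SUMMARIZECOLUMNS (
    'Branch'[BranchID],
    'Date'[DateID],
    FILTER ( 'Date', 'Date'[Date] >= TODAY() - 365 ),
    "MetricIndex", {index},
    "MetricName", "{name}",
    "MetricValue", FORMAT ( {measure}, "#0.00" )
)"""

for model_name, metrics in MODEL_CONFIG.items():
    blocks = [BLOCK.format(index=i, name=n, measure=m) for i, n, m in metrics]
    # UNION the blocks when a model has more than one metric
    body = blocks[0] if len(blocks) == 1 else "UNION (" + ", ".join(blocks) + " )"
    result = fabric.evaluate_dax(
        workspace="InsightsWorkspace",
        dataset=model_name,
        dax_string="EVALUATE " + body,
    )
    # One Delta table per model, e.g. "SemanticModel_V1_Metrics"
    spark.createDataFrame(pd.DataFrame(result)) \
        .write.format("delta").mode("overwrite") \
        .saveAsTable(f"{model_name}_Metrics")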
An important consideration is that some metric results, particularly percentages, are not additive. For example, a daily percentage cannot simply be multiplied by 7 or averaged over 7 days to derive a weekly result. To calculate weekly or monthly metrics accurately, the query needs to group on a WeekID or MonthID column in the Date dimension instead of DateID, so the measures recompute at that grain.
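As a hedged illustration of that adjustment, the same extraction pattern can be re-run at a weekly grain by grouping on a WeekID column, assuming the Date dimension has one; the measure then recomputes at the weekly level instead of aggregating daily values:

from sempy import fabric

# Weekly version of the extraction: grouping on 'Date'[WeekID] (assumed to
# exist in the Date dimension) lets [MetricOneMeasure] recompute at the
# weekly grain instead of aggregating non-additive daily values
weekly_result = fabric.evaluate_dax(
    workspace="InsightsWorkspace",
    dataset="SemanticModel_V1",
    dax_string="""
    EVALUATE
    SUMMARIZECOLUMNS (
        'Branch'[BranchID],
        'Date'[WeekID],
        FILTER ( 'Date', 'Date'[Date] >= TODAY() - 365 ),
        "MetricIndex", 1001,
        "MetricName", "Metric One",
        "MetricValue", FORMAT ( [MetricOneMeasure], "#0.00" )
    )
    """
)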
Scalability
For 15 semantic models, this process is repeated with slight modifications to the DAX queries, each generating its corresponding metrics table. The final step is to union all tables into a master table containing:
- BranchID
- DateID
- MetricIndex
- MetricName
- MetricValue
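Here is a minimal sketch of that union step in PySpark, assuming each per-model table follows the <model>_Metrics naming from the code above, and that the columns in each table already use the five plain names listed (sempy returns DAX-style names like 'Branch[BranchID]', so a rename step may be needed when writing each table):

# Union every per-model metrics table into one master table.
# Model names are illustrative; extend the list to cover all 15 models.
model_names = ["SemanticModel_V1", "SemanticModel_V2", "SemanticModel_V3"]

master_df = None
for name in model_names:
    df = spark.read.table(f"{name}_Metrics").select(
        "BranchID", "DateID", "MetricIndex", "MetricName", "MetricValue"
    )
    master_df = df if master_df is None else master_df.unionByName(df)

# Persist the combined result as the single table executives report from
master_df.write.format("delta").mode("overwrite").saveAsTable("Master_Metrics")

Using unionByName aligns the schemas by column name rather than position, so the order of the source tables doesn't matter.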
Summary
Learning to write DAX queries in Fabric notebooks revolutionized how I manage and present metrics across multiple Power BI semantic models. By leveraging Fabric notebooks:
1. I simplified workflows for extracting top metrics.
2. I created scalable and reusable processes for managing large datasets.
3. I delivered actionable insights in an executive-friendly format.
The power of Fabric lies in its ability to unify data preparation and analysis, enabling efficient management of complex Power BI ecosystems.
What makes this approach particularly impactful is its dynamic nature—every important metric in the aggregated fact table in this semantic model directly depends on the original metrics in the source semantic models. This ensures that any changes made to the original formulas are seamlessly reflected in the aggregated results, maintaining consistency and reducing maintenance overhead.
I hope you have as much fun writing DAX queries in Fabric notebooks as I do.