One-Click Workspaces for Power BI CI/CD: Is It Possible?

At Plainsight, we know how to automate repetitive tasks. In this blog, we’ll explain how we automated the steps to deploy Power BI reports through code. It covers a scenario where a ‘report/product’ is deployed and updated across multiple workspaces in multiple tenants, which means we cannot rely on the standard Power BI deployment pipelines, so another solution was required. The goal was an (almost) one-click deployment with all connections preconfigured, making deployments from development to production seamless. This setup reduces human error, saves time, and empowers you to scale analytics efforts or launch new initiatives efficiently.

In this post, we’ll explore how to achieve this with the help of Azure DevOps, the principles of CI/CD, and tools like PowerShell cmdlets and the Power BI APIs, which come together to make hassle-free Power BI workspaces a reality. We’ll cover the essential steps to automate the creation of workspaces, deploy and configure the datasets (nowadays called semantic models), and link the reports to them. This is not a deep dive into the code, but a structured overview of the steps you need to take. When implementing it, you will need to adapt the information in this blog into a solution that best suits your needs. Nevertheless, its value may lie in pointing you in a new direction.

The Power of Power BI and PowerShell Automation

Let’s dive into the process of creating an automated CI/CD setup for Power BI, using PowerShell, service principals, and APIs to streamline deployment.

Step 1: Setting Up a Service Principal

To start, you’ll need a service principal, which is used to authenticate with the Power BI service. Make sure to assign it the necessary permissions in the Power BI Admin Portal, including enabling API access.

  • Power BI Admin Configuration: Navigate to the Power BI Admin Portal and ensure that API access permissions are enabled. This step is crucial for automating workspace management and deployment through scripts, as most of our scripts connect to this API.

Be sure to enable Read/Write for the XMLA endpoint, as otherwise you won’t be able to publish the dataset. The XMLA endpoint allows you to make changes to your semantic model through code: in short, you can manage, load, update, and recalculate the tables of your semantic model over this endpoint. This is necessary for the deployment of our semantic models.

There are also changes required in the Azure Portal. Within Microsoft Entra ID, it’s important to grant ‘API Permissions’ to the App Registration. I suggest including just the necessary ones to get the job done:

  • Fabric: Workspace.ReadWrite.All

  • Fabric: Dataset.ReadWrite.All

  • Fabric: Report.ReadWrite.All

  • Fabric: Capacity.ReadWrite.All

When this is all set up, we can move on to creating the automation magic!
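First, a quick building block that all the later steps rely on: authenticating to the Power BI service as the service principal. Here’s a minimal sketch using the MicrosoftPowerBIMgmt module; $appId, $tenantId and $secret are hypothetical variables holding your app registration’s client ID, tenant ID and client secret:

```powershell
# One-time setup on the build agent (or your machine)
Install-Module MicrosoftPowerBIMgmt -Scope CurrentUser

# Build a credential object from the app registration's client id and secret
# ($appId, $tenantId and $secret are hypothetical pipeline variables)
$securePassword = ConvertTo-SecureString $secret -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($appId, $securePassword)

# Sign in to the Power BI service as the service principal
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $credential -Tenant $tenantId
```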

Step 2: Checking Existing Workspaces and Creating New Ones

Before creating a workspace, confirm whether it already exists: retrieve the list of workspaces accessible to your service principal. If the desired workspace isn’t there yet, use a Power BI REST API POST call to create it, then assign the necessary access rights to specific users or security groups, which can be done with the API calls below.
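A minimal sketch of that flow, assuming you are already connected as the service principal (see Step 1) and that $workspaceName and $securityGroupObjectId are hypothetical variables:

```powershell
# Check whether the workspace already exists for this service principal
$workspace = Get-PowerBIWorkspace -Name $workspaceName

if (-not $workspace) {
    # Not found: create it with a POST against the Power BI REST API
    $body = @{ name = $workspaceName } | ConvertTo-Json
    $workspace = Invoke-PowerBIRestMethod -Url "groups?workspaceV2=True" `
                     -Method Post -Body $body | ConvertFrom-Json
}

# Add a security group as Admin on the workspace
$userBody = @{
    identifier           = $securityGroupObjectId   # Entra ID object id of the group
    principalType        = "Group"
    groupUserAccessRight = "Admin"
} | ConvertTo-Json
Invoke-PowerBIRestMethod -Url "groups/$($workspace.Id)/users" -Method Post -Body $userBody
```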

With this code, you add the specified group as a member of the workspace with Admin rights. TIP: when testing, it can be useful to grant yourself those rights as well; otherwise you won’t be able to see the newly created workspace.

Step 3: Setting Up the Power BI Gateway

If a Power BI gateway is already configured, skip to the next section. Otherwise, set up the gateway on a VM/server/… using PowerShell. Once it’s running, create the required data sources, along with the credentials to authenticate against them. The service principal can then bind data sources to datasets, making data access seamless across environments (see the sketch below).
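For the binding itself, the Power BI REST API offers a BindToGateway call. A minimal sketch, assuming you are still connected as the service principal and that $workspaceId, $datasetId and $gatewayId are hypothetical variables resolved in earlier steps:

```powershell
# Bind the dataset to the gateway so its data sources resolve automatically
$body = @{ gatewayObjectId = $gatewayId } | ConvertTo-Json

Invoke-PowerBIRestMethod `
    -Url "groups/$workspaceId/datasets/$datasetId/Default.BindToGateway" `
    -Method Post -Body $body
```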

More information and useful pieces of script to assist with this topic can be found in Microsoft’s documentation for the on-premises data gateway and its DataGateway PowerShell module.



Step 4: Deploying Datasets with Tabular Editor CLI

For dataset deployment, the Tabular Editor CLI is a powerful choice, particularly if you’re using Power BI Premium, Premium Per User, or a Fabric capacity: only in those cases is it possible to use the workspace’s XMLA endpoint to deploy your data model. Depending on your solution, you can overwrite the parameters of the semantic model in your .BIM file, or you can pass the correct ones with the command as the ‘connection’. The .BIM file contains the logic of the dataset.

The code below is what I use to deploy the dataset to our Power BI workspace.
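A sketch of that command, assuming Tabular Editor 2’s CLI (TabularEditor.exe) is available on the build agent; $workspaceName, $datasetName, $appId, $tenantId and $secret are illustrative pipeline variables:

```powershell
# Connection string for the workspace's XMLA endpoint, authenticating
# as the service principal (app:<client id>@<tenant id>)
$connection = "Provider=MSOLAP;Data Source=powerbi://api.powerbi.com/v1.0/myorg/$workspaceName;" +
              "User ID=app:$appId@$tenantId;Password=$secret"

# Deploy Model.bim to the workspace:
#   -D  deploy to the given destination (connection string + database name)
#   -O  overwrite an existing semantic model
#   -C  deploy connections, -P deploy partitions
#   -V  emit Azure DevOps (VSTS) logging commands (see the TIP below)
& .\TabularEditor.exe "Model.bim" -D $connection $datasetName -O -C -P -V
```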

TIP: When deploying with the Tabular Editor CLI in a DevOps pipeline, the -V parameter is pretty useful. Without it, your pipeline will error when deploying new tables or measures that are still ‘Unprocessed’ at the moment of deployment.


Step 5: Linking Reports to Datasets

To streamline report linking, leverage Power BI’s newer .pbip file format, which allows you to adjust the connection file without needing to manipulate .pbix files directly, as you had to in the past. This new approach simplifies connecting reports to datasets without the risk of duplicating reports.

  • Report Linking: Use the .pbip file format to adjust connections and link reports to the correct datasets, ensuring seamless updates without duplicating reports.

TIP: Do some API calls to find out the correct ID of the semantic model you will be connecting to, and assign it in the .connections file. When uploading this file, the report will be replaced/updated instead of being created as a ‘new report’ linked to another semantic model. See the sketch below.

TIP: With the PowerShell cmdlet New-PowerBIReport, you can deploy the corrected report for the right environment to the workspace.
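A minimal sketch combining both TIPs. The connection-file path and property names vary between .pbip versions, so the ones below are illustrative; match them to your own project layout. $workspaceId and $datasetName are hypothetical variables, and step 3 assumes your build produces a .pbix to upload:

```powershell
# 1. Look up the id of the semantic model the report should point to
$dataset = Get-PowerBIDataset -WorkspaceId $workspaceId |
           Where-Object { $_.Name -eq $datasetName }

# 2. Rewrite the report's connection file to reference that semantic model
#    (file path and property names are illustrative for the .pbip layout)
$connPath = ".\MyReport.Report\definition.pbir"
$conn = Get-Content $connPath -Raw | ConvertFrom-Json
$conn.datasetReference.byConnection.pbiModelDatabaseName = $dataset.Id
$conn | ConvertTo-Json -Depth 10 | Set-Content $connPath

# 3. Publish the report; CreateOrOverwrite updates the existing report in place
New-PowerBIReport -Path ".\MyReport.pbix" -Name "MyReport" `
    -WorkspaceId $workspaceId -ConflictAction CreateOrOverwrite
```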

Step 6: Publishing for End-Users

There’s still one thing that, at this moment in time, cannot be automated: there’s no cmdlet or API call to define or update a Power BI app. Power BI deployment pipelines might offer an option here, but I haven’t invested further time in them, because my tailor-made solution had to deploy to other tenants, which ruled those pipelines out. You can treat this manual step as end-user acceptance: build into the process a verification that the published reports have been checked and approved before the app is updated.

Wrap up

Now, with an automated CI/CD process in place, you can quickly provision workspaces, pre-configure data sources, and link reports with minimal (almost no) manual interaction once the setup is done. This simplifies deployment across environments, enabling rapid iteration and minimizing errors.

TIP: One last tip, though it may be a personal preference: when doing the API calls, I search by name instead of keeping track of IDs, because a workspace can be deleted and recreated with the same name. If you hold on to IDs, you have to fix that parameter and spend time maintaining variables in libraries. If you use names, your code adapts and looks up the right ID for the subsequent steps.

Neil Van den Broeck

Neil is a data enthusiast who likes to automate repetitive tasks, both in his day-to-day life as a data engineer and nonprofessionally. Python is his go-to programming language, and over the years, he has gained a lot of experience in CI/CD, DevOps, and SQL.

During his leisure time, you can find him capturing videos, acting in theatre, or riding his motorcycle.

Curious to know what Plainsight could mean for you?
