Python dominates the data science and analytics ecosystem, making it the natural choice for teams building automation around Microsoft Fabric analytics platforms. The Azure Fabric Management SDK for Python enables AI agents to programmatically manage Fabric capacity resources from Python code, providing seamless integration with data pipeline orchestration, cost optimization scripts, and analytics platform automation written in Python.
What This Skill Does
The azure-mgmt-fabric-py skill provides Python interfaces for managing Microsoft Fabric capacity resources through the Azure Management API. It enables capacity creation with specific SKU configurations, scaling operations between F-series sizes, suspend and resume operations for cost management, administrator assignment, name availability checks, and comprehensive resource lifecycle management from Python applications.
This skill allows agents to create capacities ranging from F2 (2 capacity units) to F2048 (2048 capacity units), manage long-running operations with proper polling and completion detection, list capacities across resource groups and subscriptions for inventory management, query available SKUs before provisioning, and update capacity properties including administrators and tags.
The SDK provides Pythonic interfaces with proper type hints, integrates with Azure Identity for multiple authentication methods including service principals and managed identity, and handles long-running operations through polling patterns with result() methods that block until completion.
Getting Started
Install the Azure Fabric management library and Azure Identity:
pip install azure-mgmt-fabric
pip install azure-identity
Set up environment variables for Azure resource targeting:
export AZURE_SUBSCRIPTION_ID="your-subscription-id"
export AZURE_RESOURCE_GROUP="your-resource-group"
Initialize the client with DefaultAzureCredential:
from azure.identity import DefaultAzureCredential
from azure.mgmt.fabric import FabricMgmtClient
import os
credential = DefaultAzureCredential()
client = FabricMgmtClient(
    credential=credential,
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"]
)
DefaultAzureCredential automatically discovers credentials from environment variables, managed identity, Azure CLI, or Visual Studio Code, selecting the first available method.
Key Features
Capacity Creation: Provision Fabric capacities with configurable SKUs, Azure regions, administrator assignments, and tags. Creation is a long-running operation handled through the SDK's begin_create_or_update method, which returns a poller for completion tracking.
SKU Flexibility: Support for all F-series SKUs from F2 through F2048, with capacity units corresponding to the numerical value. List available SKUs before provisioning to ensure your desired configuration is supported in your target region.
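The query-then-pick pattern above can be sketched as a small helper. This is a minimal sketch, not the SDK's own API surface: the pick_sku_for_region function and the list_skus operation name are assumptions that mirror the capacities REST API's SKU listing, so verify the exact operation name against the SDK reference before relying on it.

```python
def smallest_sufficient_sku(sku_names, required_units):
    """Pick the smallest F-series SKU offering at least required_units capacity units."""
    # F-SKU names encode their capacity units numerically, e.g. "F64" -> 64 units
    for units in sorted(int(name[1:]) for name in sku_names if name.startswith("F")):
        if units >= required_units:
            return f"F{units}"
    return None  # no listed SKU is large enough

def pick_sku_for_region(client, required_units):
    # client is a FabricMgmtClient; list_skus is assumed here to mirror the
    # REST API's SKU listing -- check the SDK reference for the exact name.
    available = [sku.name for sku in client.fabric_capacities.list_skus()]
    return smallest_sufficient_sku(available, required_units)
```

Selecting the smallest sufficient SKU keeps automated provisioning from over-allocating capacity units by default.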
Scaling Operations: Update capacity SKUs to scale up or down based on workload demands. Scaling operations may involve brief downtime, and the SDK properly handles the asynchronous nature through long-running operation pollers.
Cost Optimization: Suspend capacities during idle periods to stop compute billing while retaining resource configuration and associated workspaces. Resume operations restore full functionality. These state transitions are long-running operations managed through the SDK's polling patterns.
Administrator Management: Specify capacity administrators using email addresses or user principal names. Update administrator lists through capacity update operations without recreating resources.
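A read-merge-update flow keeps existing administrators intact when adding new ones. The add_capacity_admins helper below is hypothetical; it passes a plain dict for the PATCH body (Azure management SDKs generally accept dicts in place of model objects), and the body shape is an assumption mirroring the capacities API, so verify it against the SDK reference.

```python
def merged_admins(current, additions):
    """Merge administrator UPN lists, deduplicated case-insensitively, order preserved."""
    seen, result = set(), []
    for upn in list(current) + list(additions):
        if upn.lower() not in seen:  # UPNs are case-insensitive identifiers
            seen.add(upn.lower())
            result.append(upn)
    return result

def add_capacity_admins(client, resource_group, capacity_name, new_admins):
    # Fetch the current members so the update extends rather than replaces them
    current = client.fabric_capacities.get(
        resource_group_name=resource_group, capacity_name=capacity_name
    )
    members = merged_admins(current.properties.administration.members, new_admins)
    # Plain dict in place of a model object; shape assumed to mirror the PATCH body
    return client.fabric_capacities.begin_update(
        resource_group_name=resource_group,
        capacity_name=capacity_name,
        properties={"properties": {"administration": {"members": members}}},
    ).result()
```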
Query Operations: List capacities in resource groups or across entire subscriptions, get detailed capacity configurations including SKU and state information, and check name availability before attempting creation to avoid provisioning failures.
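For inventory reporting, the subscription-wide listing can feed a simple state summary. A minimal sketch; the capacity_inventory helper is hypothetical, and the exact state string values (e.g. "Active", "Paused") are assumptions to verify against the API:

```python
from collections import Counter

def capacity_inventory(client):
    """Count Fabric capacities across the subscription, grouped by state."""
    states = Counter()
    # list_by_subscription yields one object per capacity in the subscription
    for cap in client.fabric_capacities.list_by_subscription():
        states[cap.properties.state] += 1
    return dict(states)
```

A summary like {"Active": 12, "Paused": 3} makes idle development capacities easy to spot for cost reviews.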
Usage Examples
Creating a development Fabric capacity with Python:
from azure.mgmt.fabric import FabricMgmtClient
from azure.mgmt.fabric.models import (
    FabricCapacity,
    FabricCapacityProperties,
    FabricCapacityAdministration,
    CapacitySku
)
from azure.identity import DefaultAzureCredential
import os
credential = DefaultAzureCredential()
client = FabricMgmtClient(
    credential=credential,
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"]
)
resource_group = os.environ["AZURE_RESOURCE_GROUP"]
capacity = client.fabric_capacities.begin_create_or_update(
    resource_group_name=resource_group,
    capacity_name="dev-analytics-capacity",
    resource=FabricCapacity(
        location="eastus",
        sku=CapacitySku(
            name="F4",
            tier="Fabric"
        ),
        properties=FabricCapacityProperties(
            administration=FabricCapacityAdministration(
                members=["admin@contoso.com"]
            )
        ),
        tags={
            "environment": "development",
            "cost-center": "analytics"
        }
    )
).result()
print(f"Capacity provisioned: {capacity.name}")
print(f"State: {capacity.properties.state}")
Implementing auto-scaling based on workload patterns:
from azure.mgmt.fabric.models import FabricCapacityUpdate, CapacitySku

# Get the current capacity to confirm its size before scaling
capacity = client.fabric_capacities.get(
    resource_group_name=resource_group,
    capacity_name="prod-analytics-capacity"
)
print(f"Current SKU: {capacity.sku.name}")

# Scale up during business hours
updated = client.fabric_capacities.begin_update(
    resource_group_name=resource_group,
    capacity_name="prod-analytics-capacity",
    properties=FabricCapacityUpdate(
        sku=CapacitySku(
            name="F64",  # Scale from F32 to F64
            tier="Fabric"
        )
    )
).result()
print(f"Scaled to: {updated.sku.name}")
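A schedule-driven version of the same pattern can be factored into a pure SKU chooser plus an idempotent apply step. The sku_for_hour and apply_schedule helpers, the hour boundaries, and the SKU names are all illustrative assumptions, and the update again uses a plain dict in place of a model object:

```python
def sku_for_hour(hour, business_sku="F64", off_hours_sku="F32"):
    """Pick a target SKU by time of day: larger during business hours (08:00-18:00)."""
    return business_sku if 8 <= hour < 18 else off_hours_sku

def apply_schedule(client, resource_group, capacity_name, hour):
    target = sku_for_hour(hour)
    current = client.fabric_capacities.get(
        resource_group_name=resource_group, capacity_name=capacity_name
    )
    if current.sku.name == target:
        return current  # already the right size; skip the long-running update
    return client.fabric_capacities.begin_update(
        resource_group_name=resource_group,
        capacity_name=capacity_name,
        properties={"sku": {"name": target, "tier": "Fabric"}},
    ).result()
```

Comparing the current SKU before updating keeps a cron-style scheduler from issuing redundant long-running operations every tick.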
Automating cost optimization through scheduled suspension:
# Suspend capacity during off-hours
print("Suspending capacity...")
client.fabric_capacities.begin_suspend(
    resource_group_name=resource_group,
    capacity_name="dev-analytics-capacity"
).result()
print("Capacity suspended - compute billing stopped")

# Later: resume when needed
print("Resuming capacity...")
client.fabric_capacities.begin_resume(
    resource_group_name=resource_group,
    capacity_name="dev-analytics-capacity"
).result()
print("Capacity active and ready")
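Suspend and resume are only valid from certain states, so schedulers benefit from checking the current state first. A minimal sketch: the ensure_active helper is hypothetical, and the state strings "Active" and "Paused" are assumptions to verify against the API:

```python
def ensure_active(client, resource_group, capacity_name):
    """Resume a capacity only if it is actually paused; no-op if already active."""
    cap = client.fabric_capacities.get(
        resource_group_name=resource_group, capacity_name=capacity_name
    )
    state = cap.properties.state
    if state == "Active":
        return cap  # nothing to do
    if state == "Paused":
        client.fabric_capacities.begin_resume(
            resource_group_name=resource_group, capacity_name=capacity_name
        ).result()
        # Re-fetch to return the post-resume view of the resource
        return client.fabric_capacities.get(
            resource_group_name=resource_group, capacity_name=capacity_name
        )
    raise RuntimeError(f"Capacity is in a transitional state: {state}")
```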
Best Practices
Use DefaultAzureCredential for authentication to support multiple authentication methods without code changes. In development, it uses Azure CLI credentials. In production on Azure, it automatically uses managed identity. For CI/CD pipelines, it discovers service principal credentials from environment variables.
Always call .result() on long-running operation pollers when subsequent code depends on operation completion. Capacity provisioning, scaling, and state transitions must complete before workspaces can be created or workloads assigned to the capacity.
Suspend unused capacities, especially development and testing environments. Fabric capacities bill for compute even when idle, and suspending stops these charges while preserving all configurations, workspaces, and assignments.
Start with smaller SKUs (F2, F4, F8) for development and scale based on actual utilization metrics. Each capacity unit costs money, and premature scaling increases costs unnecessarily. Monitor utilization through Azure Monitor before scaling decisions.
Check name availability before creating capacities in automated workflows. Since capacity names must be globally unique across all Azure subscriptions, conflicts are common. Implement retry logic with alternative names or timestamp-based suffixes.
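The suffix-fallback strategy above can be sketched as follows. The name_candidates and first_available_name helpers are hypothetical, and check_name_availability is assumed to follow the ARM pattern used by other management SDKs (a location plus a name/type request body), so verify the exact signature in the API reference:

```python
import time

def name_candidates(base, attempts=3):
    """Yield the base name, then timestamp-suffixed fallbacks for conflict retries."""
    base = base.lower()  # capacity names must be lowercase
    yield base
    stamp = int(time.time())
    for i in range(attempts - 1):
        yield f"{base}{stamp + i}"

def first_available_name(client, location, base):
    for name in name_candidates(base):
        # Request body shape assumed to mirror the ARM check-name pattern
        result = client.fabric_capacities.check_name_availability(
            location,
            {"name": name, "type": "Microsoft.Fabric/capacities"},
        )
        if result.name_available:
            return name
    raise RuntimeError(f"No available name found for base '{base}'")
```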
Use tags consistently for cost allocation and resource organization. Include environment type, cost center, team ownership, and automation source. Tags enable filtering in Azure Cost Management and programmatic resource discovery.
Handle exceptions appropriately for Azure SDK errors. Operations can fail due to invalid configurations, quota limits, or transient network issues. Check exception details for specific error codes and messages to determine appropriate retry logic.
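One way to act on those error codes is exponential backoff with jitter around the create call. The create_with_retry helper and the retryable status-code list are illustrative choices, not the SDK's prescribed policy; HttpResponseError is azure-core's base class for service errors, and its import is deferred so the backoff helper runs without the SDK installed:

```python
import random
import time

def backoff_delays(attempts, base=2.0, cap=60.0):
    """Exponential backoff delays with full jitter: uniform(0, min(cap, base*2^n))."""
    return [random.uniform(0, min(cap, base * (2 ** n))) for n in range(attempts)]

def create_with_retry(client, resource_group, name, resource, attempts=4):
    from azure.core.exceptions import HttpResponseError  # deferred import
    for delay in backoff_delays(attempts):
        try:
            return client.fabric_capacities.begin_create_or_update(
                resource_group_name=resource_group,
                capacity_name=name,
                resource=resource,
            ).result()
        except HttpResponseError as exc:
            # Retry only transient failures: throttling and server-side errors
            if exc.status_code not in (429, 500, 502, 503, 504):
                raise
            time.sleep(delay)
    raise RuntimeError(f"Giving up on capacity '{name}' after {attempts} attempts")
```

Non-retryable failures such as invalid configurations (400) or authorization problems (403) surface immediately instead of wasting retry budget.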
When to Use This Skill
Use this skill when building Python-based analytics platform automation. Teams orchestrating data pipelines with Python should manage Fabric infrastructure from the same language as their pipeline code, enabling unified deployment scripts and infrastructure-as-code implementations.
It's ideal for data engineering teams who live in Python and want to automate capacity provisioning without learning .NET or switching languages. The Python SDK provides complete management capabilities matching the .NET SDK functionality.
The skill is valuable for implementing auto-scaling policies based on workload metrics. Monitor capacity utilization through Azure Monitor, and programmatically scale up during peak analysis periods or scale down during off-hours to optimize costs while maintaining performance.
Use it for multi-tenant analytics platforms where each tenant receives dedicated capacity. Programmatically create capacity resources per tenant with appropriate SKUs based on their service tier, ensuring isolation and predictable billing.
When Not to Use This Skill
Don't use this skill for managing Fabric workspaces, lakehouses, warehouses, notebooks, or data items. Those are data plane operations handled through the Microsoft Fabric REST API or data plane SDKs after capacity provisioning.
If you're deploying capacity infrastructure using ARM templates, Bicep, or Terraform, you don't need this SDK for resource creation. Infrastructure-as-code tools provide declarative resource definitions that are often simpler for pure infrastructure provisioning.
Avoid it for querying analytics data, running Spark jobs, or executing notebooks. This SDK manages the capacity compute resources, not the analytics workloads running on that capacity. Use Fabric APIs and tools for data operations.
Don't use it for one-off capacity creation; the Azure portal is simpler for a single manual operation. The SDK adds value for automated, repeatable provisioning workflows and for dynamic management driven by utilization.
Related Skills
- azure-mgmt-fabric-dotnet - Manage Fabric capacities using .NET
- azure-monitor-query-py - Query capacity utilization metrics for scaling decisions
- azure-servicebus-py - Integrate event-driven capacity management
Source
This skill is provided by Microsoft as part of the Azure SDK for Python. Learn more at the PyPI package page, explore the API reference documentation, and view source code on GitHub.