ExpressRoute Direct Overview
ExpressRoute Direct provides dedicated physical connectivity directly to Microsoft edge locations without service provider intermediaries. This enables provisioning multiple ExpressRoute circuits on-demand with flexible bandwidth allocation.
Key technical advantages include dedicated fiber-optic connections, sub-rate circuit provisioning, dual-port redundancy, and direct BGP peering with Microsoft's backbone infrastructure.
Supports 10 Gbps and 100 Gbps port speeds with the ability to create multiple circuits per port pair, each with independent bandwidth requirements and routing policies.
Physical Architecture
Direct fiber connectivity terminates at Microsoft Edge Exchange locations using single-mode fiber optics. Each ExpressRoute Direct deployment requires dual physical connections for redundancy.
The architecture separates physical layer connectivity from logical circuit provisioning. Physical ports provide raw bandwidth while circuits handle traffic segmentation and routing policies.
Cross-connect infrastructure links customer equipment to Microsoft edge routers through dedicated patch panels and fiber management systems.
Bandwidth Configuration
Port speeds of 10 Gbps support circuits from 50 Mbps to 10 Gbps. The 100 Gbps option supports circuits from 50 Mbps to 100 Gbps with higher circuit density per port pair.
Sub-rate circuits enable bandwidth optimization across multiple workloads. Circuit bandwidth can be modified post-deployment without physical infrastructure changes.
Each circuit operates independently with separate BGP sessions, ASN assignments, and routing policies.
Technical Requirements
The az account show command verifies current subscription context and user permissions. The az provider show command checks Microsoft.Network provider registration status required for ExpressRoute resources.
The az network express-route port list command validates ExpressRoute Direct quota availability in your subscription. Physical requirements include single-mode fiber infrastructure and compatible optics.
Network requirements include assigned public ASN, IP addressing plan for BGP peering, and routing policies for traffic engineering.
Planning Considerations
Capacity planning involves analyzing current and projected bandwidth requirements across all circuits. Consider burst capacity, growth patterns, and redundancy requirements.
Design considerations include BGP routing policies, traffic engineering, failover scenarios, and integration with existing network architecture.
Location selection depends on latency requirements, physical connectivity options, and Microsoft edge presence in target regions.
ExpressRoute Direct Resource Creation
The az network express-route port create command provisions the ExpressRoute Direct resource. The --name parameter sets the resource identifier, while --location specifies the Azure region for resource placement.
The --bandwidth parameter sets port speed (10 or 100) in Gbps and directly impacts monthly costs. The --encapsulation "Dot1Q" parameter configures standard 802.1Q VLAN tagging for circuit isolation.
The --peering-location parameter determines which Microsoft edge facility hosts your physical ports - this is where your fiber will terminate, such as "Seattle".
The az network express-route port show command retrieves port status. The --query parameter filters output to show adminState and operationalStatus for monitoring deployment progress.
Physical Cross-Connection Setup
The az network express-route port show command with --query parameter monitors physical layer status during installation. The query "links[*].{Link:name, State:operationalStatus, Power:rxLightLevel}" shows both primary and secondary link status.
The az network express-route port update command with --admin-state "Enabled" activates the ports after physical cross-connect completion. This administrative control prevents premature activation before fiber installation.
The rxLightLevel values indicate optical power reception measured in dBm. Healthy fiber connections typically show values between -10 and -1 dBm.
Circuit Creation on Direct Ports
The az network express-route create command provisions circuits on ExpressRoute Direct ports. The --name parameter sets the circuit identifier, while --peering-location must match the Direct port location.
The --bandwidth parameter sets circuit capacity in Mbps (2000 = 2 Gbps). The --provider "Microsoft" parameter indicates Direct port usage rather than service provider circuits.
The --express-route-port parameter links the circuit to specific Direct port resource using the full resource path. The az network express-route update command assigns VLAN IDs using --set expressRoutePort.links[0].vlanId=100.
The az network express-route show command verifies circuitProvisioningState and bandwidth allocation to confirm successful provisioning.
BGP Configuration & Basic Routing
The az network express-route peering create command establishes BGP sessions. The --peering-type "AzurePrivatePeering" parameter enables VNet connectivity, while --peer-asn sets your organization's BGP ASN (65001).
The --primary-peer-subnet and --secondary-peer-subnet parameters define /30 point-to-point networks for BGP neighbors. The --vlan-id must match circuit VLAN assignments.
The --shared-key parameter provides MD5 authentication for BGP session security. For Microsoft peering, --advertised-public-prefixes parameter lists your public IP ranges advertised to Microsoft's network.
Microsoft peering requires public IP addresses and supports Office 365 connectivity through --peering-type "MicrosoftPeering" parameter.
Advanced Multi-Circuit BGP Setup
The az network express-route peering update command applies route filters to existing BGP peerings. The --route-filter parameter references a route filter resource that controls which Microsoft service prefixes are advertised over Microsoft peering.
The az network route-filter create command builds service-specific filters using --rules parameter with JSON syntax. The "communities": ["12076:5010"] targets Exchange Online services for selective routing control.
The az network route-filter rule create command adds additional filtering rules to existing filters. The --communities parameter accepts Microsoft's predefined community values for specific services like SharePoint (12076:5020).
Route filters enable granular control over which Microsoft 365 services are accessible through each circuit, allowing traffic engineering and security policies at the Azure service level.
Virtual Network Gateway Integration
The az network vnet-gateway create command provisions ExpressRoute gateways. The --gateway-type "ExpressRoute" parameter specifies gateway function, while --sku determines performance characteristics including bandwidth limits and maximum connections.
The --public-ip-addresses parameter accepts multiple public IPs for active-active gateway configurations. The ErGw3AZ SKU provides zone redundancy, highest performance, and FastPath support for bypassing gateway processing.
The az network vpn-connection create command links gateways to circuits. The --express-route-circuit2 parameter references the circuit resource ID, while --routing-weight sets the connection's preference when multiple circuits terminate on the same gateway.
The connection with the higher routing weight is the preferred path; lower-weight connections carry traffic only when the preferred path is unavailable. The az network vpn-connection update command with --enable-fastpath true bypasses gateway data plane processing for improved performance.
Route Advertisement Management
The az network route-filter rule create command builds service-specific filters. The --access "Allow" parameter permits matching traffic, while --route-filter-rule-type "Community" enables BGP community-based filtering for Microsoft services.
The --communities parameter accepts Microsoft's predefined community values like "12076:5010" for Exchange Online. Multiple rules can target different services within the same filter for granular control.
The az network express-route peering update command applies route filters to Microsoft peering using --route-filter parameter. This controls which Azure service prefixes are advertised over the BGP session.
Route filtering enables security policies by limiting service access scope and provides traffic engineering capabilities by controlling which services are reachable through specific circuits.
High Availability Configuration
The az network vpn-connection create commands with different --routing-weight values establish failover preference. Weight 200 for the primary and 100 for the secondary makes the primary circuit the preferred path, with the secondary taking over automatically if the primary fails.
Zone-redundant gateways using ErGw3AZ SKU automatically distribute across availability zones without additional configuration. This provides protection against datacenter-level failures with automatic failover.
Multiple circuits across different Microsoft edge locations provide geographic redundancy. Each circuit operates independently with separate BGP sessions and can fail over without affecting other circuits.
Connection monitoring and health probes ensure rapid detection of circuit failures, enabling automatic traffic redirection to backup paths within seconds of failure detection.
Advanced Traffic Engineering
The az network route-filter rule create command with different --communities values enables service-specific traffic policies. Using "12076:5010" for Exchange Online and "12076:5020" for SharePoint allows independent routing control.
The --access parameter controls whether specific services are permitted or denied through each circuit. Combined with multiple circuits, this creates sophisticated traffic engineering scenarios.
Multiple route filters can be applied to different circuits, enabling scenarios where critical services use primary circuits while less critical services use backup circuits or internet breakout.
Dynamic route filter updates using az network route-filter rule commands allow real-time traffic engineering without circuit disruption or BGP session resets.
Security and Access Control
The az network private-endpoint create command establishes private connectivity to Azure PaaS services. The --group-id "blob" parameter specifies the storage service type, while --private-connection-resource-id references the target storage account.
The az network private-dns zone create command builds private DNS resolution for private endpoints. The zone name "privatelink.blob.core.windows.net" enables proper name resolution for private endpoint connections.
The az network nsg rule create command defines traffic filtering rules. The --source-address-prefixes parameter accepts CIDR blocks, while --destination-port-ranges controls allowed ports for granular security control.
The az network route-table route create command with --next-hop-type "VirtualNetworkGateway" forces traffic through ExpressRoute rather than internet paths, implementing forced tunneling for security compliance.
Monitoring and Diagnostics
The az network watcher connection-monitor create command establishes end-to-end connectivity testing. The --source-resource and --dest-resource parameters define test endpoints, while --dest-port specifies application ports for realistic testing.
The az monitor diagnostic-settings create command enables metrics collection. The --workspace parameter targets Log Analytics for centralized logging, while the metrics array configures which performance counters to collect.
The Network Watcher agent VM extension (deployable with the Set-AzVMExtension PowerShell cmdlet) provides in-guest monitoring. Network Performance Monitor requires agents on both source and destination endpoints for accurate bidirectional measurement.
KQL queries in Log Analytics use BGPRouteTable to analyze routing changes. The bin function aggregates data into time intervals, while render timechart creates visual representations of route stability and convergence events.
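The monitoring workflow above can be sketched end to end. The resource names, workspace IDs, and the PeeringRouteLog diagnostic category below are illustrative assumptions; check the categories actually exposed by your circuit before relying on them.

```shell
# Sketch only: names, IDs, and log categories are illustrative placeholders.

# End-to-end connectivity test between a probe VM and an on-premises endpoint
az network watcher connection-monitor create \
  --name "cm-er-onprem" \
  --location "eastus" \
  --source-resource "vm-probe-eastus" \
  --dest-address "10.1.0.10" \
  --dest-port 443

# Stream circuit metrics and BGP route logs to Log Analytics
az monitor diagnostic-settings create \
  --name "diag-er-circuit" \
  --resource "/subscriptions/.../er-circuit-production" \
  --workspace "law-network-monitoring" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'

# Run a KQL query against the collected data (workspace GUID is a placeholder)
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "AzureDiagnostics | where Category == 'PeeringRouteLog' | summarize count() by bin(TimeGenerated, 1h)"
```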
Performance Optimization
The az network vpn-connection update command with --enable-fastpath true bypasses gateway processing for supported traffic flows. FastPath requires UltraPerformance or ErGw3AZ gateway SKUs and eliminates gateway latency.
Windows netsh commands tune TCP stack behavior for high-throughput scenarios. The autotuninglevel=normal setting enables automatic receive window scaling; the older chimney=enabled TCP offload option is deprecated on current Windows versions.
Linux sysctl parameters configure kernel network buffers for optimal throughput. The net.core.rmem_max setting defines maximum receive buffer size, while tcp_rmem configures dynamic buffer scaling from minimum to maximum values.
Quality of Service policies using policy-map configurations allocate link capacity with priority queuing for critical traffic and bandwidth guarantees for different service classes, ensuring optimal application performance.
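As a concrete sketch of the Linux tuning the section describes (the buffer sizes are illustrative starting points, not benchmarked recommendations):

```shell
# Illustrative values only; tune per bandwidth-delay product of your path.

# Raise kernel receive/send buffer ceilings for high-bandwidth, high-latency flows
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.wmem_max=134217728

# min / default / max values for TCP dynamic buffer scaling
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"

# Windows equivalent (elevated prompt): enable receive window auto-tuning
# netsh int tcp set global autotuninglevel=normal
```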
Troubleshooting Methodology
The az network express-route port show command with --query parameter filters output to essential diagnostic information. The rxLightLevel field indicates optical power reception - values below -15 dBm suggest physical connectivity issues.
The az network express-route show command displays circuit provisioning state and BGP session status. The --query parameter can filter to specific fields like circuitProvisioningState and serviceProviderProvisioningState.
The az network watcher packet-capture create command enables traffic analysis with --filters parameter accepting JSON arrays specifying protocol, ports, and direction for targeted packet collection and analysis.
Network testing tools integration with Azure monitoring provides comprehensive diagnostics from physical layer through application performance, enabling systematic troubleshooting methodology.
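A layered diagnostic pass might look like the following sketch. The az network express-route list-route-tables subcommand (not covered above) dumps the routes learned on a peering; all resource names are placeholders.

```shell
# Layer 1: optical power and link state on the Direct port
az network express-route port show \
  --name "er-direct-seattle-01" \
  --query "links[*].{Link:name, State:operationalStatus, Power:rxLightLevel}"

# Layer 2/3: circuit and provider provisioning state
az network express-route show \
  --name "er-circuit-production" \
  --query "{circuit:circuitProvisioningState, provider:serviceProviderProvisioningState}"

# BGP: routes learned on the private peering, primary path
az network express-route list-route-tables \
  --name "er-circuit-production" \
  --peering-name "AzurePrivatePeering" \
  --path "primary"
```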
Automation and Infrastructure as Code
ARM template properties section defines ExpressRoute Direct configuration declaratively. The bandwidthInGbps property sets port speed, while peeringLocation determines the Microsoft edge facility for physical connectivity.
Bicep syntax simplifies template authoring with strong typing and IntelliSense support. The resource declaration uses Microsoft.Network/expressRoutePorts resource type with API version specifying feature compatibility and supported parameters.
Terraform azurerm_express_route_port resource enables multi-cloud deployments with consistent syntax. The link1 and link2 blocks configure dual physical connections with admin_enabled controlling port activation state.
PowerShell New-AzExpressRoutePort cmdlet uses parameter splatting for readable configuration. The @params hashtable technique improves script maintainability and enables parameter validation and reuse across deployments.
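Whichever template language is used, deployment typically flows through the same CLI entry point. A minimal sketch, assuming hypothetical template file and parameter names:

```shell
# Hypothetical file and parameter names; a sketch of declarative deployment.

# ARM/JSON template deployment
az deployment group create \
  --resource-group "rg-connectivity" \
  --template-file "expressroute-direct.json" \
  --parameters portName="er-direct-seattle-01" bandwidthInGbps=10

# Bicep files deploy through the same command; the CLI compiles them transparently
az deployment group create \
  --resource-group "rg-connectivity" \
  --template-file "expressroute-direct.bicep" \
  --parameters portName="er-direct-seattle-01"
```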
Migration and Transition Strategies
The az network express-route create command with --express-route-port parameter creates circuits on Direct ports for parallel deployment. Initial lower --routing-weight values minimize traffic impact during testing phases.
The az network vpn-connection update command enables gradual traffic migration by adjusting routing weights. Increasing weight from 50 to 150 shifts traffic preference without complete cutover, allowing validation at each step.
The az network vpn-connection delete command safely removes legacy connections after successful migration validation. The --no-wait parameter enables asynchronous deletion without blocking subsequent operations.
Migration validation using az network watcher connection-monitor commands confirms end-to-end connectivity before decommissioning legacy infrastructure, ensuring zero-downtime transitions with rollback capabilities.
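The phased approach can be sketched as three CLI steps; the connection names and weight values are illustrative:

```shell
# Phase 1: bring the Direct-port connection up as the less-preferred path
az network vpn-connection create \
  --name "conn-er-direct" \
  --vnet-gateway1 "ergw-prod-eastus" \
  --express-route-circuit2 "/subscriptions/.../er-circuit-direct" \
  --routing-weight 50

# Phase 2: after validation, raise its weight above the legacy connection
az network vpn-connection update \
  --name "conn-er-direct" \
  --set routingWeight=150

# Phase 3: remove the legacy connection once traffic is confirmed on the new path
az network vpn-connection delete \
  --name "conn-er-legacy" \
  --no-wait
```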
Azure ExpressRoute Direct
Premium Dedicated Connectivity
Core Technical Capabilities
- Direct Physical Connectivity: Dedicated fiber to Microsoft edge infrastructure
- Multiple Circuit Support: Provision multiple ExpressRoute circuits on single port pair
- Flexible Bandwidth: Sub-rate circuits from 50 Mbps to full port capacity
- Independent Routing: Separate BGP sessions per circuit
- 10 Gbps: Enterprise deployments; circuits from 50 Mbps to 10 Gbps
- 100 Gbps: Hyperscale scenarios; circuits from 50 Mbps to 100 Gbps
```mermaid
graph TB
    Customer[Customer Network] --> Fiber[Direct Fiber]
    Fiber --> EdgeRouter[Microsoft Edge Router]
    EdgeRouter --> Backbone[Microsoft Global Network]
    Backbone --> Azure[Azure Services]
    Backbone --> O365[Microsoft 365]
```
Architecture & Physical Connectivity
```mermaid
graph TB
    subgraph CustomerSite[Customer Site]
        CE[Customer Edge Router]
    end
    subgraph Colocation[Colocation Facility]
        Panel[Patch Panel]
        CE --> Fiber1[Primary Fiber]
        CE --> Fiber2[Secondary Fiber]
        Fiber1 --> Panel
        Fiber2 --> Panel
    end
    subgraph MicrosoftEdge[Microsoft Edge Location]
        MSEE1[Primary MSEE Router]
        MSEE2[Secondary MSEE Router]
        Panel --> MSEE1
        Panel --> MSEE2
    end
    subgraph MSFTBackbone[Microsoft Backbone]
        MSEE1 --> Backbone[Global Network]
        MSEE2 --> Backbone
    end
```
Physical Layer Requirements
- Fiber Type: Single-mode fiber optic cables
- Redundancy: Dual physical connections (primary/secondary)
- Optics: Compatible transceivers (10G SR/LR, 100G SR4/LR4)
- Cross-Connect: Direct patch to Microsoft edge equipment
Bandwidth & Port Options
| Port Speed | Min Circuit | Max Circuit | Max Circuits | Use Cases |
|---|---|---|---|---|
| 10 Gbps | 50 Mbps | 10 Gbps | Multiple | Enterprise, Multi-tenant |
| 100 Gbps | 50 Mbps | 100 Gbps | High density | Hyperscale, MSP |
Circuit Allocation Example (10 Gbps Port)
- Production Circuit: 4 Gbps - Critical workloads
- Development Circuit: 2 Gbps - Testing environments
- Backup Circuit: 1 Gbps - Data replication
- Management Circuit: 500 Mbps - Monitoring/admin
- Available Capacity: 2.5 Gbps - Future expansion
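A quick arithmetic check confirms the example allocation: the four circuits consume 7.5 Gbps of the 10 Gbps port pair, leaving 2.5 Gbps.

```shell
# Verify the example allocation against the 10 Gbps (10000 Mbps) port pair
port_mbps=10000
allocated=$((4000 + 2000 + 1000 + 500))   # production + development + backup + management
available=$((port_mbps - allocated))
echo "allocated=${allocated} Mbps, available=${available} Mbps"
# → allocated=7500 Mbps, available=2500 Mbps
```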
```mermaid
graph LR
    subgraph Port[10 Gbps Port Pair]
        Circuit1[Production - 4G]
        Circuit2[Development - 2G]
        Circuit3[Backup - 1G]
        Circuit4[Management - 500M]
        Available[Available - 2.5G]
    end
```
Prerequisites & Requirements
Technical Prerequisites
- Public ASN: Registered autonomous system number
- IP Addressing: /30 subnets for BGP peering
- Physical Connectivity: Single-mode fiber to edge location
- Equipment: Compatible router with BGP support
| Component | Requirement | Specifications |
|---|---|---|
| Fiber Optics | Single-mode | OS2 standard, LC connectors |
| Transceivers | Compatible optics | 10G: SR/LR, 100G: SR4/LR4 |
| BGP ASN | Public ASN | ARIN/RIPE/APNIC registered |
| IP Space | Public IPs | /30 per BGP session |
```shell
# Azure subscription requirements
az account show --query "user.name"
az provider show --namespace Microsoft.Network --query "registrationState"

# Verify ExpressRoute Direct quota
az network express-route port list --query "length(@)"
```
Planning & Design Considerations
```mermaid
graph TB
    subgraph Planning[Planning Phase]
        Capacity[Capacity Analysis]
        Location[Location Selection]
        Redundancy[Redundancy Design]
    end
    subgraph Design[Design Phase]
        BGP[BGP Policy Design]
        VLAN[VLAN Assignment]
        Routing[Routing Architecture]
    end
    subgraph Implementation[Implementation Phase]
        Physical[Physical Setup]
        Logical[Circuit Provisioning]
        Testing[Testing & Validation]
    end
    Planning --> Design
    Design --> Implementation
```
Capacity Planning Matrix
- Current Utilization: Baseline existing ExpressRoute usage
- Growth Projection: 3-year bandwidth requirements
- Burst Capacity: Peak traffic handling capability
- Circuit Segmentation: Workload isolation requirements
| Design Element | Consideration | Impact |
|---|---|---|
| BGP Design | AS-PATH, Local Pref | Traffic engineering |
| VLAN Strategy | Circuit isolation | Security, management |
| Redundancy | Dual circuits, paths | High availability |
ExpressRoute Direct Resource Creation
```shell
# Create ExpressRoute Direct resource
az network express-route port create \
  --name "er-direct-seattle-01" \
  --location "Seattle" \
  --bandwidth 10 \
  --encapsulation "Dot1Q" \
  --peering-location "Seattle"
# Returns: Resource ID, Admin State, Links array

# Verify port creation and status
az network express-route port show \
  --name "er-direct-seattle-01" \
  --query "{adminState:adminState, operationalStatus:links[0].operationalStatus}"

# Check available bandwidth
az network express-route port show \
  --name "er-direct-seattle-01" \
  --query "bandwidthInGbps"
```
| Parameter | Purpose | Values |
|---|---|---|
| bandwidth | Port speed configuration | 10, 100 (Gbps) |
| encapsulation | VLAN tagging method | Dot1Q, QinQ |
| peering-location | Microsoft edge facility | Seattle, Amsterdam, etc. |
Resource Creation Output
- Link IDs: Primary and secondary link identifiers
- Admin State: Enabled/Disabled port control
- Patch Panel: Physical connection details
- Service Key: Microsoft coordination reference
Physical Cross-Connection Setup
```mermaid
sequenceDiagram
    participant Customer
    participant Microsoft
    participant Colo as Colocation Provider
    Customer->>Microsoft: Submit LOA (Letter of Authorization)
    Microsoft->>Colo: Schedule cross-connect installation
    Colo->>Customer: Coordinate fiber installation
    Customer->>Colo: Install customer equipment
    Colo->>Microsoft: Complete cross-connect
    Microsoft->>Customer: Confirm link establishment
    Customer->>Microsoft: Verify optical power levels
```
```shell
# Monitor link status during installation
az network express-route port show \
  --name "er-direct-seattle-01" \
  --query "links[*].{Link:name, State:operationalStatus, Power:rxLightLevel}"

# Enable administrative state after physical connection
az network express-route port update \
  --name "er-direct-seattle-01" \
  --admin-state "Enabled"
```
Physical Layer Verification
- Link State: UP/DOWN operational status
- Optical Power: Receive light level measurements
- Interface Status: Layer 1 connectivity confirmation
- Error Counters: CRC, frame errors monitoring
| Phase | Timeline | Dependencies |
|---|---|---|
| LOA Submission | 1-2 days | Microsoft approval |
| Cross-connect | 5-10 days | Colocation scheduling |
| Testing | 1-2 days | Optical power verification |
Circuit Creation on Direct Ports
```shell
# Create ExpressRoute circuit on Direct port
az network express-route create \
  --name "er-circuit-production" \
  --peering-location "Seattle" \
  --bandwidth 2000 \
  --provider "Microsoft" \
  --sku-family "MeteredData" \
  --sku-tier "Premium" \
  --express-route-port "/subscriptions/.../er-direct-seattle-01"

# Assign VLAN ID to circuit
az network express-route update \
  --name "er-circuit-production" \
  --set expressRoutePort.links[0].vlanId=100

# Create additional circuits on the same port, then assign their VLANs
az network express-route create \
  --name "er-circuit-development" \
  --peering-location "Seattle" \
  --bandwidth 1000 \
  --provider "Microsoft" \
  --express-route-port "/subscriptions/.../er-direct-seattle-01"
az network express-route update \
  --name "er-circuit-development" \
  --set expressRoutePort.links[0].vlanId=200

# Verify circuit provisioning state
az network express-route show \
  --name "er-circuit-production" \
  --query "{state:circuitProvisioningState, bandwidth:serviceProviderProperties.bandwidthInMbps}"
```
| Circuit Parameter | Purpose | Constraints |
|---|---|---|
| bandwidth | Circuit capacity (Mbps) | 50 - port maximum |
| vlanId | Traffic isolation | 1-4094, unique per port |
| express-route-port | Parent port assignment | Must reference existing port |
```mermaid
graph LR
    subgraph DirectPort[ExpressRoute Direct Port - 10 Gbps]
        subgraph Circuits[Circuits]
            Prod[Production Circuit<br/>VLAN 100 - 2 Gbps]
            Dev[Development Circuit<br/>VLAN 200 - 1 Gbps]
            Test[Test Circuit<br/>VLAN 300 - 500 Mbps]
        end
        Available[Available: 6.5 Gbps]
    end
```
BGP Configuration & Basic Routing
```shell
# Configure private peering for circuit
az network express-route peering create \
  --circuit-name "er-circuit-production" \
  --peering-type "AzurePrivatePeering" \
  --peer-asn 65001 \
  --primary-peer-subnet "192.168.100.0/30" \
  --secondary-peer-subnet "192.168.100.4/30" \
  --vlan-id 100 \
  --shared-key "MySharedKey123"

# Configure Microsoft peering for Office 365
# (Microsoft peering requires a registered public ASN; 65001 shown for illustration)
az network express-route peering create \
  --circuit-name "er-circuit-production" \
  --peering-type "MicrosoftPeering" \
  --peer-asn 65001 \
  --primary-peer-subnet "203.0.113.0/30" \
  --secondary-peer-subnet "203.0.113.4/30" \
  --vlan-id 200 \
  --advertised-public-prefixes "203.0.113.0/24"
```
| BGP Parameter | Private Peering | Microsoft Peering |
|---|---|---|
| IP Addressing | RFC 1918 or Public | Public IP required |
| Route Filtering | Azure VNet routes | Service-specific prefixes |
| ASN Requirements | Private/Public ASN | Public ASN required |
```mermaid
graph TB
    subgraph CustomerNetwork[Customer Network - ASN 65001]
        CE[Customer Edge Router]
    end
    subgraph ExpressRoute[ExpressRoute Circuit]
        BGP1[Private Peering<br/>VLAN 100]
        BGP2[Microsoft Peering<br/>VLAN 200]
    end
    subgraph Azure[Azure Services]
        VNet[Virtual Networks]
        O365[Microsoft 365]
    end
    CE --> BGP1
    CE --> BGP2
    BGP1 --> VNet
    BGP2 --> O365
```
Virtual Network Gateway Integration
```shell
# Create ExpressRoute virtual network gateway
az network vnet-gateway create \
  --name "ergw-prod-eastus" \
  --vnet "vnet-prod-eastus" \
  --gateway-type "ExpressRoute" \
  --sku "ErGw3AZ" \
  --public-ip-addresses "pip-ergw-prod-eastus-1" "pip-ergw-prod-eastus-2"

# Create connection to ExpressRoute circuit
az network vpn-connection create \
  --name "conn-er-production" \
  --vnet-gateway1 "ergw-prod-eastus" \
  --express-route-circuit2 "/subscriptions/.../er-circuit-production" \
  --routing-weight 100

# Enable FastPath for high performance
az network vpn-connection update \
  --name "conn-er-production" \
  --enable-fastpath true

# Create redundant connection with different weight
az network vpn-connection create \
  --name "conn-er-secondary" \
  --vnet-gateway1 "ergw-prod-eastus" \
  --express-route-circuit2 "/subscriptions/.../er-circuit-secondary" \
  --routing-weight 50
```
| Gateway SKU | Bandwidth | Connections | FastPath |
|---|---|---|---|
| Standard | 1 Gbps | 4 | No |
| HighPerformance | 2 Gbps | 4 | No |
| UltraPerformance | 10 Gbps | 4 | Yes |
| ErGw3AZ | 10 Gbps | 16 | Yes |
```mermaid
graph TB
    subgraph ExpressRoute[ExpressRoute Direct]
        Circuit1[Primary Circuit<br/>Weight: 100]
        Circuit2[Secondary Circuit<br/>Weight: 50]
    end
    subgraph Azure[Azure Region]
        Gateway[ExpressRoute Gateway<br/>ErGw3AZ - Zone Redundant]
        VNet1[Production VNet]
        VNet2[Development VNet]
    end
    Circuit1 --> Gateway
    Circuit2 --> Gateway
    Gateway --> VNet1
    Gateway --> VNet2
```
FastPath Configuration Benefits
- Bypass Gateway: Direct data plane connectivity
- Latency Reduction: Eliminates gateway processing overhead
- Higher Throughput: Near line-rate performance
- Zone Redundancy: ErGw3AZ provides 99.95% availability
Route Advertisement Management
```shell
# Create route filter for Microsoft peering
az network route-filter create \
  --name "filter-o365-services"

# Add specific service communities
az network route-filter rule create \
  --filter-name "filter-o365-services" \
  --name "allow-exchange-online" \
  --access "Allow" \
  --route-filter-rule-type "Community" \
  --communities "12076:5010"

az network route-filter rule create \
  --filter-name "filter-o365-services" \
  --name "allow-sharepoint-online" \
  --access "Allow" \
  --route-filter-rule-type "Community" \
  --communities "12076:5020"

# Apply route filter to Microsoft peering
az network express-route peering update \
  --circuit-name "er-circuit-production" \
  --name "MicrosoftPeering" \
  --route-filter "/subscriptions/.../filter-o365-services"

# Remove specific service access
az network route-filter rule delete \
  --filter-name "filter-o365-services" \
  --name "allow-sharepoint-online"
```
| BGP Community | Service | Route Count | Access Control |
|---|---|---|---|
| 12076:5010 | Exchange Online | ~20 prefixes | Allow/Deny |
| 12076:5020 | SharePoint Online | ~30 prefixes | Allow/Deny |
| 12076:5030 | Skype for Business | ~15 prefixes | Allow/Deny |
| 12076:5100 | CRM Online | ~10 prefixes | Allow/Deny |
```mermaid
graph LR
    subgraph RouteFilter[Route Filter: o365-services]
        Rule1[Exchange Online<br/>12076:5010<br/>Allow]
        Rule2[SharePoint Online<br/>12076:5020<br/>Allow]
        Rule3[Skype for Business<br/>12076:5030<br/>Deny]
    end
    subgraph Services[Microsoft 365 Services]
        Exchange[Exchange Online<br/>Accessible]
        SharePoint[SharePoint Online<br/>Accessible]
        Skype[Skype for Business<br/>Blocked]
    end
    Rule1 --> Exchange
    Rule2 --> SharePoint
    Rule3 --> Skype
```
High Availability Configuration
```shell
# Create primary connection with higher weight
az network vpn-connection create \
  --name "conn-er-primary" \
  --vnet-gateway1 "ergw-prod-eastus" \
  --express-route-circuit2 "/subscriptions/.../er-circuit-primary" \
  --routing-weight 200

# Create secondary connection with lower weight
az network vpn-connection create \
  --name "conn-er-secondary" \
  --vnet-gateway1 "ergw-prod-eastus" \
  --express-route-circuit2 "/subscriptions/.../er-circuit-secondary" \
  --routing-weight 100

# Create zone-redundant gateway for maximum availability
az network vnet-gateway create \
  --name "ergw-prod-zone-redundant" \
  --vnet "vnet-prod-eastus" \
  --gateway-type "ExpressRoute" \
  --sku "ErGw3AZ" \
  --public-ip-addresses "pip-ergw-zr-1" "pip-ergw-zr-2"

# Monitor connection health
az network vpn-connection show \
  --name "conn-er-primary" \
  --query "{status:connectionStatus, ingressBytes:ingressBytesTransferred}"
```
```mermaid
graph TB
    subgraph ExpressRouteDirect[ExpressRoute Direct Multi-Location]
        subgraph PrimaryLocation[Primary Location - Seattle]
            Circuit1[Primary Circuit<br/>Weight: 200<br/>Active]
        end
        subgraph SecondaryLocation[Secondary Location - Silicon Valley]
            Circuit2[Secondary Circuit<br/>Weight: 100<br/>Standby]
        end
    end
    subgraph AzureRegion[Azure East US]
        Gateway[Zone-Redundant Gateway<br/>ErGw3AZ<br/>99.95% SLA]
        subgraph AZs[Availability Zones]
            AZ1[Zone 1]
            AZ2[Zone 2]
            AZ3[Zone 3]
        end
        VNet[Production VNet]
    end
    Circuit1 --> Gateway
    Circuit2 --> Gateway
    Gateway --> AZ1
    Gateway --> AZ2
    Gateway --> AZ3
    Gateway --> VNet
```
| HA Component | Configuration | Failover Time | Availability |
|---|---|---|---|
| Physical Links | Dual fiber connections | Immediate | 99.9% |
| Circuit Redundancy | Multi-location circuits | 30-180 seconds | 99.95% |
| Zone-Redundant Gateway | ErGw3AZ across AZs | 10-30 seconds | 99.95% |
| Connection Weights | Failover preference | Real-time | Path selection |
High Availability Best Practices
- Geographic Redundancy: Circuits in different edge locations
- Weight-Based Failover: Primary/secondary traffic distribution
- Zone Redundancy: Protection against datacenter failures
- Health Monitoring: Automated failure detection and alerting
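For the health-monitoring item above, a simple polling sketch (the connection names are the examples used earlier; a production setup would alert through Azure Monitor instead of an interactive loop):

```shell
# Check status and traffic counters for both redundant connections
for conn in "conn-er-primary" "conn-er-secondary"; do
  az network vpn-connection show \
    --name "$conn" \
    --query "{name:name, status:connectionStatus, egress:egressBytesTransferred}" \
    --output table
done
```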
Advanced Traffic Engineering
```shell
# Create service-specific route filters for traffic engineering
az network route-filter create \
  --name "filter-critical-services"

az network route-filter rule create \
  --filter-name "filter-critical-services" \
  --name "critical-exchange" \
  --access "Allow" \
  --route-filter-rule-type "Community" \
  --communities "12076:5010"

# Create route filter for standard services
az network route-filter create \
  --name "filter-standard-services"

az network route-filter rule create \
  --filter-name "filter-standard-services" \
  --name "standard-sharepoint" \
  --access "Allow" \
  --route-filter-rule-type "Community" \
  --communities "12076:5020"

# Apply filters to different circuits for traffic engineering
az network express-route peering update \
  --circuit-name "er-circuit-primary" \
  --name "MicrosoftPeering" \
  --route-filter "/subscriptions/.../filter-critical-services"

az network express-route peering update \
  --circuit-name "er-circuit-secondary" \
  --name "MicrosoftPeering" \
  --route-filter "/subscriptions/.../filter-standard-services"

# Dynamic filter updates for maintenance scenarios
az network route-filter rule update \
  --filter-name "filter-critical-services" \
  --name "critical-exchange" \
  --access "Deny"
```
| Traffic Class | Services | Circuit Assignment | Priority |
|---|---|---|---|
| Critical | Exchange Online | Primary Circuit | High |
| Business | SharePoint Online | Secondary Circuit | Medium |
| Standard | Teams, Skype | Backup Circuit | Low |
| Archive | OneDrive Sync | Internet Breakout | Best Effort |
```mermaid
graph TB
    subgraph TrafficEngineering[Advanced Traffic Engineering]
        subgraph Primary[Primary Circuit - High Performance]
            CriticalFilter[Critical Services Filter<br/>Exchange Online: 12076:5010]
        end
        subgraph Secondary[Secondary Circuit - Standard]
            StandardFilter[Standard Services Filter<br/>SharePoint: 12076:5020<br/>Teams: 12076:5030]
        end
        subgraph Backup[Backup Circuit - Best Effort]
            BackupFilter[Backup Services Filter<br/>OneDrive: 12076:5040<br/>Archive: 12076:5050]
        end
    end
    subgraph Services[Microsoft 365 Services]
        Exchange[Exchange Online<br/>Mission Critical]
        SharePoint[SharePoint Online<br/>Business Critical]
        Teams[Microsoft Teams<br/>Standard]
        OneDrive[OneDrive Sync<br/>Background]
    end
    CriticalFilter --> Exchange
    StandardFilter --> SharePoint
    StandardFilter --> Teams
    BackupFilter --> OneDrive
```
Dynamic Traffic Engineering
- Service-Based Routing: BGP communities for granular control
- Real-Time Updates: Filter modifications without downtime
- Maintenance Windows: Temporary traffic redirection
- Performance Optimization: Critical services on premium circuits
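The service-to-circuit mapping driven by the filters above can be sketched in Python. The community values for Exchange (12076:5010), SharePoint (12076:5020), and Teams/Skype (12076:5030) are Microsoft's published BGP service communities; the selection function and circuit names are illustrative, not an Azure API.

```python
# Map Microsoft 365 BGP community values to the circuits whose route
# filters allow them, mirroring the CLI configuration above.
COMMUNITY_TO_CIRCUIT = {
    "12076:5010": "er-circuit-primary",    # Exchange Online - critical
    "12076:5020": "er-circuit-secondary",  # SharePoint Online - standard
    "12076:5030": "er-circuit-secondary",  # Teams / Skype - standard
}

def circuit_for_route(communities: list[str],
                      default: str = "internet-breakout") -> str:
    """Pick the circuit whose filter allows one of the route's communities."""
    for community in communities:
        if community in COMMUNITY_TO_CIRCUIT:
            return COMMUNITY_TO_CIRCUIT[community]
    return default  # no filter matched: fall back to internet breakout

print(circuit_for_route(["12076:5010"]))  # er-circuit-primary
print(circuit_for_route(["12076:5099"]))  # internet-breakout
```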
Security and Access Control
# Configure private endpoint for storage account
az network private-endpoint create \
--name "pe-storage-prod" \
--vnet-name "vnet-prod-eastus" \
--subnet "snet-private-endpoints" \
--private-connection-resource-id "/subscriptions/.../storageAccounts/prodstorageacct" \
--group-id "blob" \
--connection-name "storage-private-connection"
# Create private DNS zone for name resolution
az network private-dns zone create \
--name "privatelink.blob.core.windows.net"
az network private-dns link vnet create \
--zone-name "privatelink.blob.core.windows.net" \
--name "storage-dns-link" \
--virtual-network "vnet-prod-eastus" \
--registration-enabled false
# Network Security Group rules for ExpressRoute traffic
az network nsg rule create \
--nsg-name "nsg-expressroute-subnet" \
--name "AllowExpressRouteInbound" \
--access "Allow" \
--direction "Inbound" \
--priority 100 \
--source-address-prefixes "10.0.0.0/8" \
--destination-port-ranges "443" "80" \
--protocol "Tcp"
# Force tunneling through ExpressRoute
# (for ExpressRoute, the 0.0.0.0/0 default route must also be advertised
# from on-premises over private peering for this next hop to carry traffic)
az network route-table route create \
--route-table-name "rt-expressroute" \
--name "RouteToOnPremises" \
--address-prefix "0.0.0.0/0" \
--next-hop-type "VirtualNetworkGateway"
| Security Control | Function | Implementation | Scope |
|---|---|---|---|
| Route Filtering | Service access control | BGP community filtering | Circuit level |
| Private Endpoints | PaaS service protection | Private IP connectivity | Service level |
| NSG Rules | Traffic filtering | Subnet/NIC level controls | Network level |
| Forced Tunneling | Internet traffic control | UDR to ExpressRoute | VNet level |
graph TB
subgraph OnPremises[On-Premises Network]
Users[Corporate Users]
Firewall[Corporate Firewall]
end
subgraph ExpressRoute[ExpressRoute Direct]
PrivatePeering[Private Peering
Route Filtered]
end
subgraph Azure[Azure VNet Security Layers]
NSG[Network Security Groups
Port/Protocol Control]
RouteTable[User Defined Routes
Forced Tunneling]
subgraph Services[Azure Services]
VM[Virtual Machines
Protected by NSG]
Storage[Storage Account
Private Endpoint]
SQL[SQL Database
Private Endpoint]
end
end
Users --> Firewall
Firewall --> PrivatePeering
PrivatePeering --> NSG
NSG --> RouteTable
RouteTable --> VM
RouteTable --> Storage
RouteTable --> SQL
Defense in Depth Security
- Network Segmentation: Separate subnets for different security zones
- Private Connectivity: Eliminate internet exposure for PaaS services
- Traffic Inspection: Force tunneling through security appliances
- Least Privilege: Minimal necessary route advertisements and access
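The NSG layer above evaluates rules in ascending priority order and stops at the first match, with an implicit DenyAllInbound if nothing matches. A minimal sketch, with illustrative rules (only the first mirrors the CLI example):

```python
import ipaddress

# First-match-wins evaluation in ascending priority order; Azure applies
# an implicit DenyAllInbound when no rule matches.
RULES = [
    {"name": "AllowExpressRouteInbound", "priority": 100,
     "ports": {443, 80}, "source": "10.0.0.0/8", "access": "Allow"},
    {"name": "DenyRDP", "priority": 200,
     "ports": {3389}, "source": "*", "access": "Deny"},
]

def evaluate(rules, src_ip: str, port: int) -> str:
    for rule in sorted(rules, key=lambda r: r["priority"]):
        src_ok = (rule["source"] == "*" or
                  ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"]))
        if src_ok and port in rule["ports"]:
            return rule["access"]
    return "Deny"  # implicit DenyAllInbound

print(evaluate(RULES, "10.1.2.3", 443))   # Allow - on-premises range, port 443
print(evaluate(RULES, "192.0.2.9", 443))  # Deny - falls through to implicit deny
```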
Monitoring and Diagnostics
# Enable Connection Monitor for end-to-end testing
az network watcher connection-monitor create \
--name "cm-onprem-to-azure" \
--source-resource "/subscriptions/.../virtualMachines/vm-onprem-test" \
--dest-resource "/subscriptions/.../virtualMachines/vm-azure-test" \
--dest-port 443 \
--monitor-interval 60
# Configure ExpressRoute diagnostics
az monitor diagnostic-settings create \
--name "er-diagnostics" \
--resource "/subscriptions/.../expressRouteCircuits/er-circuit-production" \
--workspace "/subscriptions/.../workspaces/law-networking" \
--metrics '[{
"category": "AllMetrics",
"enabled": true,
"retentionPolicy": {"enabled": false, "days": 0}
}]'
# Deploy Network Watcher agent extension for performance monitoring
$vm = Get-AzVM -ResourceGroupName "rg-networking-prod" -Name "vm-monitoring-agent"
Set-AzVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name `
-Name "NetworkWatcherAgent" -Publisher "Microsoft.Azure.NetworkWatcher" `
-ExtensionType "NetworkWatcherAgentWindows" -TypeHandlerVersion "1.4"
# Query BGP route stability in Log Analytics
# KQL query for route analysis
AzureNetworkAnalytics_CL
| where TimeGenerated > ago(1h)
| where SubType_s == "Topology"
| where Resource contains "ExpressRoute"
| summarize RouteCount = count() by bin(TimeGenerated, 5m)
| render timechart
| Monitoring Tool | Metrics Collected | Alerting Capability | Retention |
|---|---|---|---|
| Connection Monitor | Latency, packet loss, topology | Threshold-based alerts | 30 days |
| ExpressRoute Insights | Bandwidth, availability, QoS | Performance degradation | 90 days |
| Network Watcher | Flow logs, packet capture | Anomaly detection | Configurable |
| Log Analytics | BGP events, performance | Custom KQL queries | 2 years |
graph TB
subgraph Monitoring[Comprehensive Monitoring Architecture]
subgraph Collection[Data Collection]
ConnMon[Connection Monitor
E2E Testing]
NetWatch[Network Watcher
Flow Analysis]
Diagnostics[Diagnostic Settings
Metrics & Logs]
end
subgraph Analytics[Analytics & Storage]
LAW[Log Analytics Workspace
Centralized Logging]
Insights[ExpressRoute Insights
Performance Dashboard]
end
subgraph Alerting[Alerting & Response]
Alerts[Azure Monitor Alerts
Threshold-based]
Automation[Azure Automation
Response Actions]
Notifications[Action Groups
Email/SMS/Webhook]
end
end
Collection --> LAW
LAW --> Insights
LAW --> Alerts
Alerts --> Automation
Alerts --> Notifications
Key Performance Indicators
- Circuit Availability: 99.95% uptime target with alerting
- Latency Monitoring: Round-trip time baselines and anomaly detection
- Bandwidth Utilization: Threshold alerting at 80% capacity
- BGP Route Stability: Route flap detection and convergence monitoring
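The 80% utilization threshold from the KPIs above reduces to simple arithmetic over a byte-counter delta, the way an Azure Monitor metric alert evaluates a window. A sketch with illustrative figures:

```python
def utilization_pct(bytes_in_interval: int, interval_seconds: int,
                    circuit_bps: int) -> float:
    """Average utilization over the window as a percentage of circuit capacity."""
    return bytes_in_interval * 8 / interval_seconds / circuit_bps * 100

# 10 Gbps circuit, 5-minute window, 320 GB transferred
pct = utilization_pct(320 * 10**9, 300, 10 * 10**9)
print(f"{pct:.1f}% utilized, alert: {pct > 80}")  # 85.3% utilized, alert: True
```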
Performance Optimization
# Enable FastPath for maximum performance
# (FastPath is enabled via the gateway-bypass flag on the connection)
az network vpn-connection update \
--name "conn-er-production" \
--express-route-gateway-bypass true
# Verify FastPath configuration
az network vpn-connection show \
--name "conn-er-production" \
--query "{fastPath:expressRouteGatewayBypass, status:connectionStatus}"
# Configure gateway for optimal performance
az network vnet-gateway update \
--name "ergw-prod-eastus" \
--sku "ErGw3AZ"
# TCP optimization commands for Windows
# (chimney offload and NetDMA are deprecated and ignored on Windows Server 2012+)
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global rsc=enabled
# Linux TCP optimization parameters
echo 'net.core.rmem_default = 262144' >> /etc/sysctl.conf
echo 'net.core.rmem_max = 16777216' >> /etc/sysctl.conf
echo 'net.core.wmem_default = 262144' >> /etc/sysctl.conf
echo 'net.core.wmem_max = 16777216' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem = 4096 65536 16777216' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 4096 65536 16777216' >> /etc/sysctl.conf
sysctl -p
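The 16 MB buffer maxima above are sized to cover the bandwidth-delay product (BDP): the TCP window must hold a full round-trip's worth of data to keep a long fat pipe busy. A quick check with an illustrative 10 ms RTT (substitute your measured value):

```python
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return bandwidth_bps * rtt_seconds / 8

# 10 Gbps circuit with a 10 ms round-trip time
bdp = bdp_bytes(10 * 10**9, 0.010)
print(f"BDP: {bdp / 2**20:.1f} MiB")  # ~11.9 MiB, within the 16 MiB tcp_rmem max
```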
| Optimization Technique | Performance Impact | Implementation | Requirement |
|---|---|---|---|
| FastPath | 50% latency reduction | Gateway bypass | ErGw3AZ or UltraPerformance |
| TCP Window Scaling | 3x throughput increase | OS-level configuration | Client/server optimization |
| Connection Coalescing | 20% overhead reduction | Application optimization | Application modification |
| Multiple Streams | Near line-rate utilization | Parallel connections | Multi-threaded applications |
graph LR
subgraph Performance[Performance Optimization Stack]
subgraph Network[Network Layer - Azure]
FastPath[FastPath Enabled
Gateway Bypass]
Gateway[ErGw3AZ
10 Gbps Capacity]
Circuits[Multiple Circuits
Load Distribution]
end
subgraph Transport[Transport Layer - TCP]
WindowScaling[TCP Window Scaling
Large Buffers]
Coalescing[Connection Coalescing
Reduced Overhead]
Offload[TCP Offload Engine
Hardware Acceleration]
end
subgraph Application[Application Layer]
MultiStream[Multiple Streams
Parallel Processing]
Caching[Intelligent Caching
Reduced Latency]
Compression[Data Compression
Bandwidth Efficiency]
end
end
Network --> Transport
Transport --> Application
Performance Benchmarks
- Single TCP Stream: Up to 2 Gbps throughput with optimization
- Multiple Streams: Near line-rate utilization (8-9 Gbps on 10G)
- Latency with FastPath: 30-50% reduction compared to standard gateway
- Application Performance: 50-80% improvement with full optimization
Troubleshooting Methodology
# Physical layer diagnostics
az network express-route port show \
--name "er-direct-seattle-01" \
--query "links[*].{Name:name, State:operationalStatus, Power:rxLightLevel}"
# Circuit and BGP session diagnostics
az network express-route show \
--name "er-circuit-production" \
--query "{provisioningState:circuitProvisioningState, serviceState:serviceProviderProvisioningState}"
az network express-route peering show \
--circuit-name "er-circuit-production" \
--name "AzurePrivatePeering" \
--query "{state:peeringState, primaryStatus:primaryPeerAddressPrefix}"
# Network connectivity and performance testing
# Packet capture for traffic analysis
# (a storage account or local file path is required for the capture output)
az network watcher packet-capture create \
--vm "/subscriptions/.../virtualMachines/vm-test" \
--name "er-troubleshooting-capture" \
--storage-account "prodstorageacct" \
--time-limit 300 \
--filters '[{"protocol":"TCP", "localPort":"443", "remotePort":"443"}]'
# Connection troubleshooting
az network watcher connection-troubleshoot start \
--source-resource "/subscriptions/.../virtualMachines/vm-source" \
--dest-resource "/subscriptions/.../virtualMachines/vm-dest" \
--dest-port 443
| Issue Category | Symptoms | Azure Diagnostic Tools | Resolution Approach |
|---|---|---|---|
| Physical Layer | Link down, optical power low | Port status, rxLightLevel | Fiber inspection, cross-connect verification |
| Circuit Provisioning | Circuit not provisioned | circuitProvisioningState | Microsoft support ticket, configuration review |
| BGP Issues | Routes missing, peering down | peeringState, route tables | BGP configuration verification, filter analysis |
| Connectivity | Packet loss, timeouts | Connection Monitor, packet capture | Path analysis, NSG rules, routing verification |
graph TB
subgraph Troubleshooting[Systematic Troubleshooting Workflow]
subgraph Layer1[Physical Layer Verification]
PortStatus[Port Operational Status
az network express-route port show]
OpticalPower[Optical Power Levels
rxLightLevel > -15 dBm]
CrossConnect[Cross-Connect Verification
LOA Status Check]
end
subgraph Layer3[Circuit and BGP Analysis]
CircuitState[Circuit Provisioning State
az network express-route show]
BGPState[BGP Peering State
az network express-route peering show]
RouteAnalysis[Route Table Analysis
Effective routes verification]
end
subgraph Application[Connectivity Testing]
ConnMonitor[Connection Monitor
End-to-end testing]
PacketCapture[Packet Capture
Traffic flow analysis]
PerfTest[Performance Testing
Bandwidth and latency]
end
end
Layer1 --> Layer3
Layer3 --> Application
Common Issues and Azure Solutions
- Circuit Provisioning Delays: Monitor circuitProvisioningState and serviceProviderProvisioningState
- BGP Session Issues: Verify peeringState and peer address configuration
- Route Filtering Problems: Check route filter rules and community values
- Performance Issues: Enable FastPath and verify gateway SKU requirements
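The layered triage order above (physical, then circuit, then BGP, then end-to-end) can be expressed as a short decision function. State strings mirror the CLI query fields; the inputs and messages are illustrative:

```python
def triage(port_state: str, circuit_state: str,
           peering_state: str, probe_ok: bool) -> str:
    """Walk the layers bottom-up and report the first failing one."""
    if port_state != "Up":
        return "Physical Layer: inspect fiber and verify cross-connect"
    if circuit_state != "Enabled":
        return "Circuit Provisioning: review configuration, open support ticket"
    if peering_state != "Enabled":
        return "BGP Issues: verify peering and route filter configuration"
    if not probe_ok:
        return "Connectivity: run Connection Monitor and packet capture"
    return "Healthy"

print(triage("Up", "Enabled", "Disabled", True))  # BGP layer flagged first
```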
Automation and Infrastructure as Code
# ARM template for ExpressRoute Direct deployment
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"expressRoutePortName": {"type": "string"},
"location": {"type": "string", "defaultValue": "[resourceGroup().location]"},
"peeringLocation": {"type": "string"},
"bandwidthInGbps": {"type": "int", "allowedValues": [10, 100]}
},
"resources": [{
"type": "Microsoft.Network/expressRoutePorts",
"apiVersion": "2023-02-01",
"name": "[parameters('expressRoutePortName')]",
"location": "[parameters('location')]",
"properties": {
"peeringLocation": "[parameters('peeringLocation')]",
"bandwidthInGbps": "[parameters('bandwidthInGbps')]",
"encapsulation": "Dot1Q"
}
}]
}
# Bicep template for modern IaC deployment
param expressRoutePortName string
param location string = resourceGroup().location
param peeringLocation string
param bandwidthInGbps int
resource expressRoutePort 'Microsoft.Network/expressRoutePorts@2023-02-01' = {
name: expressRoutePortName
location: location
properties: {
peeringLocation: peeringLocation
bandwidthInGbps: bandwidthInGbps
encapsulation: 'Dot1Q'
}
}
output portResourceId string = expressRoutePort.id
output provisioningState string = expressRoutePort.properties.provisioningState
# Terraform configuration for multi-cloud scenarios
resource "azurerm_express_route_port" "main" {
name = var.express_route_port_name
resource_group_name = var.resource_group_name
location = var.location
peering_location = var.peering_location
bandwidth_in_gbps = var.bandwidth_in_gbps
encapsulation = "Dot1Q"
link1 {
admin_enabled = true
}
link2 {
admin_enabled = true
}
tags = var.tags
}
# PowerShell deployment automation
$deploymentParams = @{
Name = "er-direct-deployment"
ResourceGroupName = "rg-networking-prod"
TemplateFile = "expressroute-direct.bicep"
expressRoutePortName = "er-direct-prod-01"
peeringLocation = "Seattle"
bandwidthInGbps = 10
}
New-AzResourceGroupDeployment @deploymentParams
| IaC Tool | Strengths | Use Cases | Learning Curve |
|---|---|---|---|
| ARM Templates | Native Azure integration | Azure-only deployments | Medium |
| Bicep | Simplified syntax, IntelliSense | Modern Azure deployments | Low |
| Terraform | Multi-cloud, mature ecosystem | Hybrid cloud scenarios | Medium |
| PowerShell | Scripting flexibility, Azure integration | Operational automation | Low |
graph LR
subgraph DevOps[Infrastructure DevOps Pipeline]
subgraph Source[Source Control]
Git[Git Repository
Templates & Scripts]
Branching[Feature Branches
Environment Configs]
end
subgraph Pipeline[CI/CD Pipeline]
Validate[Template Validation
Syntax & Policy Check]
Test[Infrastructure Testing
What-If Analysis]
Deploy[Staged Deployment
Dev → Test → Prod]
end
subgraph Environments[Target Environments]
Dev[Development
Sandbox Testing]
Test[Testing
Integration Validation]
Prod[Production
Live Workloads]
end
end
Source --> Pipeline
Pipeline --> Dev
Pipeline --> Test
Pipeline --> Prod
IaC Best Practices
- Version Control: All templates and parameters in Git with proper branching
- Validation Gates: Automated testing and policy compliance checks
- Environment Parity: Consistent configurations across environments
- Idempotency: Safe to run multiple times with predictable outcomes
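Idempotency in practice means computing a desired-vs-actual diff so a rerun with no drift is a no-op, which is what ARM/Bicep what-if analysis reports. A minimal sketch; the property names follow the port resource above, the comparison itself is generic:

```python
desired = {"peeringLocation": "Seattle", "bandwidthInGbps": 10, "encapsulation": "Dot1Q"}
actual  = {"peeringLocation": "Seattle", "bandwidthInGbps": 10, "encapsulation": "QinQ"}

def diff(desired: dict, actual: dict) -> dict:
    """Return {property: (current, target)} for every drifted property."""
    return {k: (actual.get(k), v) for k, v in desired.items() if actual.get(k) != v}

changes = diff(desired, actual)
print(changes or "no changes - deployment is a no-op")
# {'encapsulation': ('QinQ', 'Dot1Q')}
```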
Migration and Transition Strategies
# Parallel deployment strategy - Create new ExpressRoute Direct circuit
# (port-based circuits take bandwidth directly on the port; --provider and
# --peering-location apply only to provider-based circuits)
az network express-route create \
--name "er-circuit-direct-new" \
--express-route-port "/subscriptions/.../er-direct-seattle-01" \
--bandwidth 5 Gbps \
--sku-family "MeteredData" \
--sku-tier "Premium"
# Create connection with initially lower weight for testing
az network vpn-connection create \
--name "conn-er-direct-new" \
--vnet-gateway1 "ergw-prod-eastus" \
--express-route-circuit2 "/subscriptions/.../er-circuit-direct-new" \
--routing-weight 50
# Shift preference toward the new circuit - the gateway prefers the
# connection with the higher routing weight; equal weights load-share
az network vpn-connection update \
--name "conn-er-direct-new" \
--routing-weight 100
# Monitor traffic distribution during migration
az network vpn-connection show \
--name "conn-er-direct-new" \
--query "{weight:routingWeight, status:connectionStatus, egress:egressBytesTransferred}"
# Final cutover - Set primary weight and reduce legacy
az network vpn-connection update \
--name "conn-er-direct-new" \
--routing-weight 200
az network vpn-connection update \
--name "conn-er-legacy" \
--routing-weight 10
# Cleanup phase - Remove legacy connection after validation
az network vpn-connection delete \
--name "conn-er-legacy" \
--no-wait
# Validate migration success
az network watcher connection-troubleshoot start \
--source-resource "/subscriptions/.../virtualMachines/vm-test" \
--dest-resource "/subscriptions/.../virtualMachines/vm-target" \
--dest-port 443
| Migration Phase | Duration | Risk Level | Rollback Time | Validation Method |
|---|---|---|---|---|
| Parallel Setup | 1-2 weeks | Low | Immediate | Circuit provisioning state |
| BGP Configuration | 2-3 days | Low | Immediate | Route table verification |
| Traffic Testing | 1 week | Medium | 5 minutes | Connection Monitor |
| Gradual Migration | 4-8 hours | Medium | 2-5 minutes | Real-time monitoring |
| Legacy Cleanup | 1-2 days | Low | 1-2 hours | End-to-end testing |
graph TB
subgraph Migration[Zero-Downtime Migration Strategy]
subgraph Phase1[Phase 1: Parallel Deployment]
Setup[ExpressRoute Direct Setup
New circuit provisioning]
BGPConfig[BGP Configuration
Peering establishment]
Testing[Connectivity Testing
Weight: 50]
end
subgraph Phase2[Phase 2: Gradual Migration]
Weight100[Increase Weight to 100
Monitor traffic distribution]
Weight200[Set Primary Weight 200
Reduce legacy to 10]
Validate[Continuous Validation
Performance monitoring]
end
subgraph Phase3[Phase 3: Completion]
Cleanup[Legacy Connection Removal
Resource decommissioning]
FinalTest[Final Validation
End-to-end testing]
Documentation[Update Documentation
Runbook completion]
end
end
Phase1 --> Phase2
Phase2 --> Phase3
Migration Success Criteria
- Zero Downtime: No service interruption during migration process
- Performance Validation: Baseline metrics maintained or improved
- Rapid Rollback: Ability to revert within 5 minutes if issues occur
- Complete Testing: End-to-end validation before legacy decommission
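The routing weights used during the migration determine path preference: the gateway sends traffic over the connection with the highest weight, and equal weights load-share. A sketch of the cutover state from the CLI steps above (connection names mirror those examples):

```python
def preferred_connections(connections: dict[str, int]) -> list[str]:
    """Return the connection name(s) carrying traffic: highest weight wins,
    ties load-share across the tied connections."""
    top = max(connections.values())
    return [name for name, weight in connections.items() if weight == top]

# Final cutover: new circuit at weight 200, legacy reduced to 10
weights = {"conn-er-direct-new": 200, "conn-er-legacy": 10}
print(preferred_connections(weights))  # ['conn-er-direct-new']
```

Rollback is the inverse operation: raising the legacy connection's weight above 200 immediately flips preference back, which is what makes the 2-5 minute rollback window in the table achievable.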