Generally Available: Azure SQL updates for late-March 2026

In late-March 2026, the following updates and enhancements were made to Azure SQL:

  • Configure built-in SQL code analysis rules and severity settings without editing project XML.
  • Use Fabric connectivity and provisioning options in the MSSQL extension to connect to your Fabric workspaces and databases, and to create new databases.
  • Use Data-tier Application in the MSSQL extension to import and export .dacpac and .bacpac files.

Public Preview: Blue-green agent pool upgrade in AKS
In‑place node pool upgrades can introduce risk by applying changes directly to running environments. Blue‑green agent pool upgrades create a parallel node pool with the new configuration, allowing validation before workloads are shifted and providing a clear rollback path. This reduces upgrade risk and supports more controlled cluster lifecycle management.

Generally Available: Cosmos DB Mirroring in Microsoft Fabric with private endpoints
We're excited to announce that private endpoint support for Azure Cosmos DB Mirroring in Microsoft Fabric is now generally available. You can now unlock powerful analytics on your operational data while keeping your enhanced network security posture intact.

Now you can configure virtual network restrictions, set up private endpoints, and help ensure your data stays protected within private network boundaries as data replicates to your Fabric workspace. You can use the Azure Cosmos DB network access control list capability to authorize specific Fabric workspace IDs for trusted connections to your Cosmos DB account.

This capability is essential for regulated industries and organizations handling sensitive data where network isolation is required. Security teams can maintain strict network controls, while you have the freedom to build real-time dashboards, train AI models, and run advanced analytics on your operational data within a secure boundary.

Public Preview: Fabric Mirroring integration for Azure Database for MySQL
We are excited to announce the public preview of Fabric Mirroring integration for Azure Database for MySQL – Flexible Server. You can now replicate MySQL operational data into Microsoft Fabric in near real time, without building or maintaining ETL pipelines. Mirrored data lands in OneLake in Delta Parquet format and is immediately available for analytics across Fabric experiences.

Generally Available: Online migration to Azure Database for PostgreSQL now uses the pgoutput plugin
You can now leverage pgoutput for online (minimal-downtime) migrations with improved reliability and performance. By aligning with PostgreSQL’s native logical replication framework, this update improves ecosystem compatibility with modern PostgreSQL deployments and reduces dependency on legacy decoding mechanisms, helping ensure continued alignment as PostgreSQL versions evolve.

Generally Available: PostgreSQL migration service supports migrations from Google AlloyDB into Azure Database for PostgreSQL
We’re excited to announce that Google AlloyDB is now supported as a source for migration into Azure Database for PostgreSQL. You can migrate and consolidate PostgreSQL estates from Google AlloyDB to Azure using secure, reliable workflows designed for minimal downtime, while maintaining native PostgreSQL compatibility end to end.

Generally Available: PostgreSQL migration service supports migrating compatible EDB workloads into Azure Database for PostgreSQL
We’re excited to announce that EDB PostgreSQL is now supported as a source for migration into Azure Database for PostgreSQL. You can migrate and consolidate PostgreSQL estates from EDB Postgres Extended Server to Azure using secure, reliable workflows designed for minimal downtime, while maintaining native PostgreSQL compatibility end to end.

Generally Available: Custom time zone support for pg_cron via cron.timezone in Azure Database for PostgreSQL
You can now modify the cron.timezone server parameter in Azure Database for PostgreSQL. This parameter controls the time zone used by pg_cron when evaluating scheduled jobs. By configuring cron.timezone, you can ensure scheduled jobs run according to your preferred time zone rather than relying on the server’s default setting.

This update is especially useful for applications that need job execution aligned with regional business hours or specific operational time zones. You can update the parameter through the Azure portal, Azure CLI, Azure Resource Manager, or REST API, and the change will apply to newly scheduled pg_cron jobs.
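To see why the setting matters, here is a small Python sketch of how the same daily pg_cron schedule maps to different UTC instants depending on the configured time zone and daylight saving time. The zone name, dates, and function name are illustrative, not tied to any real server:

```python
# Sketch: how cron.timezone changes when a daily pg_cron job actually fires.
# A schedule of "0 9 * * *" means 09:00 *in the configured time zone*.
from datetime import datetime
from zoneinfo import ZoneInfo

def fire_time_utc(job_date, hour, minute, cron_tz):
    """Return the UTC instant at which an HH:MM daily job fires on job_date."""
    local = datetime(job_date.year, job_date.month, job_date.day,
                     hour, minute, tzinfo=ZoneInfo(cron_tz))
    return local.astimezone(ZoneInfo("UTC"))

winter = fire_time_utc(datetime(2026, 1, 15), 9, 0, "America/New_York")
summer = fire_time_utc(datetime(2026, 7, 15), 9, 0, "America/New_York")
print(winter.hour, summer.hour)  # 14 (EST, UTC-5) vs 13 (EDT, UTC-4)
```

The same "0 9 * * *" job drifts by an hour in UTC across the DST boundary, which is exactly the behavior you control by setting cron.timezone instead of relying on the server default.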

Public Preview: Azure SQL Managed Instance change event streaming
You can now stream row‑level data changes—inserts, updates, and deletes—from Azure SQL Managed Instance to Azure Event Hubs in near real time with change event streaming (CES). SQL publishes changes as transactions commit, reducing latency while minimizing operational overhead on your workload.

CES simplifies real‑time integration by eliminating the need for change data capture, change tracking, custom polling, or external connectors. You configure streaming once at the database layer, and SQL handles reliability, retries, and log coordination. Events are emitted in structured JSON using the CloudEvents standard, allowing a single stream to fan out to multiple downstream consumers without increasing load on your SQL Managed Instance.
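As a sketch of what a downstream consumer sees, the snippet below parses one such event. The envelope fields (specversion, id, source, type, data) come from the CloudEvents 1.0 specification; the shape of the data payload and the source path are hypothetical illustrations, not the documented CES schema:

```python
import json

# Hypothetical change event in a CloudEvents 1.0 JSON envelope.
event_json = """{
  "specversion": "1.0",
  "id": "b3f9-0001",
  "source": "/sqlmi/mydb/dbo/Orders",
  "type": "insert",
  "data": {"operation": "insert", "row": {"OrderId": 42, "Status": "new"}}
}"""

event = json.loads(event_json)
assert event["specversion"] == "1.0"           # required CloudEvents attribute
table = event["source"].rsplit("/", 1)[-1]     # route by originating table
print(table, event["data"]["row"]["OrderId"])  # Orders 42
```

Because every event carries its origin in the envelope, a single Event Hubs stream can be fanned out and routed per table by consumers without any extra query load on the managed instance.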

With CES, you can build event‑driven microservices, enable real‑time analytics, keep caches and search indexes in sync, and continuously ingest data into streaming and analytics platforms—all without changing your application code. We encourage you to try the public preview and share feedback as we continue to refine the feature.

Generally Available: Container network metrics filtering for AKS
Network observability can generate large volumes of metrics, making it difficult for teams to focus on data that is operationally relevant. Container network metrics filtering in Azure Container Networking Services (ACNS) allows operators to control which container‑level network metrics are collected using Kubernetes custom resources, with filters applied dynamically. This helps teams reduce monitoring noise, manage data volumes, and keep dashboards focused on actionable signals.

Public Preview: AI Agent for container networking troubleshooting
Troubleshooting Kubernetes networking issues is often slowed by logs and metrics scattered across multiple tools, forcing engineers to manually correlate signals during incidents. The container networking agent provides a lightweight, web-based interface that translates natural-language problem descriptions into read-only diagnostics using live cluster telemetry and orchestrates safe diagnostic workflows. By consolidating networking insights and summarizing findings with structured remediation guidance, the agent reduces investigation time and improves troubleshooting consistency without making configuration changes.

Public Preview: Cross-cluster networking in Azure Kubernetes Fleet Manager
Organizations running applications across multiple Kubernetes clusters often face challenges with performance, global service discovery, and observability due to the complexity of distributed microservice environments. Azure Kubernetes Fleet Manager now provides cross-cluster networking, delivering a managed Cilium cluster mesh that simplifies configuration and centralizes management of multi-cluster networking. This capability enables unified connectivity across AKS clusters, creates a global service registry for cross-cluster service discovery, and supports intelligent routing.

For technical practitioners, this reduces operational overhead by removing the need to manually configure a Cilium mesh, improves scalability and resilience for multi‑cluster deployments, and offers unified observability through shared network metrics and flow logs. It also provides efficient packet processing and reduced latency through eBPF‑based networking.

Public Preview: AKS managed GPU metrics in Azure Monitor
Teams running GPU‑backed workloads often lack integrated visibility into GPU utilization alongside Kubernetes metrics. AKS managed GPU metrics automatically expose performance and utilization data from NVIDIA GPU‑enabled node pools into managed Prometheus and Grafana environments. This brings GPU telemetry into the same observability stack as cluster metrics, supporting capacity planning and operational monitoring without manual exporter setup.

Generally Available: Container network logs in AKS
Networking issues in Kubernetes environments can be difficult to diagnose due to limited visibility into traffic flows and insufficient context around failures. Container network logs in Azure Kubernetes Service (AKS), now generally available, provide context‑rich visibility into network flows by capturing detailed per‑flow metadata such as IPs, ports, namespaces, pod and service names, flow direction, and policy verdicts across L3/L4 and supported Layer 7 protocols including HTTP, gRPC, and Kafka. This capability supports both stored logs for continuous, filter-based collection and on-demand logs for targeted snapshots.

The new network monitoring experience in Azure Monitor for AKS makes this data easier to consume through out-of-the-box visualizations, pre-built Azure Monitor and Grafana dashboards, and single‑click onboarding in the Azure portal. These enhancements help technical practitioners accelerate troubleshooting, improve insight into traffic patterns, and simplify analysis of issues such as packet loss, DNS failures, and connection errors.

Generally Available: Azure Container Storage v2.1.0 now with Elastic SAN integration and on-demand installation
Containerized workloads often require higher and more consistent storage performance without managing large numbers of individual disks. Azure Container Storage integration with Elastic SAN allows Kubernetes clusters to consume storage from a shared pool of capacity and performance. This simplifies provisioning, improves utilization, and supports workloads with variable storage demands using a centrally managed storage model.

Public Preview: Microsoft Azure Kubernetes Application Network
As Kubernetes environments scale across regions and clusters, IP‑based networking becomes difficult to manage and provides limited application‑level visibility and security controls. Azure Kubernetes Application Network introduces application‑layer abstractions for Kubernetes traffic, including mutual TLS for pod‑to‑pod communication, application‑aware authorization policies, and detailed traffic telemetry across ingress and in‑cluster communication, with built‑in multi‑region connectivity configured in a single step. This enables teams to apply identity‑aware security and gain deeper traffic insight without deploying or operating a full service mesh, reducing operational overhead while improving consistency and auditability.

Public Preview: Application routing with meshless Istio in AKS
Following the deprecation of ingress‑nginx, Kubernetes operators need a supported, standards‑aligned migration path for ingress without the complexity of a full service mesh. Application Routing with Meshless Istio enables adoption of the Kubernetes Gateway API for ingress management while avoiding sidecar‑based architectures, and Microsoft is extending support for existing ingress‑nginx‑based Application Routing while contributing to the open‑source ingress2gateway project. This allows teams to modernize ingress configurations incrementally, maintain operational continuity, and align with evolving Kubernetes standards.

Generally Available: Azure Monitor Prometheus community recommended alerts for Azure Arc-enabled Kubernetes
Azure Monitor now offers one-click enablement of Prometheus recommended alerts directly in the Azure portal for Azure Arc-enabled Kubernetes clusters. These alerts, based on enhanced Prometheus community rules, provide comprehensive coverage across cluster, node, and pod levels. Previously, enabling these alerts required manual template downloads and CLI deployment.

To use these alerts, your cluster must have Azure Monitor managed service for Prometheus enabled. They serve as the replacement for the legacy Container insights recommended alerts (custom metrics) (preview).

By enabling these alerts, customers will:

  • Receive timely notifications on critical cluster issues.
  • Accelerate triage and troubleshooting with preconfigured signal coverage.
  • Improve cluster reliability and performance with minimal configuration.

Public Preview: Ingest OTLP data into Azure Monitor with the OpenTelemetry Collector
Azure Monitor now supports native ingestion of OpenTelemetry Protocol (OTLP) signals, enabling you to send telemetry data directly from OpenTelemetry-instrumented applications and platforms to Azure Monitor. You can configure your OpenTelemetry Collector to send data directly to Azure Monitor cloud ingestion endpoints to ingest OTLP metrics, logs, and traces using Microsoft Entra for authentication.

You can enable OTLP data ingestion in Azure Monitor using Application Insights or by manually creating the required data collection endpoints, rules, and workspaces. The Application Insights-based approach is recommended for most scenarios because it automates resource creation and includes built-in application performance management experiences.

Ingested OTLP metrics are stored in Azure Monitor Workspaces and can be queried and alerted upon with Prometheus query language (PromQL). OTLP logs and traces are stored in Log Analytics workspaces using new OpenTelemetry tables and semantics.
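For orientation, here is a minimal sketch of the OTLP/JSON log payload shape a collector or SDK exporter would POST to an OTLP endpoint. The field names (resourceLogs, scopeLogs, logRecords) follow the OTLP protobuf-to-JSON mapping; the service name and values are placeholders, not real Azure Monitor settings:

```python
import json

# Minimal OTLP/JSON logs payload. Placeholder values throughout.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "checkout"}}
        ]},
        "scopeLogs": [{
            "scope": {"name": "demo"},
            "logRecords": [{
                "timeUnixNano": "1764000000000000000",
                "severityText": "INFO",
                "body": {"stringValue": "order accepted"}
            }]
        }]
    }]
}

# Round-trip through JSON, then pull the single record back out.
record = json.loads(json.dumps(payload))["resourceLogs"][0]["scopeLogs"][0]["logRecords"][0]
print(record["severityText"], record["body"]["stringValue"])
```

The resource attributes (such as service.name) are what end up distinguishing applications once the records land in the OpenTelemetry tables of a Log Analytics workspace.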
