Announcing the Azure Databricks connector in Power Platform
We are ecstatic to announce the public preview of the Azure Databricks Connector for Power Platform. This native connector is built specifically for Power Apps, Power Automate, and Copilot Studio within Power Platform and enables a seamless, single-click connection. With this connector, your organization can build data-driven, intelligent conversational experiences that leverage the full power of your data within Azure Databricks without any additional custom configuration or scripting – it's all fully built in!

The Azure Databricks connector in Power Platform enables you to:

- Maintain governance: all access controls you set up for data in Azure Databricks are maintained in Power Platform
- Prevent data copy: read and write your data without duplicating it
- Secure your connection: connect Azure Databricks to Power Platform using Microsoft Entra user-based OAuth or service principals
- Get real-time updates: read and write data and see updates in Azure Databricks in near real time
- Build agents with context: ground agents in Azure Databricks knowledge with the full context of your data

Instead of spending time copying or moving data and building custom connections that require additional manual maintenance, you can now connect seamlessly and focus on what matters – getting rich insights from your data – without worrying about security or governance. Let's see how this connector can be beneficial across Power Apps, Power Automate, and Copilot Studio:

- Azure Databricks Connector for Power Apps – Connect to Azure Databricks from Power Apps to enable read/write access to your data directly within canvas apps, letting your organization build data-driven experiences in real time. For example, our retail customers are using this connector to visualize how different placements of items within the store impact revenue.
- Azure Databricks Connector for Power Automate – Execute SQL commands against your data within Azure Databricks with the rich context of your business use case. For example, one of our global retail customers is using automated workflows to track safety incidents, which plays a crucial role in keeping employees safe.
- Azure Databricks as a Knowledge Source in Copilot Studio – Add Azure Databricks as a primary knowledge source for your agents, enabling them to understand, reason over, and respond to user prompts based on data from Azure Databricks.

To get started, all you need to do in Power Apps or Power Automate is add a new connection – that's how simple it is! Check out our demo here and get started using our documentation today! This connector is available in all public cloud regions. You can also learn more about customer use cases in this blog, and review the connector reference here.
How to deploy n8n on Azure App Service and leverage the benefits provided by Azure

Lately, n8n has been gaining serious traction in the automation world, and it's easy to see why. With its open-source core, visual workflow builder, and endless integration capabilities, it has become a favorite for developers and tech teams looking to automate processes without being locked into a single vendor. Given all the buzz, I thought it would be the perfect time to share a practical way to run n8n on Microsoft Azure using App Service. Why? Because Azure offers a solid, scalable, and secure platform that makes deployment easy, while still giving you full control over your container and configuration. Whether you're building a quick demo or setting up a production-ready instance, Azure App Service brings a lot of advantages to the table, like simplified scaling, integrated monitoring, built-in security features, and seamless CI/CD support. In this post, I'll walk you through how to get your own n8n instance up and running on Azure, from creating the resource group to setting up environment variables and deploying the container. If you're into low-code automation and cloud-native solutions, this is a great way to combine both worlds.

The first step is to create our resource group (RG); in my case, I will name it "n8n-rg".

Now we proceed to create the App Service. At this point, it's important to select the appropriate configuration depending on your needs, for example, whether or not you want to include a database. If you choose to include one, Azure will handle the connections for you, and you can select from various types. In my case, I will proceed without a database.

Next, configure the instance details: the instance name, the 'Publish' option, and the 'Operating System'. Here it is important to choose 'Publish: Container', set the operating system to Linux, and, most importantly, select the region closest to you or your clients.

Service plan configuration: select the plan based on your specific needs. Keep in mind that we are using a PaaS offering, which means that underlying compute resources like CPU and RAM are still being utilized, so choose the most appropriate plan for the expected workload. Also consider, very importantly, the features offered by each tier, such as redundancy, backup, autoscaling, and custom domains. In my case, I will use the Basic B1 plan.

In the Database section, we do not select any option. Remember that this will depend on your specific requirements.

In the Container section, under 'Image Source', select 'Other container registries'. For production environments, I recommend using Azure Container Registry (ACR) and pulling the n8n image from there.

Now we will configure the Docker Hub options. This step is related to the previous one, as the available options vary depending on the image source. In our case, we will use the public n8n image from Docker Hub, so we select 'Public' and fill in the required fields: first the server, and second the image name. This step is very important: use the exact same values to avoid issues.

In the Networking section, we will select the values as shown in the image. This configuration will depend on your specific use case, particularly whether to enable Virtual Network (VNet) integration or not. VNet integration is typically used when the App Service needs to securely communicate with private resources (such as databases, APIs, or services) that reside within an Azure Virtual Network.
Since this is a demo environment, we will leave the default settings without enabling VNet integration.

In the 'Monitoring and Security' section, it is essential to enable these features to ensure traceability, observability, and additional security layers; this is considered a minimum requirement in production environments. At the very least, make sure to enable Application Insights by selecting 'Yes'. Finally, click on 'Create' and wait for the deployment process to complete.

Now we will stop our Web App, as we need to make some preliminary modifications. To do this, go to the main overview page of the Web App and click on 'Stop'.

On the same Web App overview page, navigate through the left-hand panel to the 'Settings' section, click on it, and select 'Environment Variables'. Environment variables are key-value pairs that configure the behavior of your application without changing the source code; within Azure Web Apps they work the same way as they do anywhere else. In the case of n8n, they are essential for defining authentication, webhook behavior, port configuration, timezone settings, and more. In this case, we will add the variables required for n8n to operate properly. Note: the APP_SERVICE_STORAGE variable should only be modified by setting it to true.

Once the environment variables have been added, save them by clicking 'Apply' and confirming the changes. A confirmation dialog will appear to finalize the operation.
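If you prefer to script the portal steps above, here is a minimal Azure CLI sketch of the same setup. The resource names and region are placeholders, and the settings list reflects variables commonly used for n8n on App Service; treat it as an illustration and verify the variable names against the ones shown in this post before relying on it.

```bash
# Resource group, Linux Basic B1 plan, and a Web App pulling the public n8n image.
az group create --name n8n-rg --location westeurope
az appservice plan create --name n8n-plan --resource-group n8n-rg --sku B1 --is-linux
az webapp create --name my-n8n-app --resource-group n8n-rg --plan n8n-plan \
    --deployment-container-image-name docker.io/n8nio/n8n:latest

# App settings standing in for the environment variables discussed above.
# These names are the ones commonly used for n8n on App Service -- an
# assumption here, not a transcript of this post's screenshots.
az webapp config appsettings set --name my-n8n-app --resource-group n8n-rg --settings \
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true \
    WEBSITES_PORT=5678 \
    N8N_PROTOCOL=https \
    N8N_HOST=my-n8n-app.azurewebsites.net \
    WEBHOOK_URL=https://my-n8n-app.azurewebsites.net/

az webapp restart --name my-n8n-app --resource-group n8n-rg
```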
Restart the Web App. This second startup may take longer than usual, typically around 5 to 7 minutes, as the environment initializes with the new configuration. Now, as we can see, the application has loaded successfully, and we can start using our own n8n server hosted on Azure. As you can observe, it references the host configured in the App Service. I hope you found this guide helpful and that it serves as a useful resource for deploying n8n on Azure App Service. If you have any questions or need further clarification, feel free to reach out; I'd be happy to help.

Problems configuring federation to SAML IdP

Hi. I'm trying to configure our Entra domain to federate to our existing IdP, following the guidance found here, and am having real problems when it comes to using the Microsoft Graph API in PowerShell. After eventually working out what permissions I needed to request (more than what is stated in the doc), I ran the New-MgDomainFederationConfiguration cmdlet and received the following error:

"FederatedIdpMfaBehavior cannot be empty"

This parameter is not mentioned in the doc either. So I then added that parameter, and got the following:

"Domain already has Federation Configuration set."

But when I run Get-MgDomainFederationConfiguration, I get:

"Resource 'federationConfiguration' does not exist or one of its queried reference-property objects are not present."

When I run Get-MgDomain, AuthenticationType shows as "Federated", but I still see a managed login when I check. So I seem to be stuck with it seemingly half-configured, with no way to view or remove the configuration. Any ideas?

Thanks, Nick
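For readers following along, the call at the center of this question takes roughly the following shape. This is a hedged sketch, not a verified fix: the domain, URIs, and certificate are placeholders, and FederatedIdpMfaBehavior is the parameter the first error message demands.

```powershell
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Placeholder values -- substitute your IdP's actual metadata.
$signingCert = "<base64-encoded signing certificate>"

New-MgDomainFederationConfiguration -DomainId "contoso.com" `
    -DisplayName "Contoso SAML IdP" `
    -IssuerUri "https://idp.contoso.com/saml/metadata" `
    -PassiveSignInUri "https://idp.contoso.com/saml/signin" `
    -SignOutUri "https://idp.contoso.com/saml/signout" `
    -SigningCertificate $signingCert `
    -PreferredAuthenticationProtocol "saml" `
    -FederatedIdpMfaBehavior "acceptIfMfaDoneByFederatedIdp"
```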
Accelerate cloud migrations with confidence - Join the Dr. Migrate Webinar

Join us for a special Teams Live Event: Solution and Services Assessments with Dr Migrate. Eligible Microsoft partners can now request access to Dr Migrate licenses to run environment assessments with customers across segments. Dr Migrate is an AI-assisted cloud migration platform that helps customers optimize their end-to-end migration and modernization planning, from secure discovery to executive-ready business cases, wave plans, and migration and modernization roadmaps. With the new Dr Migrate Collector (DMC), Dr Migrate delivers a comprehensive environment assessment without requiring intrusive agents or compromising customer data security.

🎓 Join this session to learn more about Azure Accelerate and how tools like Azure Migrate and Dr Migrate can give you actionable data and deep solution insights from a customer's IT environment. In this session you'll also learn how to get your teams trained and certified, and how to request a Dr Migrate license, an alternative solution available to partners eligible for support through Azure Accelerate.

Tues, Aug 5, 2025 - 9:00am-10:00am MDT
Tues, Aug 5, 2025 - 6:00pm-7:00pm MDT

*Dr Migrate is available to Azure Expert MSP, Infrastructure & Database, Kubernetes, Migrate to Enterprise Applications, SAP, Analytics, DW Migration, Build AI Apps, Accelerate Developer Productivity, and AI Platform specialized partners.
Swagger Auto-Generation on MCP Server

Would you like to generate a swagger.json directly on an MCP server on the fly? Remote MCP servers are common in practice; in particular, if you're using Azure API Management (APIM), Azure API Center (APIC), or Copilot Studio in Power Platform, integrating with remote MCP servers is inevitable.
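The post above is a teaser, so here is a purely illustrative sketch of the idea. It assumes a Python MCP server built on an ASGI framework such as FastAPI (an assumption, not necessarily what the post uses); FastAPI already materializes an OpenAPI/Swagger document from the registered routes, so the spec can be served on the fly rather than maintained by hand:

```python
from fastapi import FastAPI

app = FastAPI(title="Demo MCP server", version="1.0.0")

# A hypothetical tool endpoint; anything registered here automatically
# appears in the generated OpenAPI document.
@app.get("/tools/weather")
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny"}

# Serve the spec under /swagger.json for consumers (e.g. an APIM/APIC import).
@app.get("/swagger.json", include_in_schema=False)
def swagger_json() -> dict:
    # FastAPI rebuilds the document from the current set of routes,
    # so the spec never drifts from the implementation.
    return app.openapi()
```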
Automating Microsoft Sentinel: Playbook Fundamentals

Welcome to the third entry in our blog series on automating Microsoft Sentinel. In this series, we're showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts. So far, we've covered Part 1: Introduction to Automating Microsoft Sentinel, where we talked about why you would want to automate and gave an overview of the different types of automation you can do in Sentinel, and Part 2: Automation Rules, where we talked about automating the mundane away.

In this post, we're going to start talking about Playbooks, which can be used for automating just about anything. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

- Part 1: Introduction to Automating Microsoft Sentinel
- Part 2: Automation Rules – Automate the mundane away
- Part 3: Playbooks Part I – Fundamentals [You are here]
- Part 4: Playbooks Part II – Diving Deeper
- Part 5: Azure Functions / Custom Code
- Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 3: Playbooks - Fundamentals

Pre-Built Playbooks in Content Hub

Before we dive any deeper into Playbooks, I want to first point out that there are many pre-built playbooks available in the Content Hub. As of this writing, there are 484 playbooks available from 195 providers, covering all manner of use cases like threat intelligence ingestion, incident response, and operations integrations, across both first-party Microsoft and third-party security tools. Before we dive into the internals of Playbooks and start creating our own, do yourself a favor and check the Content Hub to see whether there is already a Playbook doing what you want. You can also review the list of solutions at the Microsoft Sentinel GitHub page at Azure-Sentinel/Solutions at master · Azure/Azure-Sentinel.

Basic Structure of a Playbook

Microsoft Sentinel Playbooks are built on Azure Logic Apps, a low-to-no-code workflow automation platform. We'll be diving into the details of how to create a Logic App from start to finish in the next installment of this series, but for now just know that there are two key "custom" features that Sentinel exposes for use in Playbooks: Triggers and Entities.

Triggers

The events or actions that can start a Playbook running are Triggers. These can be Incident, Alert, or Entity based.

Incident Triggers

Incident triggers fire when an incident is either created or updated in Sentinel. They can be tied to Automation Rules (which were covered in Part 2 of this series) and can also be invoked manually by an analyst. Playbooks launched with incident triggers receive the whole incident object, including any entities it contains as well as the alerts it comprises.

Alert Triggers

Alert triggers are similar to incident triggers, except they fire when an Alert is raised because an Analytic Rule returned results. This is especially useful when you have an Alert that is not configured to create an Incident. Alert triggers can also be tied to Automation Rules.

Entity Triggers

Entity triggers are different from incident and alert triggers in that they cannot be tied to Automation Rules. Instead, they are triggered manually by an analyst. For example, let's say that a user account is part of an Incident, and during the investigation the analyst decides they want to disable that user account in Entra.
They could use an Entity Trigger to launch a Playbook, passing the Account entity to it so the account can be disabled.

Entities

We can't really talk about Entity Triggers without talking about Entities themselves. So, what is an Entity in Sentinel? Entities are data elements that identify components in an alert or incident. There are many different types of entities within Sentinel, but for Playbooks we only need to focus on five key ones:

- IP
- Host
- Account
- URL
- FileHash

(For more information on Entities in general, please see: https://learn.microsoft.com/azure/sentinel/entities )

How do you use Entity Triggers?

When you are building an Analytic Rule, you can identify the different Entities it contains. These are then carried along as part of the Alert and exposed for further actions. This means that all you need to do is map the results of the Analytic Rule to the different Entity types using values returned from your query. For example, let's say we are creating an Analytic Rule to alert on a new CloudShell user being created in Azure with the following query:

```kusto
let match_window = 3m;
AzureActivity
| where ResourceGroup has "cloud-shell"
| where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action")
| where ActivityStatusValue =~ "Success"
| extend TimeKey = bin(TimeGenerated, match_window), AzureIP = CallerIpAddress
| join kind = inner
    (AzureActivity
    | where ResourceGroup has "cloud-shell"
    | where (OperationNameValue =~ "Microsoft.Storage/storageAccounts/write")
    | extend TimeKey = bin(TimeGenerated, match_window), UserIP = CallerIpAddress
    ) on Caller, TimeKey
| summarize count() by TimeKey, Caller, ResourceGroup, SubscriptionId, TenantId, AzureIP, UserIP, HTTPRequest, Type, Properties, CategoryValue, OperationList = strcat(OperationNameValue, ' , ', OperationNameValue1)
| extend Name = tostring(split(Caller,'@',0)[0]), UPNSuffix = tostring(split(Caller,'@',1)[0])
```

When we use this query as the basis for an Alert, we can then use Entity Mapping under Alert Enhancement to map the relevant returned fields to Entity objects. This example maps the values "Caller", "Name", and "UPNSuffix" returned by the query to the "FullName", "Name", and "UPNSuffix" identifiers of an Account entity. It also maps the UserIP result to the "Address" identifier of an IP entity. When the Alert fires, it will include a collection of Account and IP entities with the necessary values in its Entities field. Now, if we wanted to, we could use a Playbook based on Entity Triggers to act on the Account or IP entities.
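The mapping above is configured in the portal UI, but if you manage analytics rules as code, the same mapping is expressed declaratively. As a hedged sketch (property names recalled from the Microsoft.SecurityInsights scheduled-rule schema; verify against a template exported from your own workspace), the fragment for this rule would look roughly like:

```json
"entityMappings": [
  {
    "entityType": "Account",
    "fieldMappings": [
      { "identifier": "FullName", "columnName": "Caller" },
      { "identifier": "Name", "columnName": "Name" },
      { "identifier": "UPNSuffix", "columnName": "UPNSuffix" }
    ]
  },
  {
    "entityType": "IP",
    "fieldMappings": [
      { "identifier": "Address", "columnName": "UserIP" }
    ]
  }
]
```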
What is a "strong" identifier versus a "weak" identifier, and why is it important?

Entities have fields that identify individual instances. Strong identifiers uniquely identify an entity, while weak identifiers may not. Often, combining weak identifiers can create a strong identifier. For example, Account entities can be identified by a strong identifier like a Microsoft Entra ID (GUID) or User Principal Name (UPN), or alternatively by a combination of weak identifiers like Name and NTDomain. Different data sources might identify the same user differently. When Microsoft Sentinel recognizes two entities as the same based on their identifiers, it merges them into one for consistent handling. We'll be covering more details on using Entities and Triggers in the next article when we start building Playbooks from scratch.

Conclusion

In this article we covered the fundamentals of Playbooks in Sentinel, the Content Hub (the home of pre-built Playbooks), and the different types of Triggers that can be used to launch a Playbook. In the next article we'll cover how to build a playbook from scratch and put these concepts to work.

Additional Resources

- Supported triggers and actions in Microsoft Sentinel playbooks
- Entities in Microsoft Sentinel
CampusSphere: Building the Future of Campus AI with Microsoft's Agentic Framework

Project Overview

We are a team of Imperial College students committed to improving campus life through innovative multi-agent solutions. CampusSphere leverages Microsoft Azure AI capabilities to automate core university campus services. We created an end-to-end solution that gives both students and staff access to a multi-agent framework for room/gym booking, attendance tracking, calendar management, IoT monitoring, and more.

🔭 Our Initial Vision: Reimagining Campus Technology

When our team at Imperial College London embarked on the CampusSphere project as part of Microsoft's Agentic Campus initiative, we had one clear ambition: to create an intelligent campus ecosystem that would fundamentally change how students, faculty, and staff interact with university services. The inspiration came from a simple observation: despite living in an age of advanced AI, campus technology remained frustratingly fragmented. Students juggled multiple portals for course registration, room booking, dining services, and academic support. Faculty members navigated separate systems for teaching, research, and administrative tasks. The result? Countless hours wasted on mundane navigation tasks that could be better spent on learning, teaching, and innovation. Our vision was ambitious: create a single, intelligent interface that could understand natural language, anticipate user needs, and seamlessly integrate with existing campus infrastructure. We didn't just want to build another campus app; we wanted to demonstrate how Microsoft's agentic AI technologies could create a truly intelligent campus companion.

🧠 Enter CampusSphere

CampusSphere is an intelligent campus assistant made up of multiple AI agents, each with a specific domain of expertise, all communicating seamlessly through a centralized architecture. Think of it as a digital concierge for campus life, where your calendar, attendance, IoT data, and facility bookings are coordinated by specialized GPT-powered agents. Here's what we built:

- TriageAgent – the brain of the system, using Retrieval-Augmented Generation (RAG) to understand user intent
- CalendarAgent – handles scheduling, bookings, and reminders
- AttendanceAgent – tracks check-ins automatically
- IoTAgent – monitors real-time sensor data from classrooms and labs
- GymAgent – manages access and reservations for sports facilities
- 30+ MCP Tools – perform SQL queries, scrape web data, and connect with external APIs

All of this is built on Microsoft Azure AI, Semantic Kernel, and the Model Context Protocol (MCP), making it scalable, secure, and lightning fast.

🖥️ The Tech Stack

Our Azure-powered architecture showcases a modular and scalable approach to real-time data processing and intelligent agent coordination. The frontend is built using React with a Vite development server, providing a fast and responsive user interface. When users submit a prompt, it travels to a Flask backend server acting as the triage agent, which intelligently delegates tasks to a FastAPI agent service. This FastAPI service communicates asynchronously with individual agents and handles responses efficiently. Complex queries are routed to MCP Tools, which interact with the CosmosDB-powered campus database. Simultaneously, real-time synthetic IoT data is pushed into the database via Azure Function Apps and Azure IoT Hub.
Authentication is securely managed: users log in through the frontend, receive a token from the database API server, and use it for authorized access to MCP services, with permissions enforced based on user roles through our custom MCP server implementation. This robust architecture enables seamless integration, real-time data flow, and secure multi-agent collaboration across Azure services.

Our system leverages a multi-agent architecture designed to intelligently coordinate task execution across specialized services. At the core is the TriageAgent, which uses Retrieval-Augmented Generation (RAG) to interpret user prompts, enrich them with relevant context, and determine the optimal response path. Based on the nature of the request, it may handle the response directly, seek clarification, or delegate tasks to specific agents via FastAPI. Each specialized agent has a clearly defined role:

- AttendanceAgent: interfaces with CosmosDB-backed FastAPI endpoints to check student attendance, using filters like event name, student ID, or date.
- IoTAgent: monitors room conditions (e.g., temperature, CO₂ levels) and flags anomalies using real-time data from Azure IoT Hub, processed via FastAPI.
- CalendarAgent: handles scheduling, availability checks, and event creation by querying or updating CosmosDB through FastAPI. Future integration with the Microsoft Graph API is planned for direct calendar syncing.
- Gym Slot Agent: checks available times for gym sessions using dedicated MCP tools.

The TriageAgent serves as the orchestrator, breaking down complex requests (like "Book a gym session") into subtasks. It consults the relevant agents (e.g., the calendar and gym slot agents), merges the results, and then confirms the final action with the user. This distributed, asynchronous workflow reduces backend load and enhances both the responsiveness and the reliability of the system.
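To make that fan-out concrete, here is a minimal sketch of how a triage service might delegate subtasks to agent endpoints concurrently. The endpoint URLs and payloads are hypothetical, not CampusSphere's actual API; the snippet only illustrates the orchestration pattern described above.

```python
import asyncio
import httpx

# Hypothetical agent endpoints; the real services expose their own routes.
AGENT_ENDPOINTS = {
    "calendar": "http://agent-service:8000/calendar",
    "gym": "http://agent-service:8000/gym",
}

async def delegate(subtasks: dict[str, dict]) -> dict:
    """POST each subtask to its specialized agent concurrently, then merge replies."""
    async with httpx.AsyncClient(timeout=30) as client:
        names = list(subtasks)
        responses = await asyncio.gather(
            *(client.post(AGENT_ENDPOINTS[name], json=subtasks[name]) for name in names)
        )
    return {name: resp.json() for name, resp in zip(names, responses)}

# Example: "Book a gym session" broken into two subtasks by the triage step.
if __name__ == "__main__":
    result = asyncio.run(delegate({
        "calendar": {"action": "check_availability", "user": "student42"},
        "gym": {"action": "list_slots", "facility": "main-gym"},
    }))
    print(result)
```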
🔮 What's Next?

Integrating CampusSphere with live systems via Microsoft OAuth is crucial for enhancing its capabilities. This integration will grant the agent authenticated access to a wider range of student data, moving beyond synthetic datasets. Expanded access to real-world information will enable deeply personalized advice, such as tailored course selection, scholarship recommendations, event suggestions, and deadline reminders, transforming CampusSphere into a sophisticated, proactive personal assistant.

🤝 Meet the Team Behind CampusSphere

Our success stemmed from a diverse team of innovators who brought together expertise from multiple domains:

- Benny Liu - https://www.linkedin.com/in/zong-benny-liu-393a4621b/
- Lucas Ng - https://www.linkedin.com/in/lucas-ng-11b317203/
- Lu Ju - https://www.linkedin.com/in/lu-ju/
- Bruno Duaso - https://www.linkedin.com/in/bruno-duaso-jimeno-744464262/
- Martim Coutinho - https://www.linkedin.com/in/martim-pereira-coutinho-116308233/
- Krischad Pourpongpan - https://www.linkedin.com/in/krischadpua/
- Yixu Pan - https://www.linkedin.com/in/yixu-pan/

Our collaborative approach enabled us to create a sophisticated agentic AI system that demonstrates the powerful potential of Microsoft's AI technologies in educational environments.

🧑‍💻 Project Repository: GitHub - Imperial-Microsoft-Agentic-Campus/CampusSphere

Have questions about implementing similar solutions at your institution? Connect with our team members on LinkedIn; we're always excited to share knowledge and collaborate on innovative campus technology projects.

📚 Get Started with Microsoft's AI Tools

Ready to explore the technologies that made CampusSphere possible? Here are essential resources:

- Microsoft Semantic Kernel: the core framework for building AI agent orchestration systems. Learn how to create, coordinate, and manage multiple AI agents working together seamlessly.
- AI Agents for Beginners: a comprehensive guide to understanding and building AI agents from the ground up. Perfect for getting started with agentic AI development.
- Model Context Protocol (MCP): learn about the protocol that enables secure connections between AI models and external tools and services, essential for building integrated AI systems.
- Windows AI Toolkit: Microsoft's toolkit for developing AI applications on Windows, providing local AI model development capabilities and deployment tools.
- Azure Container Apps: understand how to deploy and scale containerized AI applications in the cloud, perfect for hosting multi-agent systems.
- Azure Cosmos DB Security: essential security practices for managing data in AI applications, covering encryption, access control, and compliance.
Getting to Know MCP: An Introduction to the Model Context Protocol (MCP)

Don't miss the upcoming "Let's Learn – MCP" event on Microsoft Reactor on July 24, aimed at anyone who wants to get to know the new standard for intelligent agents (the Model Context Protocol) and learn how to put it into practice. The session is in Italian and the demos are in Python, but it is part of a series of live streams available in many languages.
August Calendar is here!

🌟 Community Spirit? CHECKED! 🌍 Amazing Members & Audiences? DOUBLE CHECK! 🎤 Phenomenal Speakers Locked In? CHECKED! 🚀 Global Live Sessions? YOU BET!

The stage is set. The excitement is real. It's that time again: time to ignite the community with another monthly calendar! 🔥✨ We've lined up a powerhouse of sessions packed with world-class content, covering the best of Microsoft, from coding, cloud, migration, data, security, AI, and so much more! 💻☁️🔐🤖 But wait, that's not all! For the first time ever, we've smashed through time zones! No matter where you are in the world, you can tune in LIVE and learn from extraordinary speakers sharing their insights, experiences, and passion. 🌏⏰

What do you need to do? It's easy:
👉 Register for the sessions
👉 Mark your calendar
👉 Grab your coffee, tea, or ice-cold soda
👉 Join us and soak up the knowledge!

We believe in what makes this community truly special, and that's YOU. Let's set August on fire together! 🔥 Are you ready to be inspired, to grow, and to connect with the Microsoft Learn family? Don't miss out, August is YOUR month! 💥🙌

📢 Shehan Perera
📖 Let There Be Cloud-Native Endpoints
📅 5 Aug 2025 (19:00 AEST) (11:00 CEST)

📢 Shahab Ganji
📖 Create a Tic Tac Toe game and learn about Event Sourcing
📅 8 Aug 2025 18:00 CEST

📢 Ronak Vachhani
📖 Data Intelligence at Your Fingertips: Fabric's AI Functions & Data Agents
📅 16 Aug 2025 (16:00 AEST) (08:00 CEST)

📢 Laïla Bougriâ
📖 Change is inevitable: versioning event-driven systems
📅 22 Aug 2025 18:00 CEST

📢 AJ Bajada
📖 Revolutionising DevOps with AI: From Pipelines to Deployment
📅 28 Aug 2025 (19:30 AEST) (11:30 CEST)

📢 James Eastham
📖 So, You Want To Build An Event Driven System?
📅 29 Aug 2025 17:00 CEST
New ESG study validates how fully managed PostgreSQL on Azure delivers economic wins

Migrating your PostgreSQL databases to Azure delivers cost, performance, and productivity benefits, while laying a strong foundation for innovation. But don't just take our word for it. We've worked with the Enterprise Strategy Group (ESG), now a part of Omdia, to validate how organizations benefit economically from moving their PostgreSQL databases to Azure. Whether you're modernizing your mission-critical applications or developing the next groundbreaking feature, migrating to Azure gives you the freedom, flexibility, and continuous improvements of open source backed by the reliability, security, and efficiency of Azure.

Read the full PostgreSQL report

PostgreSQL is the preferred choice of developers building the next generation of intelligent applications, according to the 2025 Stack Overflow survey. However, many teams are finding that managing these open-source databases on-premises is increasingly challenging, especially as their innovation initiatives demand more and more resources. Because of this, organizations are rapidly modernizing their database infrastructure to better support these next-gen initiatives.

At a glance – benefits of migrating to Azure Database for PostgreSQL

Increasing complexity is nothing new to today's IT and developer teams. Some of the key drivers contributing to this complexity include integrating emerging tech like AI and managing cybersecurity concerns, two things that the fully managed Azure Database for PostgreSQL service handles very well. Built-in GenAI capabilities, performance recommendations, and enterprise-grade security, scalability, compliance, and availability make PostgreSQL on Azure a natural fit for teams looking to build intelligent enterprise applications. The ESG report highlights:

- 58% lower total cost of ownership
- 65% improvement in database performance
- $770K in savings from avoiding downtime

"We have seen wins on both sides of the financial equation. Our costs are down across the board, and we have increased our revenue specifically because of the capabilities that came with moving to Azure Database for PostgreSQL."

Review the Azure Database for PostgreSQL Economic Validation Infographic

A closer look – how fully managed PostgreSQL on Azure delivers economic wins for the enterprise

Lower total cost of ownership

Migration dramatically lowers the total cost of ownership of enterprise databases. By shifting from on-premises infrastructure to Azure's managed service, enterprises eliminate many capital and operational expenses.

- Elimination of hardware and maintenance costs: On-premises PostgreSQL deployments require investing in servers, storage, and networking hardware, as well as ongoing power, cooling, and data center space. Migrating to Azure removes these needs entirely. Companies no longer have to purchase or refresh hardware or pay for the associated facilities and utilities, directly cutting capex and support costs.
- Reduced licensing and support expenses: Azure's model also eliminates traditional database licensing fees, third-party support contracts, and expensive monitoring tools for on-premises systems. Organizations reported saving thousands on separate support agreements and software licenses for their PostgreSQL instances.
- Pay-as-you-go flexibility: Azure Database for PostgreSQL offers pay-as-you-go and reserved pricing models, so enterprises only pay for the compute and storage they actually use. There's no more overprovisioning resources to handle peak loads, and dynamic scaling ensures capacity matches demand.
- Operational efficiency: By offloading database management to Microsoft, organizations also reduce administrative overhead, which indirectly lowers labor costs. In ESG's study, moving to Azure cut the monthly DBA hours per database from 2.1 hours to just 0.6 hours, a ~70% decrease in effort, effectively saving payroll expenditure on routine upkeep.

Improved performance and scalability

Enterprises see substantial improvements in database performance and scalability after migrating to Azure. Because Azure Database for PostgreSQL runs on high-end cloud infrastructure with intelligent optimizations, applications can achieve faster response times and handle greater workloads.

- Higher throughput and lower latency: ESG's interviews found average database performance improved by ~65%, and in one case a customer saw a 9× increase in throughput for its primary application after migration. Such gains come from Azure's optimized compute, premium SSD storage options, and features like automatic performance tuning that are difficult to replicate on-premises.
- Elastic scaling on demand: In on-premises environments, supporting peak workloads often meant overprovisioning. Azure Database for PostgreSQL completely changes this paradigm with cloud elasticity, as sketched after this list. The ability to instantly right-size resources means applications always have the performance they need, and users experience responsive, low-latency service.
- Handling growth with ease: As an enterprise's data and user base grow, Azure's global infrastructure can seamlessly accommodate that expansion. This cloud scalability gives enterprises headroom to innovate and onboard more customers without performance bottlenecks. In contrast, scaling on-premises PostgreSQL often requires complex sharding or hardware upgrades.
- Accelerated time to value: Improved performance and scalability directly impact business agility. Batch processes complete faster, reports generate sooner, and websites or applications can serve more customers per second. ESG noted that by removing infrastructure constraints, Azure empowered businesses to accelerate their time to value and respond faster to market demands.
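As a concrete illustration of that elasticity (a hedged sketch: the server name, resource group, and SKU names below are placeholders), resizing an Azure Database for PostgreSQL flexible server is one CLI call in each direction:

```bash
# Scale up ahead of an expected peak...
az postgres flexible-server update \
    --resource-group app-rg --name orders-pg \
    --tier GeneralPurpose --sku-name Standard_D8ds_v4

# ...and back down afterwards, so you pay only for what the workload needs.
az postgres flexible-server update \
    --resource-group app-rg --name orders-pg \
    --tier GeneralPurpose --sku-name Standard_D2ds_v4
```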
Operational agility and developer productivity

By migrating to a fully managed service, enterprises gain agility and allow their IT and development teams to focus on innovation. Offloading database management to Azure not only saves costs but also frees technical staff from mundane maintenance. This shift translates into faster project delivery and greater productivity:

- Less time spent "keeping the lights on": ESG found that after migration, companies saw a major reduction in the effort required to manage databases. Administrators went from spending 2+ hours per database per month on upkeep to less than one hour. This over-70% drop in DBA workload means IT teams are no longer bogged down by routine chores.
- Faster development and release cycles: ESG observed that organizations enjoyed increased development velocity after migrating, since their engineers could devote time to coding and testing new features instead of managing database infrastructure. For example, one company in the study was able to increase its software release frequency significantly.
- Improved business agility: The combination of easier scaling, better performance, and less ops overhead means the organization can respond to opportunities faster. Some enterprises even credited the move to Azure with helping increase their revenue, because it allowed them to deliver new capabilities to market sooner.
- Focus on core competencies: After migration, organizations can let Azure handle the heavy lifting of database administration and instead concentrate on work that differentiates them in the marketplace. Developers spend more time building applications and analyzing areas that drive business value rather than performing software updates or fixing replication issues.

Enhanced security, compliance, and reliability

Azure Database for PostgreSQL provides enterprise-grade security and reliability features that far exceed what most companies can achieve on-premises. This results in a stronger risk posture, reducing the likelihood of breaches or downtime while also easing compliance burdens.

- Built-in high availability and disaster recovery: ESG's modeled scenario saw annual PostgreSQL downtime drop from 10 hours on-premises to just 5 hours on Azure. With a 99.99% availability SLA for Azure Database for PostgreSQL, unplanned outages that used to disrupt business are largely a thing of the past. One ESG case study estimated that about $770K in costs were avoided thanks to preventing downtime and the associated business disruptions.
- Strong security and data protection: PostgreSQL instances on Azure benefit from Microsoft's massive investments in cybersecurity and compliance. One customer highlighted, "We are much more secure since we moved to Azure Database for PostgreSQL. We use Azure AI to set our security standards and get constant recommendations on how to increase our security even more."
- Automated updates and governance: Azure takes care of updating PostgreSQL with the latest security fixes and can even upgrade the database engine version with minimal downtime. Furthermore, features like audit logging, advanced threat protection, and integration with Azure Security Center provide continuous oversight of database activity.
- Geo-redundancy and backup management: For disaster recovery, Azure allows geo-redundant backups and read replicas in different regions, improving an enterprise's resilience to regional outages or disasters. Should data restoration be needed, it's as simple as clicking a button.

Azure Database for PostgreSQL offers enterprises a frictionless path to greater efficiency, innovation, and growth. By lowering costs and management burdens, it lets you redirect resources to strategic projects. By boosting performance and scalability, it ensures your applications can keep up with business demands. And by enhancing security and reliability, it safeguards one of your most precious assets, your data, while meeting the strict requirements of enterprise IT. The benefits outlined in the ESG study make a strong business case: migrating on-premises databases to Azure's managed PostgreSQL can transform your IT operations and deliver tangible business value from day one.

Tested, approved, trusted

Migrating to a fully managed PostgreSQL service supports digital transformation. It allows enterprises to modernize their data estate without abandoning the familiarity of PostgreSQL. Developers can continue using the open-source tools and skills they know, but now with cloud-powered capabilities at their fingertips. Azure integrations (with AI services, analytics tools, etc.) further enable organizations to do more with their data. For example, companies can readily infuse AI or machine learning into their applications or take advantage of advanced analytics on their PostgreSQL data, since that data is easily accessible in the cloud.
Read the full report for more details about the quantified benefits and customer testimonials. If you’re ready to start your journey, check out our migration guides. With Azure’s fully managed PostgreSQL, you can supercharge your data strategy, empower your developers, and ultimately accelerate your path to an AI-driven future.