Swagger Auto-Generation on MCP Server
Would you like to generate a swagger.json directly on an MCP server, on the fly? Remote MCP servers are common in many use cases. In particular, if you're using Azure API Management (APIM), Azure API Center (APIC) or Copilot Studio in Power Platform, integrating with remote MCP servers is inevitable.
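As a rough illustration of what such on-the-fly generation involves, here is a minimal TypeScript sketch that maps a list of MCP tool definitions onto an OpenAPI (swagger) document. The McpTool shape and the /tools/{name} path convention are illustrative assumptions, not the article's actual implementation.

// Hypothetical sketch: derive a minimal OpenAPI document from MCP tool metadata.
type McpTool = { name: string; description?: string; inputSchema: object };

function toOpenApi(tools: McpTool[]) {
  const paths: Record<string, object> = {};
  for (const tool of tools) {
    // Expose each tool as a POST endpoint whose body matches the tool's input schema.
    paths[`/tools/${tool.name}`] = {
      post: {
        summary: tool.description ?? tool.name,
        requestBody: { content: { "application/json": { schema: tool.inputSchema } } },
        responses: { "200": { description: "Tool result" } },
      },
    };
  }
  return { openapi: "3.0.3", info: { title: "MCP Server", version: "1.0.0" }, paths };
}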
Impariamo a conoscere MCP: Introduzione al Model Context Protocol (MCP)

Don't miss the upcoming "Let's Learn – MCP" event on Microsoft Reactor on July 24, designed for anyone who wants to get to know the new standard for intelligent agents (the Model Context Protocol) and learn how to put it into practice. The session is in Italian and the demos are in Python, but it is part of a series of live streams available in many languages.
Introducing Azure AI Travel Agents: A Flagship MCP-Powered Sample for AI Travel Solutions

We are excited to introduce AI Travel Agents, a sample application with enterprise functionality that demonstrates how developers can coordinate multiple AI agents (written in multiple languages) to explore travel planning scenarios. It's built with LlamaIndex.TS for agent orchestration, Model Context Protocol (MCP) for structured tool interactions, and Azure Container Apps for scalable deployment.

TL;DR: Experience the power of MCP and Azure Container Apps with The AI Travel Agents! Try out the live demo locally on your computer for free to see real-time agent collaboration in action. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, and more. NOTE: This example uses mock data and is intended for demonstration purposes rather than production use.

The Challenge: Scaling Personalized Travel Planning

Travel agencies grapple with complex tasks: analyzing diverse customer needs, recommending destinations, and crafting itineraries, all while integrating real-time data like trending spots or logistics. Traditional systems falter with latency, scalability, and coordination, leading to delays and frustrated clients. The AI Travel Agents tackles these issues with a technical trifecta:

LlamaIndex.TS orchestrates six AI agents for efficient task handling.
MCP equips agents with travel-specific data and tools.
Azure Container Apps ensures scalable, serverless deployment.

This architecture delivers operational efficiency and personalized service at scale, transforming chaos into opportunity.

LlamaIndex.TS: Orchestrating AI Agents

The heart of The AI Travel Agents is LlamaIndex.TS, a powerful agentic framework that orchestrates multiple AI agents to handle travel planning tasks. Built on a Node.js backend, LlamaIndex.TS manages agent interactions in a seamless and intelligent manner:

Task Delegation: The Triage Agent analyzes queries and routes them to specialized agents, like the Itinerary Planning Agent, ensuring efficient workflows.
Agent Coordination: LlamaIndex.TS maintains context across interactions, enabling coherent responses for complex queries, such as multi-city trip plans.
LLM Integration: Connects to Azure OpenAI, GitHub Models, or any local LLM using Foundry Local for advanced AI capabilities.

LlamaIndex.TS's modular design supports extensibility, allowing new agents to be added with ease. LlamaIndex.TS is the conductor, ensuring agents work in sync to deliver accurate, timely results. Its lightweight orchestration minimizes latency, making it ideal for real-time applications.

MCP: Fueling Agents with Data and Tools

The Model Context Protocol (MCP) empowers AI agents by providing travel-specific data and tools, enhancing their functionality. MCP acts as a data and tool hub:

Real-Time Data: Supplies up-to-date travel information, such as trending destinations or seasonal events, via the Web Search Agent using Bing Search.
Tool Access: Connects agents to external tools, like the .NET-based customer queries analyzer for sentiment analysis, the Python-based itinerary planner for trip schedules, or destination recommendation tools written in Java.

For example, when the Destination Recommendation Agent needs current travel trends, MCP delivers them via the Web Search Agent. This modularity allows new tools to be integrated seamlessly, future-proofing the platform. MCP's role is to enrich agent capabilities, leaving orchestration to LlamaIndex.TS.
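To make the tool side concrete, here is a minimal sketch of a travel-flavored MCP tool using the MCP TypeScript SDK (@modelcontextprotocol/sdk). The tool name and its canned response are illustrative assumptions; the sample's real MCP servers are implemented in .NET, Python, and Java, so this is not the sample's own code.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one illustrative travel tool.
const server = new McpServer({ name: "travel-tools", version: "1.0.0" });

server.tool(
  "recommend_destination",
  { interests: z.string(), month: z.string() },
  async ({ interests, month }) => ({
    // A real implementation would query travel data; this returns a stub answer.
    content: [{ type: "text", text: `In ${month}, consider a trip matching: ${interests}` }],
  })
);

// Expose the server over stdio so an MCP client (the orchestrator) can connect.
await server.connect(new StdioServerTransport());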
Azure Container Apps: Scalability and Resilience

Azure Container Apps powers The AI Travel Agents sample application with a serverless, scalable platform for deploying microservices. It ensures the application handles varying workloads with ease:

Dynamic Scaling: Automatically adjusts container instances based on demand, managing booking surges without downtime.
Polyglot Microservices: Supports .NET (Customer Query), Python (Itinerary Planning), Java (Destination Recommendation), and Node.js services in isolated containers.
Observability: Integrates tracing, metrics, and logging, enabling real-time monitoring.
Serverless Efficiency: Abstracts infrastructure, reducing costs and accelerating deployment.

Azure Container Apps' global infrastructure delivers low-latency performance, critical for travel agencies serving clients worldwide.

The AI Agents: A Quick Look

While MCP and Azure Container Apps are the stars, they support a team of multiple AI agents that drive the application's functionality. Built and orchestrated with LlamaIndex.TS via MCP, these agents collaborate to handle travel planning tasks:

Triage Agent: Directs queries to the right agent, leveraging MCP for task delegation.
Customer Query Agent: Analyzes customer needs (emotions, intents), using .NET tools.
Destination Recommendation Agent: Suggests tailored destinations, using Java.
Itinerary Planning Agent: Crafts efficient itineraries, powered by Python.
Web Search Agent: Fetches real-time data via Bing Search.

These agents rely on MCP's real-time communication and Azure Container Apps' scalability to deliver responsive, accurate results; a schematic routing sketch follows below. It's worth noting, though, that this sample application uses mock data for demonstration purposes. In a real-world scenario, the application would communicate with an MCP server that is plugged into a real production travel API.
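As a schematic illustration of the triage pattern (not the sample's actual code), the following plain-TypeScript sketch routes a query to a specialized agent. The keyword heuristic stands in for the LLM-based classification the real Triage Agent performs, and all names are illustrative.

type AgentName = "customerQuery" | "destinationRecommendation" | "itineraryPlanning";
type Agent = (query: string) => Promise<string>;

// Stand-ins for the specialized agents; each would call its MCP tools.
const agents: Record<AgentName, Agent> = {
  customerQuery: async (q) => `sentiment and intent analysis for: ${q}`,
  destinationRecommendation: async (q) => `destination ideas for: ${q}`,
  itineraryPlanning: async (q) => `day-by-day itinerary for: ${q}`,
};

// The real Triage Agent asks an LLM to classify the query; a keyword
// heuristic keeps this sketch self-contained.
function triage(query: string): AgentName {
  if (/itinerary|schedule|plan/i.test(query)) return "itineraryPlanning";
  if (/where|destination|recommend/i.test(query)) return "destinationRecommendation";
  return "customerQuery";
}

const query = "Plan a five-day trip to Lisbon";
agents[triage(query)](query).then(console.log);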
Key Features and Benefits

The AI Travel Agents offers features that showcase the power of MCP and Azure Container Apps:

Real-Time Chat: A responsive Angular UI streams agent responses via MCP's SSE, ensuring fluid interactions.
Modular Tools: MCP enables tools like analyze_customer_query to integrate seamlessly, supporting future additions.
Scalable Performance: Azure Container Apps ensures the UI, backend, and MCP servers handle high traffic effortlessly.
Transparent Debugging: An accordion UI displays agent reasoning, providing backend insights.

Benefits:

Efficiency: LlamaIndex.TS streamlines operations.
Personalization: MCP's data drives tailored recommendations.
Scalability: Azure ensures reliability at scale.

Thank You to Our Contributors!

The AI Travel Agents wouldn't exist without the incredible work of our contributors. Their expertise in MCP development, Azure deployment, and AI orchestration brought this project to life. A special shoutout to:

Pamela Fox – Leading the development of the Python MCP server.
Aaron Powell and Justin Yoo – Leading the development of the .NET MCP server.
Rory Preddy – Leading the development of the Java MCP server.
Lee Stott and Kinfey Lo – Leading the development of the Local AI Foundry.
Anthony Chu and Vyom Nagrani – Leading the Azure Container Apps roadmap.
Matt Soucoup and Julien Dubois – Leading the ACA DevRel strategy.
Wassim Chegham – Architected MCP and backend orchestration.

And many more! See the GitHub repository for all contributors. Thank you for your dedication to pushing the boundaries of AI and cloud technology!

Try It Out

Experience the power of MCP and Azure Container Apps with The AI Travel Agents! Try out the live demo locally on your computer for free to see real-time agent collaboration in action.

Conclusion

Developers can explore the open-source project on GitHub today, with setup and deployment instructions. Share your feedback on our community forum. We're already planning enhancements, like new MCP-integrated agents, enabling secure communication between the AI agents and MCP servers, and more. This is still a work in progress and we welcome all kinds of contributions. Please fork and star the repo to stay tuned for updates! We would love your feedback; continue the discussion in the Azure AI Foundry Discord: aka.ms/foundry/discord. On behalf of the Microsoft DevRel Team.
European AI and Cloud Summit 2025

Düsseldorf, Germany welcomed 3,000+ tech enthusiasts and hosted the Microsoft for Startups Cloud AI Pitch Competition between May 26-28, 2025. We were pleased to attend the European AI Cloud and Collaboration Summits and Biz Apps Summit – to participate and to gain so much back from everyone. It was a packed week filled with insights, feedback, and fun. Below is a recap of various aspects of the event, across keynotes, general sessions, breakout sessions, and the Expo Hall.

The event in a nutshell:

3,000+ attendees
237 speakers overall – 98 Microsoft Valued Professionals, 16 Microsoft Valued Professional Regional Directors, 51 from Microsoft Product Groups and Engineering
306 sessions | 13 tutorials (workshops)
70 sponsors | One giant Expo Hall

PreDay Workshop: AI Beginner Development Powerclass

This workshop is designed to give you a hands-on introduction to the core concepts and best practices for interacting with OpenAI models in the Azure AI Foundry portal. Innovate with Azure OpenAI's GPT-4o multimodal model in this hands-on experience in Azure AI Foundry. Learn the core concepts and best practices to effectively generate with text, sound, and images using GPT-4o-mini, DALL-E, and GPT-4o-realtime. Create AI assistants that enhance user experiences and drive innovation.

A workshop for you – azure-ai-foundry/ai-tutorials: this repo includes a collection of tutorials to help you get started with building Generative AI applications using Azure AI Foundry. Each tutorial is designed to be self-contained and provides step-by-step instructions to guide you in the development process.

Keynotes & general sessions

Day 1 Keynote: The Future of AI Is Already Here, Marco Casalaina, VP Products of Azure AI and AI Futurist at Microsoft

This session discussed the new and revolutionary changes that you're about to see in AI, and how many of them are available for you to try now. Marco shared how AI is becoming ubiquitous, multimodal, multilingual, and autonomous, and how it will change our lives and our businesses. This session covered:

• Incredible advances in multilingual AI
• How Copilot (and every AI) is grounded to data, and how we do it in Azure OpenAI
• Responsible AI, including evaluation for correctness, and real-time content safety
• The rise of AI Agents
• And how AI is going to move from question-answering to taking action

Day 2 Keynote: Leveraging Microsoft AI: Navigating the EU AI Act and Unlocking Future Opportunities, Azar Koulibaly, General Manager and Associate General Counsel

In this session for developers and business decision makers, Azar set the stage for Microsoft's advancements in AI and how they align with the latest regulatory framework, exploring the EU AI Act, its key components, and its implications for AI development and deployment within the European Union. The audience gained a comprehensive understanding of the EU AI Act's objectives, including the promotion of trustworthy AI, the mitigation of risks, and the enhancement of transparency and accountability. Attendees learned how Microsoft's Cloud and AI Services provide robust support for compliance with these new regulations, ensuring that your AI projects are both innovative and legally sound, along with Microsoft Trust Center resources. He delved into the opportunities that come with using Microsoft's state-of-the-art tools, services, and technologies: discover how partnering with Microsoft can accelerate your AI initiatives, drive business growth, and create competitive advantages in an evolving regulatory landscape.
Join us to unlock the full potential of AI while navigating the complexities of the EU AI Act with confidence.

General Sessions

It is crucial to ensure your organization is technically ready for the full potential of AI. The sessions focused on technical readiness and ensuring you have the latest guidance. Our experts shared best practices and provided guidance on how to leverage AI and Azure AI Foundry to maximize the benefits of agents, LLMs, and generative AI within your organization.

Expo Hall + Cloud AI Startup Stage + tutorials

The Expo Hall was buzzing with demos, discussions, interviews, podcasts, lightning talks, popcorn, catering trucks, cotton candy, SWAG, prizes, and community. There was a busy Cloud AI Startup Stage, a Business Stage, and shorter talks delivered in front of a shiny Airstream trailer.

Cloud AI Startup Stage

This was a highly informative and engaging event focused on Artificial Intelligence (AI) and its potential for startups. Microsoft for Startups is a platform that provides startups with the resources, tools, and support they need to succeed. This portion of the event offered value to budding entrepreneurs and established startups looking to scale. For example, on day 2, we focused on accelerating innovation with Microsoft Founders Hub and Azure AI. Startups can kickstart their journey with Azure credits and gain access to 30+ tools and services, benefiting from additional credits and offerings as they evolve. It's a great way for startups to navigate technical and business challenges.

Cloud AI Startup Pitch: The Microsoft for Startups AI Pitch Competition and Startup Stage at European Cloud Summit 2025

The AI Empowerment session provided an in-depth overview of the various AI services available through Microsoft Azure AI Foundry and how these cutting-edge technologies can be integrated into business operations. This was perfect for startups looking to get started with AI or those interested in joining the Microsoft for Startups programme. The Spotlight on Innovation session showcased innovative startups from the European Cloud Summit, giving attendees a unique insight into the cutting-edge ideas that are shaping our future. The Empowering Innovation session featured a panel of experts and successful startup founders sharing insights on leveraging Microsoft technologies, navigating the startup ecosystem, and securing funding. This was valuable for budding entrepreneurs and established startups looking to scale.

Startup Showcases

Holistic AI – End to End AI Governance Platform, Raj Bharat Patel: Securing Advantage in the Era of Agentic AI

With this autonomy comes high variability: the difference between a minor efficiency and a major one could mean millions in savings. Conversely, a seemingly small misstep could cascade into catastrophic reputational or compliance risk. The stakes are high, but so is the potential. The next frontier introduces a new paradigm: AI managing AI. As organizations deploy swarms of autonomous agents across business functions, the challenge expands beyond governing human-AI interactions. Now, it's about ensuring that AI agents can monitor, evaluate, and optimize each other, in real time, at scale.
This demands a shift in both architecture and mindset: toward native-AI platforms and a new human role, moving from human-in-the-loop to human-on-the-loop, strategically overseeing autonomous systems rather than micromanaging them. In this session, Raj Bharat Patel explored how forward-thinking enterprises can prepare for this emerging reality, building governance and orchestration infrastructures that enable scale, speed, and safety in the age of agentic AI.

D-ID | The #1 Choice for AI Generated Video Creation Platform, Yaniv Levi

In a world increasingly powered by AI, how do we make digital experiences feel more human? At D-ID we enable businesses to create lifelike, interactive avatars that are transforming the way users communicate with AI, making it more intuitive, personal, and memorable. In this session, we shared how our collaboration with Microsoft for Startups helped us scale and innovate, enabling seamless integration with Microsoft's tools to enhance customer-facing AI agents and LLM-powered solutions while adding a powerful and personalized new layer of humanlike expression.

Startup Pitch Competition

Day 2 of the event focused on accelerating innovation with Microsoft for Startups and Azure AI Foundry. Startups can kickstart their journey with Azure credits and gain access to 30+ tools and services, benefiting from additional credits and offerings as they evolve. Azure AI Foundry provides access to an extensive range of AI models, including OpenAI, Meta, Nvidia, and Hugging Face. Moreover, startups can navigate technical and business challenges with the help of free 1:1 sessions with Microsoft experts. The Cloud Startup Stage Competition showcased the most innovative startups in the Microsoft Azure and Azure OpenAI ecosystem, highlighting their groundbreaking solutions and business models. This was a celebration of innovation and success and provided insights into the experiences, challenges, and future plans of these startups.

The Winners:

1st Place: Graia
2nd Place: iThink365
3rd Place: ShArc

Overall, this event was highly informative, engaging, and valuable for anyone interested in AI and its potential for startups. The Microsoft for Startups programme and Azure AI Foundry are powerful tools that can help startups achieve success and transform their ideas into successful businesses.

In the end...

We are grateful for this year's active and engaging #CollabSummit & #CloudSummit #BizAppSummit – so much goodness, caring, learning, and fun! Great questions, stories, understanding of your concerns, and the sharing in fun. Thank you and see you next year! We look forward to seeing you in Cologne, May 5-7, 2026, for the Collaboration Summit (@CollabSummit), Cloud Summit (@EUCloudSummit), and BizApps Summit (@BizAppsSummit), and to continuing the discussion around AI in our Azure AI Discord Community.
Using DeepSeek-R1 on Azure with JavaScript

The pace at which innovative AI models are being developed is outstanding! DeepSeek-R1 is one such model that focuses on complex reasoning tasks, providing a powerful tool for developers to build intelligent applications. This week, we announced its availability on GitHub Models as well as on Azure AI Foundry. In this article, we'll take a look at how you can deploy and use the DeepSeek-R1 models in your JavaScript applications.

TL;DR key takeaways:

DeepSeek-R1 models focus on complex reasoning tasks, and are not designed for general conversation.
You can quickly switch your configuration to use Azure AI, GitHub Models, or even local models with Ollama.
You can use the OpenAI Node SDK or LangChain.js to interact with DeepSeek models.

What you'll learn here:

Deploying the DeepSeek-R1 model on Azure.
Switching between Azure, GitHub Models, or local (Ollama) usage.
Code patterns to start using DeepSeek-R1 with various libraries in TypeScript.

Reference links:

DeepSeek on Azure - JavaScript demos repository
Azure AI Foundry
OpenAI Node SDK
LangChain.js
Ollama

Requirements:

GitHub account. If you don't have one, you can create a free GitHub account. You can optionally use GitHub Copilot Free to help you write code and ship your application even faster.
Azure account. If you're new to Azure, get an Azure account for free to get free Azure credits to get started. If you're a student, you can also get free credits with Azure for Students.

Getting started

We'll use GitHub Codespaces to get started quickly, as it provides a preconfigured Node.js environment for you. Alternatively, you can set up a local environment using the instructions found in the GitHub repository. Click on the button below to open our sample repository in a web-based VS Code, directly in your browser. Once the project is open, wait a bit to ensure everything has loaded correctly. Open a terminal and run the following command to install the dependencies:

npm install

Running the samples

The repository contains several TypeScript files under the samples directory that demonstrate how to interact with DeepSeek-R1 models. You can run a sample using the following command:

npx tsx samples/<sample>.ts

For example, let's start with the first one:

npx tsx samples/01-chat.ts

Wait a bit, and you should see the response from the model in your terminal. You'll notice that it may take longer than usual to respond, and that the response starts with a strange <think> tag. This is because DeepSeek-R1 is designed for tasks that need complex reasoning, like solving problems or answering math questions, and not for your usual chat interactions.

Model configuration

By default, the repository is configured to use GitHub Models, so you can run any example using Codespaces without any additional setup. While it's great for quick experimentation, GitHub Models limits the number of requests you can make in a day and the amount of data you can send in a single request. If you want to use the model more extensively, you can switch to Azure AI or even use a local model with Ollama. You can take a look at samples/config.ts to see how the different configurations are set up. We'll not cover using Ollama models in this article, but you can find more information in the repository documentation.
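To give a feel for the code patterns involved, here is a condensed sketch of what a chat sample like 01-chat.ts boils down to with the OpenAI Node SDK. The model name and inline configuration are illustrative assumptions; the repository centralizes this in samples/config.ts.

import OpenAI from "openai";

// Illustrative: the env var names mirror the .env example shown below.
const client = new OpenAI({
  baseURL: process.env.AZURE_AI_BASE_URL,
  apiKey: process.env.AZURE_AI_API_KEY,
});

const stream = await client.chat.completions.create({
  model: "DeepSeek-R1", // deployment/model name; may differ in your setup
  messages: [{ role: "user", content: "How many R's are in 'strawberry'?" }],
  stream: true,
});

// DeepSeek-R1 streams its <think> reasoning before the final answer.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}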
Deploying DeepSeek-R1 on Azure

To experiment with the full capabilities of DeepSeek-R1, you can deploy it on Azure AI Foundry. Azure AI Foundry is a platform that allows you to deploy, manage, and develop with AI models quickly. To use Azure AI Foundry, you need to have an Azure account. Let's start by deploying the model on Azure AI Foundry.

First, follow this tutorial to deploy a serverless endpoint with the model. When it's time to choose the model, make sure to select the DeepSeek-R1 model in the catalog. Once your endpoint is deployed, you should be able to see your endpoint details and retrieve the URL and API key (see the screenshot of the endpoint details in Azure AI Foundry). Then create a .env file in the root of the project and add the following content:

AZURE_AI_BASE_URL="https://<your-deployment-name>.<region>.models.ai.azure.com/v1"
AZURE_AI_API_KEY="<your-api-key>"

Tip: if you're copying the endpoint from the Azure AI Foundry portal, make sure to add the /v1 at the end of the URL. Open the samples/config.ts file and update the default export to use Azure:

export default AZURE_AI_CONFIG;

Now all samples will use the Azure configuration.

Explore reasoning with DeepSeek-R1

Now that you have the model deployed, you can start experimenting with it. Open the samples/08-reasoning.ts file to see how the model handles more complex tasks, like helping us understand a well-known weird piece of code.

const prompt = `
float fast_inv_sqrt(float number) {
  long i;
  float x2, y;
  const float threehalfs = 1.5F;

  x2 = number * 0.5F;
  y  = number;
  i  = *(long*)&y;
  i  = 0x5f3759df - ( i >> 1 );
  y  = *(float*)&i;
  y  = y * ( threehalfs - ( x2 * y * y ) );

  return y;
}

What is this code doing? Explain the magic behind it.
`;

Now run this sample with the command:

npx tsx samples/08-reasoning.ts

You should see the model's response streaming piece by piece in the terminal, describing its thought process before providing the actual answer to our question. Brace yourself, as it might take a while to get the full response! At the end of the process, you should see the model's detailed explanation of the code, along with some context around it.

Leveraging frameworks

Most examples in this repository are built with the OpenAI Node SDK, but you can also use LangChain.js to interact with the model. This might be especially interesting if you need to integrate other sources of data or want to build a more complex application. Open the file samples/07-langchain.ts to have a look at the setup, and see how you can reuse the same configuration we used with the OpenAI SDK.
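As a rough sketch of what that LangChain.js setup can look like (assuming the @langchain/openai package; the model name and env vars are illustrative, and the actual sample may differ):

import { ChatOpenAI } from "@langchain/openai";

// Point LangChain.js at the same Azure AI endpoint used above.
const model = new ChatOpenAI({
  model: "DeepSeek-R1",
  apiKey: process.env.AZURE_AI_API_KEY,
  configuration: { baseURL: process.env.AZURE_AI_BASE_URL },
});

const response = await model.invoke(
  "Summarize why fast inverse square root works, in two sentences."
);
console.log(response.content);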
Going further

Now it's your turn to experiment and discover the full potential of DeepSeek-R1! You can try more advanced prompts, integrate it into your larger application, or even build agents to make the most out of the model. To continue your learning journey, you can check out the following resources:

Generative AI with JavaScript (GitHub): code samples and resources to learn Generative AI with JavaScript.
Build a serverless AI chat with RAG using LangChain.js (GitHub): a next-step code example to build an AI chatbot using Retrieval-Augmented Generation and LangChain.js.

Getting Started with Azure MCP Server: A Guide for Developers

The world of cloud computing is growing rapidly, and Azure is at the forefront of this innovation. If you're a student developer eager to dive into Azure and learn about the Model Context Protocol (MCP), the Azure MCP Server is your perfect starting point. This tool, currently in Public Preview, empowers AI agents to seamlessly interact with Azure services like Azure Storage, Cosmos DB, and more. Let's explore how you can get started!

🎯 Why Use the Azure MCP Server?

The Azure MCP Server revolutionizes how AI agents and developers interact with Azure services. Here's a glimpse of what it offers:

Exploration Made Easy: List storage accounts, databases, resource groups, tables, and more with natural language commands.
Advanced Operations: Manage configurations, query analytics, and execute complex tasks like building Azure applications.
Streamlined Integration: From JSON communication to intelligent auto-completion, the Azure MCP Server ensures smooth operations.

Whether you're querying log analytics or setting up app configurations, this server simplifies everything.

✨ Installation Guide: One-Click and Manual Methods

Prerequisites: Before you begin, ensure the following:

Install either the Stable or Insiders release of VS Code.
Add the GitHub Copilot and GitHub Copilot Chat extensions.

Option 1: One-Click Install

You can install the Azure MCP Server in VS Code or VS Code Insiders using NPX. Simply run:

npx -y @azure/mcp@latest server start

Option 2: Manual Install

If you'd prefer manual setup, create a .vscode/mcp.json file in your VS Code project directory and add the following configuration:

{
  "servers": {
    "Azure MCP Server": {
      "command": "npx",
      "args": ["-y", "@azure/mcp@latest", "server", "start"]
    }
  }
}

Now, launch GitHub Copilot in Agent Mode to activate the Azure MCP Server.

🚀 Supercharging Azure Development

Once installed, the Azure MCP Server unlocks an array of capabilities:

Azure Cosmos DB: List, query, and manage databases and containers.
Azure Storage: Query blob containers, metadata, and tables.
Azure Monitor: Use KQL to analyze logs and monitor your resources.
App Configuration: Handle key-value pairs and labeled configurations.

Test prompts like:

"List my Azure Storage containers"
"Query my Log Analytics workspace"
"Show my key-value pairs in App Config"

These commands let your agents harness the power of Azure services effortlessly.

🛡️ Security & Authentication

The Azure MCP Server simplifies authentication using Azure Identity. Your login credentials are handled securely, with support for mechanisms like:

Visual Studio credentials
Azure CLI login
Interactive Browser login

For advanced scenarios, enable production credentials with:

export AZURE_MCP_INCLUDE_PRODUCTION_CREDENTIALS=true

Always perform a security review when integrating MCP servers to ensure compliance with regulations and standards.

🌟 Why Join the Azure MCP Community?

As a developer, you're invited to contribute to the Azure MCP Server project. Whether it's fixing bugs, adding features, or enhancing documentation, your contributions are valued. Explore the Contributing Guide for details on getting involved. The Azure MCP Server is your gateway to leveraging Azure services with cutting-edge technology. Dive in, experiment, and bring your projects to life! What Azure project are you excited to build with the MCP Server? Let's brainstorm ideas together!
How to Use Postgres MCP Server with GitHub Copilot in VS Code

GitHub Copilot has changed how developers write code, but when combined with an MCP (Model Context Protocol) server, it also connects your services. With it, Copilot can understand your database schema and generate relevant code for your API, data models, or business logic. In this guide, you'll learn how to use the Neon Serverless Postgres MCP server with GitHub Copilot in VS Code to build a sample REST API quickly. We'll walk through how to create an Azure Function that fetches data from a Neon database, all with minimal setup and no manual query writing.

From Code Generation to Database Management with GitHub Copilot

AI agents are no longer just helping write code; they're creating and managing databases. When a chatbot logs a customer conversation, or a new region spins up in the Azure cloud, or a new user signs up, an AI agent can automatically create a database in seconds. No need to open a dashboard or call an API. This is the next evolution of software development: infrastructure as intent. With tools like database MCP servers, agents can now generate real backend services as easily as they generate code.

GitHub Copilot becomes your full-stack teammate. It can answer database-related questions, fetch your database connection string, update environment variables in your Azure Function, generate SQL queries to populate tables with mock data, and even help you create new databases or tables on the fly. All directly from your editor, with natural language prompts. Neon has a dedicated MCP server that makes it possible for Copilot to directly understand the structure of your Postgres database. Let's get started with using the Neon MCP server and GitHub Copilot.

What You'll Need

Node.js (>= v18.0.0) and npm: download from nodejs.org.
An Azure subscription (create one for free).
Either the Stable or Insiders release of VS Code.
The GitHub Copilot, GitHub Copilot for Azure, and GitHub Copilot Chat extensions for VS Code.
Azure Functions Core Tools (for local testing).

Connect GitHub Copilot to Neon MCP Server

Create a Neon Database: Visit the Neon on Azure Marketplace page and follow the Create a Neon resource guide to deploy Neon on Azure for your subscription. Neon's free plan is more than enough to build a proof of concept or kick off a new startup project.

Install the Neon MCP Server for VS Code: Neon MCP Server offers two options for connecting your VS Code MCP client to Neon. We will use the Remote Hosted MCP Server option. This is the simplest setup: no need to install anything locally or configure a Neon API key in your client. Add the following Neon MCP server configuration to your user settings in VS Code:

{
  "mcp": {
    "servers": {
      "Neon": {
        "command": "npx",
        "args": ["-y", "mcp-remote", "https://mcp.neon.tech/sse"]
      }
    }
  }
}

Click Start on the MCP server. A browser window will open with an OAuth prompt; just follow the steps to give your VS Code MCP client access to your Neon account.

Generate an Azure Function REST API using GitHub Copilot

Step 1: Create an empty folder (e.g., mcp-server-vs-code) and open it in VS Code.

Step 2: Open GitHub Copilot Chat in VS Code and switch to Agent mode. You should see the available tools.

Step 3: Ask Copilot something like "Create an Azure function with an HTTP trigger". Copilot will generate a REST API using Azure Functions in JavaScript, with a basic setup to run the functions locally.
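For reference, the scaffolding Copilot produces at this step typically boils down to something like the following sketch, here written in TypeScript against the Azure Functions v4 Node.js programming model. The function name is illustrative, and your generated code may differ.

import { app } from "@azure/functions";

// A minimal HTTP-triggered function, runnable locally with
// Azure Functions Core Tools (func start).
app.http("hello", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (request, context) => {
    context.log(`Processing request for ${request.url}`);
    return { jsonBody: { message: "Hello from Azure Functions" } };
  },
});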
Step 4: Next, you can ask Copilot to list your existing Neon projects.

Step 5: Try out different prompts to fetch the connection string for the chosen database and set it in the Azure Functions settings. Then ask Copilot to create a sample Customer table, and so on. You can even prompt it to create a new database branch on Neon.

Step 6: Finally, you can prompt Copilot to update the Azure Functions code to fetch data from the table.

Combine the Azure MCP Server

Neon MCP gives GitHub Copilot full access to your database schema, so it can help you write SQL queries, connect to the database, and build API endpoints. But when you add the Azure MCP Server into the mix, Copilot can also understand your Azure services, like Blob Storage, Queues, and Azure AI. You can run both Neon MCP and Azure MCP at the same time to create a full cloud-native developer experience. For example:

Use Neon MCP for serverless Postgres with branching and instant scale.
Use Azure MCP to connect to other Azure services from the same Copilot chat.

Even better: Azure MCP is evolving. In newer versions, you can spin up Azure Functions and other services directly from Copilot chat, without ever leaving your editor. Copilot pulls context from both MCP servers, which means smarter code suggestions and faster development. You can mix and match based on your stack and let Copilot help you build real, working apps in minutes.
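Running both side by side is just a matter of listing the two servers in the same configuration. Here is a sketch of combined VS Code user settings, reusing the Neon entry above and the Azure MCP Server entry from the earlier article; exact keys may vary by VS Code version.

{
  "mcp": {
    "servers": {
      "Neon": {
        "command": "npx",
        "args": ["-y", "mcp-remote", "https://mcp.neon.tech/sse"]
      },
      "Azure MCP Server": {
        "command": "npx",
        "args": ["-y", "@azure/mcp@latest", "server", "start"]
      }
    }
  }
}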
Final Thoughts

With GitHub Copilot, the Neon MCP server, and Azure Functions, you're no longer writing backend code line by line; you're orchestrating services with intent, and building APIs becomes remarkably fast. This is not the future: it's something you can use today.

Great Devs Don't Build Alone: Welcome to Azure AI Foundry Developer Community

In the ever-evolving landscape of software development, speed, collaboration, and innovation are paramount. The days of solitary coding marathons, hunting for scattered resources, and struggling through complex challenges alone are fading. Instead, developers are embracing the power of community, where knowledge is shared, problems are tackled together, and breakthroughs happen faster than ever. That's why we built the Azure AI Foundry Developer Community: your one-stop hub to accelerate learning, find support, and collaborate with fellow developers.

🚀 Why Join?

🌐 A Vibrant Forum for Q&A + Code

Need quick answers? Our interactive forum is buzzing with experienced devs ready to help troubleshoot, optimize code, and share best practices. Whether you're facing an issue or have feedback, you'll find insights here that push your projects forward: https://aka.ms/azureaifoundry/forum

🎙️ DevBlogs with the Latest Updates

Stay ahead of the curve and up to date with the latest on Azure AI Foundry at https://devblogs.microsoft.com/foundry/. We bring you practical insights straight from the heart of the Azure AI Foundry team.

🎮 Real-time Discord Chats, Events & Learning

Get instant support, network with top developers, and participate in exciting events that help hone your skills. Our real-time Discord community ensures you stay connected, inspired, and ready to tackle new challenges.

🔗 Ready to Connect?

Join the Azure AI Foundry Developer Community today: 🔗 Join the Forum | 🔗 Hop on Discord. Don't let development slow you down. Build faster, smarter, and together. Because great devs don't build alone.
Mastering Query Fields in Azure AI Document Intelligence with C#

Introduction

Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.

Use case scenario

In the course of Azure AI Document Intelligence engineering work and code reviews, many developers have encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`. The error is similar to:

Inner Error: The parameter urlSource or base64Source is required.

This challenge stems from parameter errors and SDK changes. The most problematic code looks like this in C#:

BinaryData data = BinaryData.FromBytes(Content);
var queryFields = new List<string> { "FullName", "CompanyName", "JobTitle" };
var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    data,
    "1-2",
    queryFields: queryFields,
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);

One of the reasons this fails is that the developer is using `Azure.AI.DocumentIntelligence v1.0.0`, where `base64Source` and `urlSource` are handled internally; the older examples using `AnalyzeDocumentContent` no longer apply, leading to errors.

Practical Solution

Using `AnalyzeDocumentOptions`.
Alternative method using a manual JSON payload.

Using AnalyzeDocumentOptions

The correct method involves using `AnalyzeDocumentOptions`, which streamlines the request construction using the steps below.

Prepare the document content:

BinaryData data = BinaryData.FromBytes(Content);

Create `AnalyzeDocumentOptions`:

var analyzeOptions = new AnalyzeDocumentOptions(modelId, data)
{
    Pages = "1-2",
    Features = { DocumentAnalysisFeature.QueryFields },
    QueryFields = { "FullName", "CompanyName", "JobTitle" }
};

- `modelId`: your trained model's ID.
- `Pages`: the pages to analyze (e.g., "1-2").
- `Features`: enables `QueryFields`.
- `QueryFields`: defines which fields to extract.

Run the analysis:

Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    analyzeOptions
);
AnalyzeResult result = operation.Value;

The reason this works: the SDK manages `base64Source` automatically, the approach matches the latest SDK standards, and it results in cleaner, more maintainable code.

Alternative method using a manual JSON payload

For advanced use cases where more control over the request is needed, you can manually create the JSON payload. For example:

var queriesPayload = new
{
    queryFields = new[]
    {
        new { key = "FullName" },
        new { key = "CompanyName" },
        new { key = "JobTitle" }
    }
};
string jsonPayload = JsonSerializer.Serialize(queriesPayload);
BinaryData requestData = BinaryData.FromString(jsonPayload);

var operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    modelId,
    requestData,
    "1-2",
    features: new List<DocumentAnalysisFeature> { DocumentAnalysisFeature.QueryFields }
);

When to use the above: custom request formats, or non-standard data source integration.

Key points to remember

Breaking changes exist between preview versions and v1.0.0, so check your SDK version.
Prefer `AnalyzeDocumentOptions` for simpler, error-free integration using built-in classes.
Ensure your content is wrapped in `BinaryData`, or use a direct URL, for correct document input.

Conclusion

In this article, we have seen how using `AnalyzeDocumentOptions` significantly improves how you integrate query fields with Azure AI Document Intelligence in C#. It ensures your solution is up to date, readable, and more reliable. Staying aware of SDK updates and evolving best practices will help you unlock deeper insights from your documents effortlessly.

Reference

Official AnalyzeDocumentAsync documentation.
Official Azure SDK documentation.
Azure Document Intelligence C# SDK support for add-on query fields.