JS AI Build‑a‑thon: Wrapping Up an Epic June 2025!
After weeks of building, testing, and learning — we’re officially wrapping up the first-ever JS AI Build-a-thon 🎉. This wasn’t your average coding challenge. This was a hands-on journey where JavaScript and TypeScript developers dove deep into real-world AI concepts — from local GenAI prototyping to building intelligent agents and deploying production-ready apps. Whether you joined from the start or hopped on midway, you built something that matters — and that’s worth celebrating.

Replay the Journey

No worries if you joined late or want to revisit any part of the journey. The JS AI Build-a-thon was designed to let you learn at your own pace, so whether you’re starting now or polishing up your final project, here’s your complete quest map:

Build-a-thon setup guide: https://aka.ms/JSAIBuildathonSetup
Quest 1: 🔧 Build your first GenAI app locally with GitHub Models 👉🏽 https://aka.ms/JSAIBuildathonQuest1
Quest 2: ☁️ Move your AI prototype to Azure AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest
Quest 3: 🎨 Add a chat UI using Vite + Lit 👉🏽 https://aka.ms/JSAIBuildathonQuest3
Quest 4: 📄 Enhance your app with RAG (Chat with Your Data) 👉🏽 https://aka.ms/JSAIBuildathonQuest4
Quest 5: 🧠 Add memory and context to your AI app 👉🏽 https://aka.ms/JSAIBuildathonQuest5
Quest 6: ⚙️ Build your first AI Agent using AI Foundry 👉🏽 https://aka.ms/JSAIBuildathonQuest6
Quest 7: 🧩 Equip your agent with tools from an MCP server 👉🏽 https://aka.ms/JSAIBuildathonQuest7
Quest 8: 💬 Ground your agent with real-time search using Bing 👉🏽 https://aka.ms/JSAIBuildathonQuest8
Quest 9: 🚀 Build a real-world AI project with full-stack templates 👉🏽 https://aka.ms/JSAIBuildathonQuest9

Link to our space in the AI Discord Community: https://aka.ms/JSAIonDiscord

Project Submission Guidelines

📌 Quest 9 is where it all comes together. Participants chose a problem, picked a template, customized it, submitted it, and rallied their community for support!

🏅 Claim Your Badge!

Whether you completed select quests or went all the way, we celebrate your learning. If you participated in the June 2025 JS AI Build-a-thon, make sure to submit the Participation Form to receive your participation badge recognizing your commitment to upskilling in AI with JavaScript/TypeScript.

What’s Next?

We’re not done. In fact, we’re just getting started. We’re already cooking up JS AI Build-a-thon v2, which will introduce:

Running everything locally with Foundry Local
Real-world RAG with vector databases
Advanced agent patterns with remote MCPs
And much more based on your feedback

Want to shape what comes next? Drop your ideas in the participation form and in our Discord. In the meantime, add these resources to your JavaScript + AI Dev Pack:

🔗 Microsoft for JavaScript developers
📚 Generative AI for Beginners with JavaScript

Wrap-Up

This build-a-thon showed what’s possible when developers are empowered to learn by doing. You didn’t just follow tutorials — you shipped features, connected services, and created working AI experiences. We can’t wait to see what you build next.

👉 Bookmark the repo
👉 Join the Azure AI Foundry Discord Server
👉 Keep building

Until next time — keep coding, keep shipping!
Quest 5 - I want to add conversation memory to my app

In this quest, you’ll explore how to build GenAI apps using a modern JavaScript AI framework, LangChain.js. LangChain.js helps you orchestrate prompts, manage memory, and build multi-step AI workflows, all while staying in your favorite language. Using LangChain.js, you will make your GenAI chat app feel truly personal by teaching it to remember. You’ll upgrade your AI prototype with conversation memory, allowing it to recall previous interactions and making the conversation flow more natural and human-like.

👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon
💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel.

🔧 What You’ll Build

A smarter, context-aware chat backend that:

Remembers user conversations across multiple exchanges (e.g., knowing "Terry" after you introduced yourself as Terry)
Maintains session-specific memory so each chat thread feels consistent and coherent
Uses LangChain.js abstractions to streamline state management

🚀 What You’ll Need

✅ A GitHub account
✅ Visual Studio Code
✅ Node.js
✅ A working chat app from previous quests (UI + Azure-based chat endpoint)

🛠️ Concepts You’ll Explore

Integrating LangChain.js
Learn how LangChain.js simplifies building AI-powered web applications by providing a standard interface to connect your backend with Azure’s language models. You’ll see how using this framework decouples your code and unlocks advanced features.

Adding Conversation Memory
Understand why memory matters in chatbots. Explore how conversation memory lets your app remember previous user messages within each session, enabling more context-aware and coherent conversations.

Session-based Message History
Implement session-specific chat histories using LangChain’s memory modules (ChatMessageHistory and BufferMemory). Each user or session gets its own history, so previous questions and answers inform future responses without manual log management. A short sketch of this pattern follows at the end of this quest.

Seamless State Management
Experience how LangChain handles chat logs and memory behind the scenes, freeing you from manually stitching together chat history or juggling context with every prompt.

📖 Bonus Resources to Go Deeper

Exploring Generative AI in App Development: LangChain.js and Azure: a video introduction to LangChain.js and how you can build a project with LangChain.js and Azure.
🦜️🔗 Langchain: the official LangChain.js documentation.
GitHub - Azure-Samples/serverless-chat-langchainjs: Build your own serverless AI Chat with Retrieval-Augmented-Generation using LangChain.js, TypeScript and Azure (GitHub sample).
GitHub - Azure-Samples/langchainjs-quickstart-demo: Build a generative AI application using LangChain.js, from local to Azure (GitHub sample).
Microsoft | 🦜️🔗 Langchain: official LangChain documentation on all functionality related to Microsoft and Microsoft Azure.
Quest 4 - I want to connect my AI prototype to external data using RAG | Microsoft Community Hub: a link to the previous quest’s instructions.
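If you want a feel for how those pieces fit together before starting, here is a minimal sketch of session-scoped memory with LangChain.js. It is a sketch under stated assumptions, not the quest’s solution: import paths and the exact chat-model class vary between LangChain.js versions, and the model settings shown are placeholders for the Azure-backed model you configured in earlier quests.

```javascript
// Minimal sketch: per-session conversation memory with LangChain.js.
// Import paths differ between LangChain.js versions; adjust to the version you installed.
import { ChatOpenAI } from "@langchain/openai";
import { BufferMemory, ChatMessageHistory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

// One message history per session, so each chat thread keeps its own context.
const histories = new Map();

function getChain(sessionId) {
  if (!histories.has(sessionId)) {
    histories.set(sessionId, new ChatMessageHistory());
  }

  // BufferMemory replays the stored messages into every prompt for us.
  const memory = new BufferMemory({ chatHistory: histories.get(sessionId) });

  // Placeholder model config: point this at the Azure-hosted model from the earlier quests.
  const model = new ChatOpenAI({ temperature: 0.7 });

  return new ConversationChain({ llm: model, memory });
}

// Usage: the second call can answer "What's my name?" because the first turn is remembered.
const chain = getChain("session-terry");
await chain.invoke({ input: "Hi, I'm Terry!" });
const { response } = await chain.invoke({ input: "What's my name?" });
console.log(response);
```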
Quest 4 - I want to connect my AI prototype to external data using RAG

In Quest 4 of the JS AI Build-a-thon, you’ll integrate Retrieval-Augmented Generation (RAG) to give your AI apps access to external data like PDFs. You’ll explore embeddings, vector stores, and how to use the pdf-parse library in JavaScript to build more context-aware apps — with challenges to push you even further.
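As a taste of the ingestion side of that pipeline, here is a minimal sketch of extracting text from a PDF with pdf-parse and splitting it into chunks ready for embedding. The file path and chunk size are illustrative, and how you embed and store the chunks depends on the vector store you choose in the quest.

```javascript
// Minimal sketch: read a PDF and split its text into chunks for embedding.
// The file path and chunk size below are illustrative.
import fs from "node:fs/promises";
import pdf from "pdf-parse";

const buffer = await fs.readFile("./data/employee-handbook.pdf");
const { text } = await pdf(buffer); // pdf-parse returns the extracted plain text

// Naive fixed-size chunking; the quest discusses chunking strategies and vector stores.
const chunkSize = 1000;
const chunks = [];
for (let i = 0; i < text.length; i += chunkSize) {
  chunks.push(text.slice(i, i + chunkSize));
}

console.log(`Extracted ${chunks.length} chunks from the PDF`);
```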
Quest 6 - I want to build an AI Agent

Quest 6 of the JS AI Build-a-thon marks a major milestone — building your first intelligent AI agent using the Azure AI Foundry VS Code extension. In this quest, you’ll design, test, and integrate an agent that can use tools like Bing Search, respond to goals, and adapt in real time. With updated instructions, real-world workflows, and powerful tooling, this is where your AI app gets truly smart.
Quest 7: Create an AI Agent with Tools from an MCP Server

In Quest 7 of the JS AI Build-a-thon, developers explore how to create AI agents that use real tools through the Model Context Protocol (MCP). With the MCP TypeScript SDK and the AI Toolkit in VS Code, you’ll connect your agent to a custom MCP server and give it real capabilities, like accessing your system’s OS info. This builds on agentic development and introduces tooling practices that reflect how modern AI apps are built. A rough sketch of such a server follows below.
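To make that concrete, here is a minimal sketch of an MCP server that exposes one OS-info tool, written against the MCP TypeScript SDK (@modelcontextprotocol/sdk). The server name and tool name are illustrative, and import paths or method signatures may differ slightly depending on the SDK version you install.

```javascript
// Minimal sketch of an MCP server with one tool that reports OS information.
// Import paths and method names follow @modelcontextprotocol/sdk conventions and may vary by version.
import os from "node:os";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "os-info-server", version: "1.0.0" });

// Register a tool the agent can call; it returns basic details about the host system.
server.tool(
  "get_os_info",
  "Returns platform, architecture, and CPU count for this machine",
  async () => ({
    content: [
      {
        type: "text",
        text: JSON.stringify({
          platform: os.platform(),
          arch: os.arch(),
          cpus: os.cpus().length,
          freeMemoryMB: Math.round(os.freemem() / 1024 / 1024),
        }),
      },
    ],
  })
);

// Expose the server over stdio so an MCP client (such as the AI Toolkit) can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```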
Quest 9: I want to use a ready-made template

Building robust, scalable AI apps is tough, especially when you want to move fast, follow best practices, and avoid being bogged down by endless setup and configuration. In this quest, you’ll discover how to accelerate your journey from prototype to production by leveraging ready-made templates and modern cloud tools. Say goodbye to decision fatigue and hello to streamlined, industry-approved workflows you can make your own.

👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon
💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel.

🚀 What You’ll Build

A fully functional AI application deployed on Azure, customized to solve a real problem that matters to you.
A codebase powered by a production-grade template, complete with all the necessary infrastructure-as-code, deployment scripts, and best practices already baked in.
Your own proof-of-concept or MVP, ready to scale or show off to the world.

🛠️ What You Need

✅ GitHub account
✅ Visual Studio Code
✅ Node.js
✅ Azure subscription (free trials and student credits available)
✅ Azure Developer CLI (azd)
✅ The curiosity to solve a meaningful problem!

🧩 Concepts You’ll Explore

Azure Developer CLI (azd)
Learn how azd, the developer-first command-line tool, simplifies authentication, setup, deployment, and teardown for Azure apps. With intuitive commands like azd up and azd deploy, you can go from zero to running in the cloud, with no deep cloud expertise required.

Production-Ready Templates
Explore a gallery of customizable templates designed to get your app up and running fast. These templates aren’t just “hello world”: they feature scalable architectures, sample code, and reusable infrastructure assets to launch everything from chatbots to RAG apps to full-stack solutions.

Infrastructure as Code (IaC)
See how every template bundles configuration files and scripts to automatically provision the cloud resources you need. You’ll get a taste of how top teams ship secure, repeatable, and maintainable systems without manually clicking through Azure dashboards.

Best Practices by Default
Templates incorporate industry best practices for code structure, deployment, and scalability. You’ll spend less time researching how to “do it right” and more time customizing your application to fit your unique use case.

Customization for Real-World Problems
Pick a template and make it yours! Whether you’re building a copilot, a chat-enabled app, or a serverless API, you’ll learn how to tweak the frontend, swap out backend logic, connect your own data sources, and shape the solution to solve a real-world problem you care about.

🌟 Bonus Resources

Here are some additional resources to help you learn more about the Azure Developer CLI (azd) and the templates available:

Kickstart JS/TS projects with azd Templates
Kickstart your JavaScript projects with azd on YouTube

⏭️ What next?

With production-ready templates and the Azure Developer CLI at your side, you’re ready to move from “just an idea” to a deployable, scalable solution without reinventing the wheel. Start with the right foundation, customize with confidence, and ship your next AI app like a pro! Once your project is done, make sure you submit it to GitHub - Azure-Samples/JS-AI-Build-a-thon.
AI Repo of the Week: Generative AI for Beginners with JavaScript

Introduction

Ready to explore the fascinating world of Generative AI using your JavaScript skills? This week’s featured repository, Generative AI for Beginners with JavaScript, is your launchpad into the future of application development. Whether you're just starting out or looking to expand your AI toolbox, this open-source GitHub resource offers a rich, hands-on journey. It includes interactive lessons, quizzes, and even time-travel storytelling featuring historical legends like Leonardo da Vinci and Ada Lovelace. Each chapter combines narrative-driven learning with practical exercises, helping you understand foundational AI concepts and apply them directly in code. It’s immersive, educational, and genuinely fun.

What You'll Learn

1. 🧠 Foundations of Generative AI and LLMs
Start with the basics: What is generative AI? How do large language models (LLMs) work? This chapter lays the groundwork for how these technologies are transforming JavaScript development.

2. 🚀 Build Your First AI-Powered App
Walk through setting up your environment and creating your first AI app. Learn how to configure prompts and unlock the potential of AI in your own projects.

3. 🎯 Prompt Engineering Essentials
Get hands-on with prompt engineering techniques that shape how AI models respond. Explore strategies for crafting prompts that are clear, targeted, and effective.

4. 📦 Structured Output with JSON
Learn how to guide the model to return structured data formats like JSON—critical for integrating AI into real-world applications.

5. 🔍 Retrieval-Augmented Generation (RAG)
Go beyond static prompts by combining LLMs with external data sources. Discover how RAG lets your app pull in live, contextual information for more intelligent results.

6. 🛠️ Function Calling and Tool Use
Give your LLM new powers! Learn how to connect your own functions and tools to your app, enabling more dynamic and actionable AI interactions.

7. 📚 Model Context Protocol (MCP)
Dive into MCP, a new standard for organizing prompts, tools, and resources. Learn how it simplifies AI app development and fosters consistency across projects.

8. ⚙️ Enhancing MCP Clients with LLMs
Build on what you’ve learned by integrating LLMs directly into your MCP clients. See how to make them smarter, faster, and more helpful.

✨ More chapters coming soon—watch the repo for updates!

Companion App: Interact with History

Experience the power of generative AI in action through the companion web app—where you can chat with historical figures and witness how JavaScript brings AI to life in real time.

Conclusion

Generative AI for Beginners with JavaScript is more than a course—it’s an adventure into how storytelling, coding, and AI can come together to create something fun and educational. Whether you're here to upskill, experiment, or build the next big thing, this repository is your all-in-one resource to get started with confidence.

🔗 Jump into the future of development—check out the repo and start building with AI today!
Quest 8: I want to automate code reviews

Ever wished your code reviews could be faster, more consistent, and maybe even… automated? In this quest, you’ll build a smart code review system powered by AI, designed to catch issues and share feedback before you commit your changes. GenAIScript is a modern JavaScript extension designed for seamless AI integration. GenAIScript lets you automate repetitive tasks, orchestrate prompts, and create multi-step AI workflows, all within your coding environment. By leveraging GenAIScript, you’ll transform your development workflow by adding AI-powered code reviews. A rough sketch of such a review script follows after the resources below.

👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon
💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel.

🔧 What You’ll Build

An automated code review agent that analyzes your staged code changes and provides actionable, best-practice feedback right inside VS Code.
A custom script (using GenAIScript) that plugs into your development workflow and delivers review comments every time you make a change.

🚀 What You’ll Need

✅ A GitHub account
✅ Visual Studio Code
✅ Node.js
✅ GenAIScript extension for VS Code (installation instructions provided in the quest)
✅ GitHub Models access using a PAT (refer to Quest 1 for more details on GitHub Models)

🛠️ Concepts You’ll Explore

GenAIScript for AI Automation
Discover how GenAIScript extends JavaScript with simple AI scripting, letting you write powerful workflows that connect to AI models without the usual complexity. You’ll see how scripts can automate tasks that once required manual effort or custom bots.

Automating Code Reviews with AI
Understand how AI can analyze your code changes and provide valuable feedback. Explore how automated reviews help you catch mistakes early, enforce best practices, and maintain consistent code quality across your project.

Using GitHub Tokens for Secure Integration
Discover how to connect GenAIScript to GitHub’s AI models by configuring a secure personal access token. This unlocks AI features for code analysis and ensures your workflow remains both powerful and protected.

📖 Bonus Resources to Go Deeper

GenAIScript sample collection – automation ideas: a curated collection of sample scripts and projects showcasing how to use GenAIScript for AI-powered automation, code reviews, and custom workflows.
Generative AI with JavaScript on YouTube: a YouTube series hub for developers exploring how to integrate generative AI with JavaScript.
GenAIScript official docs & extension: extension and documentation for GenAIScript.
Quest - I Want to Build a Local Gen AI Prototype: this kickoff quest in the JS AI Build‑a‑thon guides you through building your very first generative AI prototype locally and entirely in JavaScript.
Quest 7: Create an AI Agent with Tools from an MCP Server | Microsoft Community Hub: a link to the previous quest.
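For a sense of what the quest builds toward, here is a rough sketch of a GenAIScript review script. It is a sketch under stated assumptions, not the quest’s solution: the script(), def(), $, and git.diff helpers reflect GenAIScript’s scripting surface as documented in its samples, and the "github:gpt-4o-mini" model identifier is only an example of pointing a script at GitHub Models; check the GenAIScript docs for the exact option names in your version.

```javascript
// review.genai.mjs — rough sketch of an AI code review script for GenAIScript.
// The helpers used here (script, def, $, git.diff) come from GenAIScript's scripting API;
// option names and the model identifier are illustrative and may differ in your version.
script({
  title: "Review staged changes",
  description: "Summarizes issues in the currently staged git changes",
  model: "github:gpt-4o-mini", // example: GitHub Models, authenticated via your PAT
});

// Collect the staged diff so the model only reviews what is about to be committed.
const diff = await git.diff({ staged: true });

// Expose the diff to the prompt under the name GIT_DIFF.
def("GIT_DIFF", diff, { language: "diff" });

// The prompt itself: ask for concrete, actionable review feedback.
$`You are an experienced reviewer. Review the changes in GIT_DIFF.
Report potential bugs, style issues, and missing tests as a concise bulleted list,
referencing file names and line context where possible.`;
```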
Use Prompty with Foundry Local

Prompty is a powerful tool for managing prompts in AI applications. Not only does it allow you to easily test your prompts during development, but it also provides observability, understandability, and portability. Here’s how to use Prompty with Foundry Local to support your AI applications with on-device inference.

Foundry Local

At the Build '25 conference, Microsoft announced Foundry Local, a new tool that allows developers to run AI models locally on their devices. Foundry Local offers developers several benefits, including performance, privacy, and cost savings.

Why Prompty?

When you build AI applications with Foundry Local, but also with other language model hosts, consider using Prompty to manage your prompts. With Prompty, you store your prompts in separate files, making it easy to test and adjust them without changing your code. Prompty also supports templating, allowing you to create dynamic prompts that adapt to different contexts or user inputs.

Using Prompty with Foundry Local

The most convenient way to use Prompty with Foundry Local is to create a new configuration for Foundry Local. Using a separate configuration allows you to seamlessly test your prompts without having to repeat the configuration for every prompt. It also allows you to easily switch between different configurations, such as Foundry Local and other language model hosts.

Install Prompty and Foundry Local

To get started, install the Prompty Visual Studio Code extension and Foundry Local. Start Foundry Local from the command line by running foundry service start and note the URL on which it listens for requests, such as http://localhost:5272 or http://localhost:5273.

Create a new Prompty configuration for Foundry Local

If you don't have a Prompty file yet, create one to easily access Prompty settings. In Visual Studio Code, open Explorer, right-click to open the context menu, and select New Prompty. This creates a basic.prompty file in your workspace.

Create the Foundry Local configuration

From the status bar, select default to open the Prompty configuration picker. When prompted to select the configuration, choose Add or Edit.... In the settings pane, choose Edit in settings.json. In the settings.json file, add a new configuration for Foundry Local to the prompty.modelConfigurations collection, for example (ignore the comments, which only explain each setting):

{
  // Foundry Local model ID that you want to use
  "name": "Phi-4-mini-instruct-generic-gpu",
  // API type; Foundry Local exposes OpenAI-compatible APIs
  "type": "openai",
  // API key required for the OpenAI SDK, but not used by Foundry Local
  "api_key": "local",
  // The URL where Foundry Local exposes its API
  "base_url": "http://localhost:5272/v1"
}

Important: Be sure to check that you use the correct URL for Foundry Local. If you started Foundry Local with a different port, adjust the URL accordingly.

Save your changes and go back to the .prompty file. Once again, select the default configuration from the status bar, and choose Phi-4-mini-instruct-generic-gpu from the list. Since the model and API are configured, you can remove them from the .prompty file.

Test your prompts

With the newly created Foundry Local configuration selected, in the .prompty file, press F5 to test the prompt. The first time you run the prompt, it may take a few seconds because Foundry Local needs to load the model. Eventually, you should see the response from Foundry Local in the output pane.

Summary

Using Prompty with Foundry Local allows you to easily manage and test your prompts while running AI models locally. By creating a dedicated Prompty configuration for Foundry Local, you can conveniently test your prompts with Foundry Local models and switch between different model hosts and models if needed.
Quest 3 - I want to add a simple chat interface to my AI prototype

In this quest, you’ll give your GenAI prototype a polished chat interface using Vite and Lit. Along the way, you’ll also manage application infrastructure with Bicep and the Azure Developer CLI (azd), making your prototype more structured and ready for deployment. A small sketch of what a Lit-based chat component can look like follows below.
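To give a flavor of the quest, here is a minimal sketch of a Lit component that renders messages and posts questions to a backend chat endpoint. The element name, the /api/chat path, and the response shape are placeholders; the quest’s own UI and endpoint will differ.

```javascript
// Minimal sketch of a Lit chat component; element name, endpoint path, and response shape are placeholders.
import { LitElement, html } from "lit";

class ChatPanel extends LitElement {
  static properties = {
    messages: { type: Array },
  };

  constructor() {
    super();
    this.messages = [];
  }

  async sendMessage(event) {
    event.preventDefault();
    const input = this.renderRoot.querySelector("input");
    const question = input.value.trim();
    if (!question) return;

    this.messages = [...this.messages, { role: "user", text: question }];
    input.value = "";

    // Call the chat endpoint built in the earlier quests (path and payload are assumptions).
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    });
    const { answer } = await res.json();
    this.messages = [...this.messages, { role: "assistant", text: answer }];
  }

  render() {
    return html`
      <ul>
        ${this.messages.map((m) => html`<li><strong>${m.role}:</strong> ${m.text}</li>`)}
      </ul>
      <form @submit=${this.sendMessage}>
        <input placeholder="Ask something..." />
        <button type="submit">Send</button>
      </form>
    `;
  }
}

customElements.define("chat-panel", ChatPanel);
```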