Building securely: Microsoft Build 2024

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.

This year’s Microsoft Build is shaping up to be a must-attend event. Demand for secure software development continues to grow, and in today’s complex digital world, developers are being asked to do even more to keep apps, AI, and code secure, with security built in and integrated at every phase of design, development, and deployment. Developers who attend Microsoft Build can learn how to manage and govern AI securely. Our commitment is to provide developers with the knowledge, tools, and practices needed to build safely: to ensure security isn’t an afterthought, but a fundamental component of the entire development lifecycle. Microsoft Build is also a great time and place to connect with other developers globally, grow your skills, and learn more about building secure copilots, generative AI, application security, and more. Register now for live keynotes, breakout sessions, demos, and social events. Or, if you can’t make it in person, access sessions online and on demand.


Building on a trusted platform 

Building on a trusted platform is crucial in today's digital-first world. At Microsoft, our platforms, from Azure and GitHub to Visual Studio and Power Apps, are designed with security at their core, giving developers the tools and integrations they need to innovate securely. By building on a trusted platform, whether with Microsoft or as part of a multicloud strategy, developers can focus on innovation and great end-user experiences.


How to build AI securely 

AI, the most transformative technology of our era, is rapidly reshaping our world, offering incredible opportunities but also introducing new risks. Microsoft is at the forefront, ensuring our AI is developed securely by embedding security and safety in all our products and building on pioneering efforts like Microsoft’s Responsible AI Framework. Our goal is to make security a priority in AI development, empowering developers to create exciting and impactful AI tools without compromising safety.


The conference attendee experience 

Connect with experts – Meet up with Microsoft security experts who are ready to answer your deep technical questions. Meet-up topics include GitHub Advanced Security, DevSecOps, Microsoft Entra, Microsoft Purview, Microsoft Defender, Microsoft Intune, and multicloud security. Stop by the expert meet-up area to connect and learn.


Join demonstrations – Stop by The Hub to see live demos covering AI, Copilot, low-code tools, and more. You can learn how to fix security leaks at scale, combat fraud with real-time identity verification, and create secure apps in minutes. We’ll also be demonstrating simple and secure app authentication with authentication brokers, how to create pixel-perfect authentication experiences, and more.


AI for good – Connect with experts and peers about social impact and how AI can be utilized to make the world a better place for all. Find us right outside The Hub. 


Security-focused sessions – We’re preparing multiple in-depth sessions on building secure apps with Microsoft platforms. In our Microsoft Build sessions, which you can join in person or online, our experienced engineers share real-life examples and first-hand accounts of how we embed security and safety into all our products, helping every developer increase the security of their AI development.


We’re really excited about our content this year. Sessions range from workflows and API security testing to multiple sessions on LLMs, securing generative AI applications, AI red teaming, AI security, and more. You can find them all in the session catalog. Among the sessions you do not want to miss are:

  • Inside AI Security with Mark Russinovich – Join Mark Russinovich, Chief Technology Officer and Technical Fellow for Microsoft Azure, as he explores the landscape of AI security, focusing on threat modeling, defense tactics, our red teaming approaches, and the path to confidential AI. 
  • How Microsoft approaches AI red teaming – The AI Red Team (AIRT) serves as the independent red team for high-risk AI across Microsoft. This session will cover processes, techniques, and tools, including PyRIT, AIRT's open-source automation framework. Presented by Tori Westerhoff, Principal Technical Program Manager, Responsible AI Red Teaming, Microsoft; and Pete Bryan, AI Security Researcher, Microsoft.


Unwind with Microsoft experts and peers  

Develop the vibe. Join us at the Microsoft Security Developer Kickback on May 22, 6:00–9:00 PM, at the Seattle Collective. Celebrate the new era of security with Microsoft subject matter experts and peers, all while enjoying heavy appetizers and beverages. You must be registered to attend this event, so be sure to RSVP today.


Please note that transportation to and from the event venue will not be provided. Please arrange your own transportation to ensure a safe return to your hotel or accommodation.


Learn more and register now 

Check out the session catalog to start building your own itinerary and maximize your Microsoft Build 2024 attendee experience. With 400+ sessions and 90+ focused on AI, you’re sure to find just the right mix of content tailored to your specific development interests. Register now. We’re excited for you to join us and hope to see you there! 
