Strategy, Risk and Culture: Equipping the Next Generation of Cyber & AI Leaders

Cyber & AI Masterclass MES 1 2026

Launching Australia's 2026 Cyber Leadership Education Series - a Foundational Masterclass for Boards, CISOs, and Rising Executives

Leadership in cybersecurity and AI demands more than technical depth - it requires strategic vision, executive influence, and resilient culture.

MES 1 (Masterclass 1) kicks off Boostify Cyber's 2026 Education Series, transforming ambitious cyber security and technology professionals into board-ready leaders who align cyber/AI initiatives with business success, communicate risk with impact, and build trustworthy governance in an AI-accelerated era.

  • Date: Wednesday, 6 May 2026

  • Location: Sydney

  • Format: In-person interactive masterclass (limited capacity)

  • Instructors: Chirag Joshi and Branko Ninkovic

  • Cost: Complimentary for approved participants (selective admission)

  • Status: Expressions of Interest now open – limited spots

AI Governance and Security

Risk, Accountability and Practical Implementation - 6 May 2026

AI Adoption, Risk and Governance in Practice

An opening discussion on how AI adoption is accelerating across organisations and what this means for cyber security, data protection, and governance.

This segment explores how AI is moving rapidly from experimentation to operational use, and how this shift is creating new expectations for cyber and risk leaders. The discussion will also reflect on recent national developments in Australia, including evolving policy direction, regulatory expectations, and industry initiatives shaping organisational approaches to AI governance.

Discussion areas include:

  • What Boards and executives should consider when promoting responsible and safe AI adoption within their organisations

  • Key cyber, data, and operational risks emerging from AI adoption

  • Reliance on third-party AI models and platforms

  • Establishing practical AI governance approaches, including policies, guardrails, and AI inventories

  • Clarifying accountability across business, technology, cyber and risk teams

This session establishes the foundation for understanding how organisations are approaching responsible AI adoption in practice.

Integrating AI into Cyber Security and Risk Programs

As AI capabilities expand across organisations, cyber and risk teams must adapt existing security and governance practices.

This session explores how organisations are incorporating AI into their cyber security and risk programs, including:

  • Protecting organisational data when interacting with AI systems

  • Managing risks associated with AI-enabled applications and automation

  • Aligning AI initiatives with enterprise security architecture and cyber frameworks

  • Avoiding fragmented controls as AI adoption expands across the organisation

Interactive Scenario – Exercising Decision-Making When the Stakes Are High

Participants will work through a practical scenario involving the deployment of an AI-enabled capability within an organisation.

The exercise will explore governance considerations, risk ownership, and the decisions cyber and risk leaders must make before allowing such capabilities to be deployed.

Participants will apply concepts discussed during the session and share perspectives based on their own organisational experience.

How You Will Learn

Interactive Exercises

Engage in real-world scenarios and problem-solving activities.

Actionable Frameworks

Take away proven methodologies you can implement immediately.

Industry Best Practices

Gain insights from seasoned cybersecurity leaders with hands-on experience.

Your Instructors

Led by renowned cybersecurity leaders and industry experts, including:

Chirag D Joshi

Multi-award-winning CISO, author, and security thought leader recognised for his work in shaping cyber resilience across critical sectors.

Branko Ninkovic

Cybersecurity Strategist & Board Advisor with 20+ years of experience guiding major organisations in security strategy and governance.