ABOUT THE VIRTUAL SUMMIT
The Operationalizing Cloud Native with Kubernetes Virtual Summit is an annual MediaOps virtual event brought to you by Container Journal — a MediaOps Community. This is a FREE, full-day, multi-track conference, complete with a state-of-the-art virtual environment.
Attendees will hear from thought leaders, business executives, practitioners, engineers, and analysts in a variety of formats including keynotes, panels, interactive sessions and hands-on workshops. Our exhibit hall will showcase the latest solutions from leading Kubernetes companies with engineers standing by to answer questions in real-time. Almost all of our speakers will be available via chat during their sessions to answer your questions!
Register now and get free access to all 36 sessions, including live Q&A during the event, plus more than 200 sponsor resources for download!
WHO SHOULD ATTEND
The Operationalizing Cloud Native with Kubernetes Virtual Summit is for anyone involved in software development, delivery, and operations — or those who would like to be. The purpose of this conference is to bring the cloud native community together to learn, connect, and collaborate.
- Executive Leaders
- Software Engineers
- Application Developers
- Technical Managers
- IT Operations
- Those looking to learn more
Thursday, October 1, 10am-7pm ET
The State of Kubernetes Security
Kubernetes has improved its security posture significantly over the last couple of years — gone are the days when the default settings could leave your cluster open to the internet, thankfully! But does that mean you can fire up a Kubernetes cluster and forget all about security? Liz reviews what you do and don’t need to worry about when running your cloud native applications.
Live Hacking! Practical security examples for AWS EKS/Fargate using Falco
Join Kubernetes expert and co-author of Cloud Native Infrastructure Kris Nóva for a live and interactive keynote. In this live coding demonstration, she will be talking about the nuances of running an EKS cluster on Fargate. She will explore how Falco is able to securely tap into the underlying infrastructure using kernel tracing components built around eBPF and ptrace(2) (CAP_SYS_PTRACE). She draws on her deep history of managing Kubernetes clusters, and discusses key aspects of the system to keep in mind while building out a cloud native application in AWS.
Operationalizing Kubernetes: Interview with Tim Hockin
Operationalizing Kubernetes: Are we there yet? Yes, there are some Unicorns leading the pack, but is the Kubernetes ecosystem mature enough to truly operationalize at scale? What are the telltale signs that we have reached — or will reach — this inflection point? What is on the short-term and long-term horizon that will better enable this? We will explore these questions and more with Tim Hockin, Principal Software Engineer at Google and one of the originators of Kubernetes.
Using Jenkins, Jenkins X and Tekton with GitOps
This keynote will demonstrate how application developers can use the best tool for the job: Jenkins, Jenkins X and/or Tekton. These technologies can solve all of your cloud native CI/CD requirements using a simple GitOps approach that is easy to set up, use and manage from Git.
Building Sustainable Ecosystems for Cloud Native Software
This live keynote will share learnings and research by the Cloud Native Computing Foundation about the cloud native ecosystem. You will leave the session with a knowledge of the scope of cloud native, upcoming trends, and best practices to supercharge your developer organizations to benefit from Kubernetes and cloud native. Following the presentation, join Priyanka via Zoom for a live Q&A session!
Live Chat Session with AWS Container Experts
Get your questions about Kubernetes on AWS answered on the spot through an interactive Q&A session with our container gurus. You’ll be greeted by Brent Langston and Adam Keller, our enigmatic hosts from Containers on the Couch, and Mikhail Shapirov, our resident Kubernetes expert. Come prepared with tough questions and challenge the team.
Conway's Law: The Hidden Secret for a Successful Digital Transformation
As Melvin E. Conway stated back in 1967, "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure." In short, if a company's workforce is fragmented, the software it makes will be fragmented. If a company's communication structures are hierarchical and centralized, so too will be the software. In this talk, ASG Analyst Bob Reselman will discuss Conway's Law and illustrate how to apply it for a successful digital transformation.
Bob will answer the following questions:
- What are the characteristics of Conway's Law?
- How does Conway's Law manifest itself in the modern enterprise?
- How can a company make Conway's Law work to its advantage?
Cloud Data Platform: A Game Changer for DevOps Speed
Simple and powerful: A Cloud Data Platform gives you unprecedented control of your data layer and makes the orchestration of mission-critical applications feasible with Kubernetes. A Cloud Data Platform sits between your full application stack and your data infrastructure. It decouples your application data from your on-prem hardware and hybrid or public cloud infrastructure. This approach eliminates the risk of creating data silos between multiple private and public clouds while providing a single access point for provisioning data across all stages of the software development lifecycle. The combination of these technologies increases DevOps throughput while lowering your overall costs to develop, maintain and manage applications in a multi-cloud environment.
Achieving Business Agility with Microservices
Some might wonder, “what is the point of a Kubernetes architecture?” Besides the obvious benefits of fault tolerance and auto scaling, business agility is the real reason for moving to a fully cloud native Kubernetes platform. We have talked about agile development practices for years, but we have never achieved agile practices beyond updating code. Everything beyond development, from builds to release, has continued to be monolithic. To achieve business agility, application teams must be able to update small functions of their solutions quickly and safely, without the need to go through a full application lifecycle process. Pushing innovation to end users with minimal impact and time is core to achieving business agility. This session will explore how microservices are truly our last mile of agile practices and essential for the creation of a modern software architecture.
Panel: The Future of Cloud Native
Kube is no longer “the new kid on the block.” Quite the contrary - it has become the dominant form of application infrastructure. Yet some say it is still too hard, hence the many managed Kube services and similar options. Will we see Kube become more operationalized? What about the ecosystem around it? Service mesh? What does the near-term future hold for that? What about cloud native’s effect on the SDLC? The CI/CD world has been turned inside out by cloud native/Kube; what can we expect to see there? DataOps? GitOps? Cloud native security? So many questions, so little time!
Operational Scaling with GitOps
Over the last twenty years, the way in which developers deploy and manage their applications has changed dramatically. Technology improvements in packaging, automation, and virtualization — as well as shifts in operations culture — have profoundly shaped the software deployment landscape. Most recently, a set of best practices for automated delivery of code has emerged in parallel with the growth in popularity of Kubernetes. In this talk, AWS Principal Open Source Engineer Jay Pipes covers how to evolve DevOps to GitOps, the why, the what, and where we are heading. What are the motivators, effective strategies, and deployment challenges?
DevOps, Waffles, and Superheroes - Oh my!
Microservices can be hard; understanding container best practices can be hard as those practices are still being discovered. This session helps you minimize the learning curve with container orchestration, specifically Kubernetes, by bringing DevOps best practices into the mix. This is not another Hello World session with quick tips. Instead, you can expect a deep dive into how you can truly go from zero to DevOps superhero by simply selecting container tooling specifically built for simplifying the process. In doing so, you will also learn how these tools can provide better orchestration for cloud services, abstraction and encapsulation for your microservices deployments and visibility into what runs where and why. You will not only walk away with a deeper understanding of this area, but also some hands-on material to help you get started.
The Devil is in the Deployments
Can you deploy your entire app from scratch with a Helm install? Or do you have cloud infra and hosted services that you rely on, the cloudy bits that make your app cloud native? The Cloud Native Application Bundles (CNAB) spec was designed to solve deployment problems that we have all been quietly battling, mostly with hope and bash. Bundles come in handy when deploying applications that don't live neatly inside of just Kubernetes. Let's learn when bundles make sense, when they don't, and what your day could look like if you were using them:
- Install tools to manage your app: helm, aws/azure/gcloud, terraform.
- Deploy your app along with its infra: cloud storage, DNS entry, load balancer, SSL cert.
- Get software and its dependencies into air gapped networks.
- Manage disparate operational tech, such as Helm or Terraform, across teams and departments.
- Secure your pipeline.
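To make the idea of a bundle concrete, here is a minimal sketch of the descriptor at the heart of the CNAB spec (bundle.json), assuming the CNAB Core 1.0 field names; the application name and the invocation image reference are hypothetical examples.

```python
import json

# A CNAB bundle descriptor declares the app, its version, and the
# "invocation image" that carries the install logic (helm, terraform,
# cloud CLIs) so the whole deployment travels as one artifact.
bundle = {
    "schemaVersion": "v1.0.0",
    "name": "example-app",
    "version": "0.1.0",
    "description": "App plus its cloud infra, packaged as one bundle",
    "invocationImages": [
        {
            "imageType": "docker",
            # Hypothetical registry path for the installer image.
            "image": "registry.example.com/example-app-installer:0.1.0",
        }
    ],
}

print(json.dumps(bundle, indent=2))
```

A CNAB runtime reads this descriptor and runs the invocation image, which is what lets one bundle install the Kubernetes pieces and the "cloudy bits" together, including into air-gapped networks.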
Scaling and Simplicity - Idea to Production in Kubernetes
The adage in computer science is that we are just moving complexity around like an abacus. When looking towards new architectures and especially with cloud-native workloads, we sometimes focus on the scaling aspect. When Harness migrated to Kubernetes recently, we chose to embody both simplicity and scaling as the driving pillars of our journey. With such critical workloads internally and externally, we took to heart several Site Reliability Engineering [SRE] principles when designing our Kubernetes platform and workloads for self-service. Join this session and learn tips and tricks to respect both simplicity and scaling. One approach certainly doesn’t fit all and being pragmatic to the workloads and teams you support is key for success.
- Simplicity has virtue, especially when troubleshooting
- Being pragmatic for your organization and team
- Don’t be insulted if the ecosystem moves quickly and you have to change your approach
The Business Benefits of GitOps
GitOps may very well be the center square of the buzzword bingo card these days, yet in spite of that fervor, there are real business benefits to be gained through its embrace. While leveraging existing skill sets of developers seems a good idea, there are many other elements of GitOps that can yield significant gains in metrics that have been tied to business outcomes - but only if they are done right. In this session, Cornelia Davis, Weaveworks CTO, will cover key GitOps principles and practices and tie them to measurable IT outcomes such as change failure rates and mean time to recovery; these IT metrics are intrinsically tied to digital transformation initiatives. She will also review research that has shown a correlation between these measures and where companies fall on the spectrum from high to low performers in terms of business outcomes.
Advanced Kubernetes Management
Most organizations need multiple Kubernetes clusters to support multiple teams, projects, and different kinds of services. These clusters can be deployed using a variety of tools and techniques, including cloud provider solutions (e.g., EKS, AKS, or GKE), possibly even by different teams, which makes managing cloud native sprawl difficult. In this talk, Chris Gaun will introduce the core concepts for gaining centralized insight and governance over multiple clusters, multiple teams, and multiple infrastructure providers, covering Kubernetes concepts such as namespaces, roles and role bindings, single sign-on, Kubefed (federation), and centralized metrics with Thanos. Attendees should have a solid understanding of Kubernetes concepts prior to attending.
Delivering Kubernetes without Thinking of Kubernetes
Container and application sprawl on Kubernetes creates real challenges when it comes to applying governance, security and control to production environments. Platform engineering and DevOps teams are tasked with solving these challenges while ensuring a good developer experience, making third-party integrations available for the deployed applications and reducing complexity, which can sound like an impossible task. During this session, we will cover how a few organizations are overcoming these challenges and delivering great results.
Advancing the Future of CI/CD Together
Delivering software is increasingly complex due to cloud native environments and tool fragmentation. This talk outlines how the Continuous Delivery Foundation drives open initiatives so we can all work together to accelerate CI/CD adoption in a rapidly changing tech landscape. The Continuous Delivery Foundation was launched in 2019 as the new home of the FOSS projects Jenkins, Jenkins X, Spinnaker and Tekton. The foundation is also a community to advance adoption of CI/CD best practices and tools.
The Continuous Delivery Foundation hosts key CI/CD projects; this talk gives a brief overview of those projects and how we are working toward interoperability between them. We also look at the goals of the CDF and key initiatives such as the CI/CD landscape, security, diversity and MLOps, and share how you can get involved so we can all work together in open source to drive forward the direction of CI/CD and make software delivery better for everyone.
Taking Kubernetes to the Edge: RPi + IaC + K3s = EdgeLab.Digital
Kubernetes automation on-premises and bare metal is hard, but it’s also critical. We wanted an inexpensive, multi-node environment for learning and practicing — where lessons learned and automation created could be applied more generally. And that’s why we brought full infrastructure as code (IaC) controls to Raspberry Pis. In this demo-focused session, we’ll show how EdgeLab.digital uses Digital Rebar to provide PXE boot, an immutable OS and API-driven automation control of RPis to build Kubernetes (K3s) and OpenFaaS. Designed for edge and enterprise operators, this easy-to-duplicate-at-home demo shows attendees how to create a true desktop data center using the community edition of Digital Rebar to enable netboot for RPis and install the CNCF sandbox project K3s. The same code used in this demo can run on any COTS infrastructure and is currently running at multinational banks.
Everything you Wanted to Know about Proxy Architectures: From Simple 2 Tier Ingress to Feature-rich Service Mesh
In this session, we will exhaustively discuss the pros and cons of four proxy architecture choices for your microservices-based app delivery, balancing simplicity and benefits: Unified Ingress, 2 Tier Ingress, Service Mesh Lite and Service Mesh.
We will evaluate these four architectures on seven key criteria: simplicity, security, scalability, observability, integration with open source tools, impact on CI/CD strategy, and skill sets required. Both cloud native novices and experts will benefit from this session. Join us and walk away with your own decision tree to select the right architecture.
How Red Hat OpenShift can Transform your Organization
The Red Hat® OpenShift® Container Platform is the leading enterprise container orchestration platform according to Forrester (Bartoletti and Dai, The Forrester New Wave™: Enterprise Container Platform Software Suites, Q4 2018, 8). In this talk, Mayur Shetty will discuss what makes it the leading enterprise container orchestration platform.
In particular this talk will take a look at:
- Why Red Hat OpenShift is Enterprise Kubernetes
- The various Red Hat OpenShift consumption options available to customers on AWS
- How adopting containers with Red Hat OpenShift on AWS reduces the time spent managing, supporting, and securing environments, opening up resources to focus on building new applications
Cloud Native Security For Kubernetes In Practice
Securing cloud native applications is a multi-objective and multi-constrained problem space spanning individuals, teams, processes, culture, infrastructure and tools. It is safe to assert that with cloud native applications nearly everything falls under security: from identity, through runtime and networking handling data in flight, to storage handling data at rest and everything in between. The MITRE ATT&CK® framework is a knowledge base of known tactics and techniques involved in cyberattacks, originally created for the IT computing environment and recently adapted for Kubernetes. In this talk, we will dive into the MITRE ATT&CK Kubernetes Threat Matrix and review one of the recently published Kubernetes vulnerabilities in its context. Throughout the talk, we will emphasize security practices that Kubernetes-based cloud native application builders and operators can adopt for a secure day 2 with Kubernetes.
Kubernetes Security - Defence in Depth
In this talk, the audience is taken through the layers to consider when securing a Kubernetes stack. This isn't about specific tools, but rather the areas that need attention. The attendee learns that managed or self-managed Kubernetes still have security needs and assuming things are secure by default is not an option! The audience is then introduced to the concept of a Cloud Native Application Protection Platform.
Cloud Native Everywhere and Security to Match
The pressure is on to deliver business applications at lightning speed. This acceleration demands that DevOps processes embrace Kubernetes®, containers, and cloud native applications. When you’re moving fast, application security and compliance can’t remain a single team's responsibility. Security must become a shared responsibility among developers, cloud architects, and security teams. Hear about the security challenges surrounding weak configurations, container and serverless threats in cloud native environments, and strategies that help you build, secure, and ship fast on AWS.
How to Achieve Continuous Container Security in 4 Steps
Containers are shaping the way organizations are developing and managing applications nowadays. However, many are not always fully aware of the measures that need to be taken across the entire software development lifecycle, especially when it comes to open source security aspects. The mindset of securing our applications needs to be shifted – to continuous security. In this session, Jeff Martin, Senior Director of Product at WhiteSource, will discuss:
1) The main security challenges organizations face when using containers
2) The most common layers in a typical container deployment
3) Four simple steps to build security into each layer
Security lessons from the field to harden containerized applications on Kubernetes
Enterprise adoption of microservices deployed as containerized applications on Kubernetes is growing at a rapid pace, alongside simultaneous growth in the adoption of open source software and packages. Consequently, security and DevOps teams are tasked with tackling attack vectors from three major threat dimensions. In this talk, we describe these threat dimensions along with battle-tested techniques from the field that are used to secure and protect these applications. Specifically, we take a deep dive into techniques used to inject security natively throughout the application lifecycle: from build, to deploy, to run.
Top 10 Considerations for Selecting Data Protection for Your Kubernetes Applications
Are you a DevOps Engineer, Application Developer or IT Admin who needs backup and recovery, migration, DR or application mobility for Kubernetes or OpenShift-based applications? The extraordinary performance, scale and mobility challenges in these dynamic container environments demand a purpose-built platform that can support any public, private, and/or multi-cloud deployment. This session covers the 10 essential things you need (and why) when choosing the right data protection solution for your cloud-native environment.
Securing containers across EKS, ECS and Fargate environments
Your DevOps teams need to embed security as they ramp containers and Kubernetes in production. As cloud providers release new services constantly, you not only need visibility inside containers, but also the cloud infrastructure, applications and services used by your teams. With a secure DevOps workflow, your team can spend more time developing apps and less time reacting to issues. Running secure containers requires that security and DevOps work better together. Join us to understand how to:
- Automate scanning, including for Fargate workloads, within CI/CD pipelines (Jenkins, GitLab) and registries (ECR, GCR)
- Detect runtime threats with open source tools like Falco and continuously monitor your cloud using AWS CloudTrail
- Conduct incident response and forensics, even after the container is gone
- Continuously validate compliance against PCI, NIST, CIS, etc.
Security and Access for Kubernetes
The promise of elastic scale and cloud native has driven the demand for K8s, but the developer now has the harder task of building applications in a secure manner. This talk will focus on best practices and potential pitfalls for securing K8s for the engineering team by using the K8s API server and control plane. This will be a how-to for implementing robust Role-Based Access Control (RBAC) tied into the corporate SSO/identity provider using GitHub Teams and open source software.
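As an illustration of the pattern this session describes, here is a minimal sketch of the two RBAC objects involved: a namespaced Role granting read-only access to pods, bound to an SSO-backed group. The group name format depends entirely on how your identity provider maps teams to Kubernetes groups; "github:acme:platform-team" below is a hypothetical example, as are the namespace and names. Manifests are emitted as JSON, which the Kubernetes API accepts alongside YAML.

```python
import json

# A Role scopes permissions to one namespace; the empty apiGroup ""
# refers to the core API group, where pods live.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"],
         "verbs": ["get", "list", "watch"]}
    ],
}

# The RoleBinding grants that Role to a Group subject, whose name must
# match the group claim your SSO/OIDC provider presents to the API server.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
    "subjects": [
        {"kind": "Group", "name": "github:acme:platform-team",
         "apiGroup": "rbac.authorization.k8s.io"}
    ],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}

print(json.dumps([role, binding], indent=2))
```

Because the binding references a group rather than individual users, membership changes in the identity provider (e.g., a GitHub Team) take effect without touching cluster config.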
Beyond metrics: Leveraging Tracing in K8s and Containerized Environments
Microservices systems running on Kubernetes and containerized environments are complex and hard to monitor and troubleshoot. Join us as we discuss the growth in adoption of K8s and containers and the challenges that they have presented us all, focusing on why standard metrics by themselves are leaving gaps in your observability strategy.
Observability for Kubernetes: Simplifying Complex Environments
With the rapid adoption of Kubernetes, engineers and operators need better visibility to understand and explore the performance of their cloud-native applications and infrastructure. DevOps teams also need observability to efficiently build and run modern software - especially when applications are running inside Kubernetes clusters. This session will demonstrate how easy it is to get multidimensional views into Kubernetes clusters with New Relic. You’ll learn how to set up and use the New Relic Kubernetes cluster explorer to drill down into Kubernetes data and metadata in a high-fidelity, curated UI that simplifies complex environments.
Kubernetes in the Midst of Unprecedented Change
In this talk, Al Sargent of InfluxData will talk about his company’s journey of operating Kubernetes across AWS, Azure, and Google with a small team of SREs in the midst of unprecedented change. He’ll also talk about how the team relies heavily on instrumentation, easily readable dashboards, and InfluxDB to synthesize and take action on diverse data sets.
In this webinar you will learn:
- The challenges a pandemic brings to existing remote-work organizations in general, and SREs in particular
- How to use technology to monitor production environments in general, and Kubernetes in particular, and alert on critical issues in a composable and easily accessible manner
- How to try out InfluxDB in your production environment
Whose Fault Is It When Kubernetes Breaks? How to Build Trust and Resolve Incidents Faster with Distributed Tracing
So, you've gone "cloud native". You're running apps in containers, you're scheduling them with Kubernetes, and now you're trying to create a better experience for your team and for your customers. But when things break — and they often do — it can be challenging to understand how to resolve an incident quickly, or even which service owner is responsible. Distributed tracing brings the code execution to the forefront, and gives a new view focused on service performance. In this presentation, we discuss:
- Why traditional logs and metrics can't answer the most important questions about K8s reliability
- How distributed tracing brings a service-centric view to the forefront of your monitoring teams
- How to instantly understand changes to services, pods, and containers
- How to share responsibility for incident response, and quickly engage the right team for resolution
- What complete system visibility actually means
- How you can take advantage of 'shipping your org chart'
Infrastructure as Code with Kubernetes Operators
Kubernetes developers and operators work together to manage workloads and continuously ship software and infrastructure as code through CI/CD. These users have an affinity for automation and pipelines, and richer integration with Kubernetes is a growing theme across the cloud native ecosystem. Using the Kubernetes Operator pattern in combination with infrastructure as code tools provides a native GitOps experience for managing and delivering infrastructure on any cloud or Kubernetes cluster. In this session, you’ll learn how to leverage modern programming languages to tame the complexity of deploying and managing infrastructure, clusters and workloads. We’ll build stacks of cloud resources, showcase how to drive toward a desired state in our pipelines, and show how to integrate with Kubernetes for continuous delivery.
Declarative Network Security for Kubernetes with Calico
In this session, we will go over the core concepts in K8s network policies and Calico network policies, compare and contrast the two models, and highlight when to use one versus the other.
Takeaways: Kubernetes Network Policy enables applications to declare segmentation controls that restrict access to authorized workloads. Calico Network Policy is a sophisticated superset of Kubernetes network policy that includes a number of advanced features facilitating real-world use cases. Calico provides a scalable implementation of network policy that is proven in very large-scale production deployments, yet simple to use and operate.
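To make the segmentation controls described above concrete, here is a minimal sketch of a standard Kubernetes NetworkPolicy: only pods labeled role=frontend may reach pods labeled app=api, and only on TCP port 8080. The labels, names, namespace and port are hypothetical; the manifest is emitted as JSON, which the Kubernetes API accepts alongside YAML.

```python
import json

# podSelector picks the pods this policy protects; listing "Ingress" in
# policyTypes means all inbound traffic to them is denied except what the
# ingress rules below explicitly allow.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "api-allow-frontend", "namespace": "demo"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "api"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                # Allow traffic only from frontend pods, only on 8080/TCP.
                "from": [{"podSelector": {"matchLabels": {"role": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }
        ],
    },
}

print(json.dumps(policy, indent=2))
```

A Calico policy for the same intent would add capabilities this core resource lacks, such as explicit deny rules, ordering, and global (cluster-wide) scope.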
Play SuperTuxKart on Kubernetes!
Running uninterruptible gaming workloads that need dedicated port allocations might seem like an odd choice for Kubernetes, but it doesn't have to be difficult. Agones, an open-source project, makes this possible for operators and developers alike. In this talk, I will demonstrate how to deploy a fleet of gameservers to multiple Kubernetes clusters running Agones using Google Cloud Game Servers (GCGS). I will deploy a server running SuperTuxKart across several clusters and allocate one of those servers for my client to use.
You will learn:
- How to install Agones in a Kubernetes cluster
- How Agones manages the lifecycle of gameservers
- How to use GCGS to deploy a fleet of gameservers across multiple clusters
- How to use Agones to allocate gameservers for dedicated game sessions
WHY SHOULD I ATTEND
Join us on Thursday, October 1 at 10 a.m. EDT to discover how business executives, practitioners, engineers and others are embracing containers and microservices environments. Kubernetes has crossed the chasm, moving from bleeding-edge status to a critical enabler for managing this new ecosystem. Don’t miss this opportunity to get insights and knowledge from those leading the Kubernetes and cloud native communities.
Thank you for your interest in the Operationalizing Cloud Native with Kubernetes Virtual Summit. Register Now!