November 19, 2019

Announcing Hubble - Network, Service & Security Observability for Kubernetes

Hubble is a fully distributed networking and security observability platform for cloud native workloads. Hubble is open source software built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services, as well as the networking infrastructure, in a completely transparent manner.

Hubble Architecture

August 22, 2019

eBPF at Linux Plumbers 2019, Lisbon, Portugal


The Linux Plumbers Conference 2019 is coming up September 9-11 in Lisbon, Portugal. There are several tracks featuring eBPF-related topics:

August 20, 2019

Cilium 1.6: KVstore-free operation, 100% kube-proxy replacement, Socket-based load-balancing, Generic CNI Chaining, Native AWS ENI support, ...

Introduction graph

We are excited to announce the Cilium 1.6 release. A total of 1408 commits have been contributed by the community with many developers contributing for the first time. Cilium 1.6 introduces several exciting new features:

  • KVStore free operation: The addition of a new CRD-based backend for security identities makes it possible to operate Cilium entirely without a KVstore in the context of Kubernetes. (More details)
  • 100% Kube-proxy replacement: Operating a Kubernetes cluster without running kube-proxy has been a long-standing wish of many users. This release includes the final two features required to run a Kubernetes cluster with Cilium fully replacing kube-proxy. (More details)
  • Socket-based load-balancing: Socket-based load-balancing combines the advantages of client-side and network-based load-balancing: Kubernetes services are load-balanced fully transparently, with the translation from service IP to endpoint IP performed once at connection establishment instead of on every network packet for the lifetime of the connection. (More details)
  • Policy scalability improvements: The entire policy system has been improved to decouple the handling of policy and identity definitions and to move to a fully incremental model. This ensures that environments with high pod-scheduling churn, e.g. several hundred thousand pods across multiple clusters, cope well with constant policy definition changes. (More details)
  • Generic CNI chaining: The 1.6 release introduces a new CNI chaining framework that allows running Cilium on top of most other CNI plugins, such as Weave, Calico, Flannel, the AWS VPC CNI, or the Lyft CNI plugin. This enables advanced features such as eBPF-based security policy enforcement, visibility, multi-cluster, encryption, and load-balancing while continuing to run whichever CNI plugin is already in use. (More details)
  • Native AWS ENI mode: A new datapath and IPAM mode combines the efficiency of native AWS ENI routing with Cilium policy enforcement, encryption, and multi-cluster. A new operator-based design works around many problems that per-node agents are known to cause for large-scale AWS ENI users. (More details)
  • ... and much more: For the full list of changes, see the 1.6 Release Notes.
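The socket-based load-balancing idea above can be illustrated with a small Go sketch. This is a simplified model, not Cilium's actual eBPF implementation, and the service table and addresses are hypothetical: the point is that the service-to-backend translation happens exactly once, at connection setup, so no per-packet address rewriting is needed afterwards.

```go
package main

import "fmt"

// Hypothetical service table mapping a Kubernetes service VIP:port
// to its backend pod endpoints.
var backends = map[string][]string{
	"10.96.0.10:80": {"192.168.1.5:80", "192.168.2.7:80"},
}

// connect models socket-based load-balancing: the service address is
// translated to a concrete backend once, at connection establishment.
// Every subsequent packet on the connection already carries the backend
// address, so no per-packet NAT is required.
func connect(serviceAddr string) (backendAddr string, ok bool) {
	pool, ok := backends[serviceAddr]
	if !ok || len(pool) == 0 {
		return "", false
	}
	// A real implementation picks a backend via round-robin or random
	// selection; this sketch simply takes the first entry.
	return pool[0], true
}

func main() {
	addr, ok := connect("10.96.0.10:80")
	fmt.Println(addr, ok)
}
```

Contrast this with network-based load-balancing, where the same VIP-to-endpoint translation would have to be applied to every single packet for the lifetime of the connection.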

July 1, 2019

CVE-2019-13119: Policy bypass via nested encapsulation

On May 25, 2019, a security-relevant bug was reported to us via the documented security disclosure channel. Thanks to l14n for the excellent bug report! It was soon identified that multiple vendors are affected by this vulnerability, which led to an embargo period that is being lifted today.

Under certain circumstances, the bug allows network security policies to be bypassed. See below for details on the vulnerability and its mitigation.

Who is affected?: Users operating Cilium in encapsulation mode (VXLAN or Geneve) while hosting untrusted workloads with an egress policy that allows pods to emit UDP encapsulation traffic to other worker nodes.

The vulnerability is being tracked by CVE-2019-13119.

We are releasing Cilium 1.5.4, 1.4.5, and 1.3.7 to fix the security vulnerability.
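To make the affected traffic pattern concrete, the following Go sketch (a hypothetical helper, not part of Cilium or the actual fix) flags egress UDP flows destined to the standard VXLAN and Geneve ports. This is the kind of pod-emitted encapsulation traffic that an egress policy must permit for a cluster to be susceptible.

```go
package main

import "fmt"

// Standard UDP destination ports for the tunnel protocols named in the
// advisory: 8472 is the Linux kernel's VXLAN default, 6081 is Geneve.
var encapPorts = map[uint16]string{
	8472: "vxlan",
	6081: "geneve",
}

// isEncapEgress reports whether an egress flow targets a known UDP
// encapsulation port, i.e. whether a pod is emitting tunnel traffic
// toward another node.
func isEncapEgress(proto string, dstPort uint16) (protocol string, ok bool) {
	if proto != "udp" {
		return "", false
	}
	name, ok := encapPorts[dstPort]
	return name, ok
}

func main() {
	name, ok := isEncapEgress("udp", 8472)
	fmt.Println(name, ok)
}
```

Auditing egress policies for rules that allow pods to reach these ports on other worker nodes is one way to check whether a cluster matches the affected configuration described above.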

June 24, 2019

License change and lack of attribution of Cilium eBPF code in Calico project

As with everything we do, we are fully transparent. As it has become obvious that a simple resolution to this matter is not possible, we are following open source best practices and choosing a public forum for the sake of transparency.

It was brought to our attention that some of the new eBPF code committed to the Calico repository is violating the license of source code in the Cilium repository.

The original report called out suspiciously similar code in both repositories. This by itself is of course not a problem if the open source licenses involved are respected. This includes, among other things, attribution and restrictions regarding the rights to re-license.

Upon closer inspection, it was identified that source code has been copied from the Cilium repository, modified to create derivative work, and then committed (commit) to the Calico repository with the license changed in a non-compatible manner. As part of this, the attribution required by the license was also omitted. The details of this can be found further down in this post.

Like the majority of the Linux kernel source code, the datapath portion of Cilium that runs as part of the Linux kernel is released under the GPL 2.0 license. The GPL license does not permit a license change to the Apache License without consent of the original authors.

This prompted us to contact the authors of the derivative work. As a result, an initial attempt was made to rewrite some sections of the code. After inspection, we concluded that the work is still a derivative of our original source code.

However, in order to resolve the situation as simply as possible, we offered to dual-license the respective code under the Apache license with the condition that attribution to the original authors is added. This resulted in the following pull request being proposed to the Calico repository to add the attribution. The pull request is currently waiting to be merged.

From our perspective, this would resolve all of our concerns. We obviously also accept any other resolution as long as it conforms to the respective open source licenses.

We are waiting for a reaction by the maintainers of the Calico project.

Update 2019-06-25: Some of the eBPF related code has now been removed from the Calico repository via this PR.

May 3, 2019

Cilium User Survey March 2019 - The Results

Back in March, we asked our users to provide feedback via our first-ever user survey. Many of you responded, and the results are in!

next features

The survey was announced on our Slack channel and on Twitter. Participation was anonymous and did not require leaving any contact information. Most questions had a set of predefined answers plus a free-form field for additional answers. All questions were optional.

April 29, 2019

Cilium 1.5: Scaling to 5k nodes and 100k pods, BPF-based SNAT, and Rolling Key Updates for Transparent Encryption

We are excited to announce the Cilium 1.5 release. Cilium 1.5 is the first release where we primarily focused on scalability with respect to the number of nodes, pods, and services. Our goal was to scale to 5k nodes, 20k pods, and 10k services. We went well past that goal with the 1.5 release and now officially support 5k nodes, 100k pods, and 20k services. Along the way we learned a lot, some of it expected, some of it not; this blog post dives into what we learned and how we improved.


Besides scalability, several significant features made their way into the release, including: BPF templating, rolling updates for transparent encryption keys, transparent encryption for direct routing, a new and improved BPF-based service load-balancer with better fairness, BPF-based masquerading/SNAT support, Istio 1.1.3 integration, policy-calculation optimizations, as well as several new Prometheus metrics to assist in operations and monitoring. For the full list of changes, see the 1.5 Release Notes.

March 18, 2019

Deep Dive into Cilium Multi-cluster

This is a deep dive into ClusterMesh, Cilium's multi-cluster implementation. In a nutshell, ClusterMesh provides:

  • Pod IP routing across multiple Kubernetes clusters at native performance via tunneling or direct-routing without requiring any gateways or proxies.

  • Transparent service discovery with standard Kubernetes services and coredns/kube-dns.

  • Network policy enforcement spanning multiple clusters. Policies can be specified as a standard Kubernetes NetworkPolicy resource or via the extended CiliumNetworkPolicy CRD.

  • Transparent encryption for all communication between nodes in the local cluster as well as across cluster boundaries.
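The transparent service discovery point can be sketched in Go. This is a simplified model under assumed data, not the actual ClusterMesh implementation: the cluster names, service name, and pod IPs below are hypothetical. The idea it illustrates is that the backends of a service are merged across all connected clusters, so a standard service lookup transparently returns endpoints from every cluster.

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical per-cluster endpoint tables for the same named service,
// as each cluster's control plane would see them.
var clusterEndpoints = map[string]map[string][]string{
	"cluster-1": {"default/frontend": {"10.0.1.5"}},
	"cluster-2": {"default/frontend": {"10.0.2.9", "10.0.2.11"}},
}

// globalBackends merges a service's backends across all clusters, which
// is what makes cross-cluster service discovery transparent: clients
// keep resolving the same service name and simply see more endpoints.
func globalBackends(service string) []string {
	var all []string
	for _, services := range clusterEndpoints {
		all = append(all, services[service]...)
	}
	// Sort for a deterministic result (Go map iteration order varies).
	sort.Strings(all)
	return all
}

func main() {
	fmt.Println(globalBackends("default/frontend"))
}
```

Because pod IPs are routable across clusters (the first bullet above), the merged endpoint list is directly usable: no gateway or proxy has to sit between a client in one cluster and a backend in another.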