# Launching OVN-Kubernetes using Helm Charts

## Introduction
This Helm chart supports deploying the OVN K8s CNI in a K8s cluster.
Open Virtual Networking (OVN) Kubernetes CNI is an open source networking and network security solution for Kubernetes workloads. It leverages a distributed OVN SDN control plane and per-node Open vSwitch (OVS) to provide network virtualization and network connectivity to K8s Pods. It does so by creating a logical network topology using logical constructs such as logical switches (Layer 2) and logical routers (Layer 3). The Pod interfaces are represented by logical ports on the logical switches. On these logical switch ports, one can specify IP network information (IP address and MAC address), anti-spoofing rules (MAC and IP), Security Groups, QoS configuration, and so on.
A port assigned to a Pod, whether a physical SR-IOV VF or a virtual veth, is associated with a corresponding logical port; all of the logical port configuration is then applied to that physical port. The logical port thus becomes the API for configuring the physical port.
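As a rough illustration of this mapping, the logical topology can be inspected from the OVN northbound database once the CNI is running; the pod, container, and switch names below are placeholders that depend on your deployment.

```
# Illustrative only: pod/container/switch names vary between deployments.
# Each Pod shows up as a logical switch port whose "addresses" field
# carries the Pod's MAC and IP address.
kubectl -n ovn-kubernetes exec <ovnkube-db-pod> -c nb-ovsdb -- ovn-nbctl show
kubectl -n ovn-kubernetes exec <ovnkube-db-pod> -c nb-ovsdb -- ovn-nbctl lsp-list <node-switch>
```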
In addition to providing overlay network connectivity for Pods in the K8s cluster, OVN K8s CNI supports a plethora of advanced networking features, such as:

- Optimized and Accelerated K8s Network Policy on Pod traffic
- Optimized and Accelerated K8s Service implementation (aka load balancers and NAT)
- Optimized and Accelerated Policy Based Routing
- Multi-homed Pods, with an option for secondary networks to be on a Layer-2 overlay (flat network) or a Layer-2 underlay (VLAN-based) on private or public subnets (see the example manifest after this list)
- Optimized and Accelerated K8s Network Policy on Pod secondary networks
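For example, with `global.enableMultiNetwork` enabled, a secondary Layer 2 overlay network could be attached to Pods through a NetworkAttachmentDefinition along the lines of the sketch below; the names and subnet are illustrative, and the authoritative field set is described in the ovn-kubernetes multi-homing documentation.

```yaml
# Illustrative sketch only; names and the subnet are examples.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.0.0/24",
      "netAttachDefName": "default/l2-network"
    }
```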
Most of these services are distributed and implemented via a pipeline (series of OpenFlow tables with OpenFlow flows) on local OVS switches. These OVS pipelines are very amenable to offloading to NIC hardware, which should result in the best possible networking performance and CPU savings on the host.
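Because the pipeline is realized as OpenFlow tables on each node's integration bridge (`br-int`), it can be inspected directly on a node; the pod name below is a placeholder for whichever OVS pod runs on the node of interest.

```
# Illustrative only: dump the OpenFlow pipeline that OVN programs on br-int.
kubectl -n ovn-kubernetes exec <ovs-node-pod> -- ovs-ofctl dump-flows br-int | head -n 20
```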
The OVN K8s CNI architecture is layered, with OVS at the bottom, OVN above it, and OVN K8s CNI at the top. Each layer has several K8s components (deployments, daemonsets, and statefulsets), and each component at every layer is a subchart in its own right. Based on the deployment needs, all or some of these subcharts are installed to provide the aforementioned OVN K8s CNI features; this is controlled by editing the `tags` section in the `values.yaml` file, as shown in the excerpt below.
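For example, the chart's default `tags` section (also listed in the Values table below) keeps the optional subcharts disabled; setting an entry to `true` pulls the corresponding subchart into the release:

```yaml
# Excerpt of the tags section in values.yaml; defaults shown.
tags:
  ovn-ipsec: false
  ovnkube-control-plane: false
  ovnkube-db-raft: false
  ovnkube-node-dpu: false
  ovnkube-node-dpu-host: false
  ovnkube-single-node-zone: false
  ovnkube-zone-controller: false
```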
## Quickstart

Run the script `helm/basic-deploy.sh` to set up a basic OVN/Kubernetes cluster.
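Assuming you run it from the root of the ovn-kubernetes repository, the invocation is simply:

```
# ./helm/basic-deploy.sh
```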
## Manual steps

- Disable IPv6 of the `kind` docker network, otherwise ovnkube-node will fail to start:

  ```
  # docker network rm kind      (delete the `kind` network if it already exists)
  # docker network create kind -o "com.docker.network.bridge.enable_ip_masquerade"="true" -o "com.docker.network.driver.mtu"="1500"
  ```
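  To double-check the recreated network, `docker network inspect` can confirm that IPv6 is off (the command should print `false`):

  ```
  # docker network inspect kind --format '{{.EnableIPv6}}'
  ```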
- Launch a Kind cluster without CNI and kube-proxy (additional control-plane or worker nodes can be added):

  ```yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
  - role: worker
  networking:
    disableDefaultCNI: true
    kubeProxyMode: none
  ```
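  Save the configuration to a file (the name `kind-ovn.yaml` below is just an example) and create the cluster from it:

  ```
  # kind create cluster --config kind-ovn.yaml
  ```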
- Optional: build a local image and load it into the Kind nodes:

  ```
  # cd dist/images
  # make ubuntu
  # docker tag ovn-kube-ubuntu:latest ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-ubuntu:master
  # kind load docker-image ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-ubuntu:master
  ```
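  If you want to verify that the image was loaded, you can list the images inside a Kind node (the node name below assumes the default cluster name `kind`):

  ```
  # docker exec kind-worker crictl images | grep ovn-kube-ubuntu
  ```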
- Run `helm install` with the proper `k8sAPIServer`, `ovnkube-identity.replicas`, image repo and tag:

  ```
  # cd helm/ovn-kubernetes
  # helm install ovn-kubernetes . -f values.yaml \
      --set k8sAPIServer="https://$(kubectl get pods -n kube-system -l component=kube-apiserver -o jsonpath='{.items[0].status.hostIP}'):6443" \
      --set ovnkube-identity.replicas=$(kubectl get node -l node-role.kubernetes.io/control-plane --no-headers | wc -l) \
      --set global.image.repository=ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-ubuntu \
      --set global.image.tag=master
  ```
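  Once the release is installed, the OVN-Kubernetes pods should come up and the Kind nodes should report `Ready` when the CNI is functional. The `ovn-kubernetes` namespace below is an assumption; use whichever namespace you installed the release into:

  ```
  # kubectl get pods -n ovn-kubernetes
  # kubectl get nodes
  ```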
## Notes

- Only the following scenarios were tested with a Kind cluster:
  - ovs-node + ovnkube-node + ovnkube-db + ovnkube-master, with/without ovnkube-identity
  - ovs-node + ovnkube-node + ovnkube-db-raft + ovnkube-master, with/without ovnkube-identity

The following section describes the meaning of the values.
## Values

| Key | Type | Default | Description |
|---|---|---|---|
| global.aclLoggingRateLimit | int | `20` | The largest number of messages per second that get logged before drop (default: 20) |
| global.disableForwarding | string | `false` | Controls if forwarding is allowed on OVNK controlled interfaces |
| global.disableIfaceIdVer | bool | `false` | Deprecated: iface-id-ver is always enabled |
| global.disablePacketMtuCheck | string | `""` | Disables adding OpenFlow flows that check for packets too large to be delivered to OVN due to the pod MTU being lower than the NIC MTU |
| global.disableSnatMultipleGws | string | `""` | Whether to disable SNAT of egress traffic in namespaces annotated with routing-external-gws |
| global.egressIpHealthCheckPort | string | `""` | Configure EgressIP node reachability using gRPC on this TCP port |
| global.emptyLbEvents | string | `""` | If set, load balancers do not get deleted when all backends are removed |
| global.enableAdminNetworkPolicy | string | `""` | Whether or not to use the Admin Network Policy CRD feature with ovn-kubernetes |
| global.enableCompactMode | bool | `false` | Indicates whether ovnkube runs master and node in one process |
| global.enableConfigDuration | string | `""` | Enables monitoring of OVN-Kubernetes master and OVN configuration duration |
| global.enableEgressFirewall | string | `""` | Configure to use the EgressFirewall CRD feature with ovn-kubernetes |
| global.enableEgressIp | string | `""` | Configure to use the EgressIP CRD feature with ovn-kubernetes |
| global.enableEgressQos | string | `""` | Configure to use the EgressQoS CRD feature with ovn-kubernetes |
| global.enableEgressService | string | `""` | Configure to use the EgressService CRD feature with ovn-kubernetes |
| global.enableHybridOverlay | string | `""` | Whether or not to enable hybrid overlay functionality |
| global.enableInterconnect | bool | `false` | Configure to enable interconnecting multiple zones |
| global.enableIpsec | bool | `false` | Configure to enable IPsec |
| global.enableLFlowCache | bool | `true` | Indicates if ovn-controller should enable/disable the logical flow in-memory cache when processing Southbound database logical flow changes |
| global.enableMetricsScale | string | `""` | Enables metrics related to scaling |
| global.enableMultiExternalGateway | bool | `false` | Configure to use the AdminPolicyBasedExternalRoute CRD feature with ovn-kubernetes |
| global.enableMultiNetwork | bool | `false` | Configure to use the multiple NetworkAttachmentDefinition CRD feature with ovn-kubernetes |
| global.enableMulticast | string | `""` | Enables multicast support between the pods within the same namespace |
| global.enableOvnKubeIdentity | bool | `true` | Whether or not to enable the ovnkube identity webhook |
| global.enableSsl | bool | `false` | Use SSL transport to NB/SB db and northd |
| global.enableStatelessNetworkPolicy | bool | `false` | Configure to use the stateless network policy feature with ovn-kubernetes |
| global.enableSvcTemplate | bool | `true` | Configure to use the service template feature with ovn-kubernetes |
| global.encapPort | int | `6081` | GENEVE UDP port (default: 6081) |
| global.extGatewayNetworkInterface | string | `""` | The interface on nodes that will be used for external gateway network traffic |
| global.gatewayMode | string | `"shared"` | The gateway mode (shared or local); if not given, gateway functionality is disabled |
| global.gatewayOpts | string | `""` | Optional extra gateway options |
| global.hybridOverlayNetCidr | string | `""` | A comma-separated set of IP subnets and the associated hostsubnetlengths (e.g., "10.128.0.0/14/23,10.0.0.0/14/23") to use with the extended hybrid network |
| global.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
| global.image.repository | string | `"ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-ubuntu"` | Image repository for ovn-kubernetes components |
| global.image.tag | string | `"master"` | Specify the image tag to run |
| global.ipfixCacheActiveTimeout | string | `""` | Maximum period in seconds for which an IPFIX flow record is cached and aggregated before being sent (default: 60) |
| global.ipfixCacheMaxFlows | string | `""` | Maximum number of IPFIX flow records that can be cached at a time (default: 0, meaning disabled) |
| global.ipfixSampling | string | `""` | Rate at which packets should be sampled and sent to each target collector (default: 400) |
| global.ipfixTargets | string | `""` | A comma-separated set of IPFIX collectors to export flow data to |
| global.lFlowCacheLimit | string | `unlimited` | Maximum number of logical flow cache entries ovn-controller may create when the logical flow cache is enabled |
| global.lFlowCacheLimitKb | string | `""` | Maximum size of the logical flow cache (in KB) ovn-controller may create when the logical flow cache is enabled |
| global.libovsdbClientLogFile | string | `""` | Separate log file for the libovsdb client |
| global.monitorAll | string | `""` | Enable monitoring all data from the SB DB instead of conditionally monitoring only the data relevant to this node (default: true) |
| global.nbPort | int | `6641` | Port of the northbound ovsdb |
| global.netFlowTargets | string | `""` | A comma-separated set of NetFlow collectors to export flow data to |
| global.nodeMgmtPortNetdev | string | `""` | The net device to be used for the management port; it will be renamed to ovn-k8s-mp0 and used to allow host network services and pods to access the k8s pod and service networks |
| global.ofctrlWaitBeforeClear | string | `""` | ovn-controller wait time in ms before clearing OpenFlow rules during startup (default: 0) |
| global.remoteProbeInterval | int | `100000` | OVN remote probe interval in ms (default: 100000) |
| global.sbPort | int | `6642` | Port of the southbound ovsdb |
| global.sflowTargets | string | `""` | A comma-separated set of sFlow collectors to export flow data to |
| global.unprivilegedMode | bool | `false` | Allows ovnkube-node to run without the SYS_ADMIN capability by performing interface setup in the CNI plugin |
| global.v4JoinSubnet | string | `""` | The v4 join subnet used for assigning join switch IPv4 addresses |
| global.v4MasqueradeSubnet | string | `""` | The v4 masquerade subnet used for assigning masquerade IPv4 addresses |
| global.v6JoinSubnet | string | `""` | The v6 join subnet used for assigning join switch IPv6 addresses |
| global.v6MasqueradeSubnet | string | `""` | The v6 masquerade subnet used for assigning masquerade IPv6 addresses |
| k8sAPIServer | string | `"https://172.25.0.2:6443"` | Endpoint of the Kubernetes API server |
| mtu | int | `1400` | MTU of the network interface in a Kubernetes pod |
| ovnkube-identity.replicas | int | `1` | Number of ovnkube-identity pods; they are co-located with the kube-apiserver process, so this should equal the number of control plane nodes |
| podNetwork | string | `"10.128.0.0/14/23"` | IP range for Kubernetes pods; /14 is the top-level range, under which each /23 range is assigned to a node |
| serviceNetwork | string | `"172.30.0.0/16"` | A comma-separated set of CIDR notation IP ranges from which k8s assigns service cluster IPs; this should be the same as the value provided for the kube-apiserver "--service-cluster-ip-range" option |
| skipCallToK8s | bool | `false` | Whether or not to call the `lookup` Helm function; set it to `true` if you want to run `helm dry-run/template/lint` |
| tags | object | `{"ovn-ipsec": false, "ovnkube-control-plane": false, "ovnkube-db-raft": false, "ovnkube-node-dpu": false, "ovnkube-node-dpu-host": false, "ovnkube-single-node-zone": false, "ovnkube-zone-controller": false}` | List of dependent subcharts that need to be installed for the given deployment mode; these subcharts haven't been tested yet |
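As a usage note for `skipCallToK8s` above, rendering the chart without a reachable cluster (for example with `helm template` or `helm lint`) can be done roughly as follows, run from `helm/ovn-kubernetes`:

```
# helm template ovn-kubernetes . -f values.yaml --set skipCallToK8s=true
```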