3.4. Configuring the log visualizer
OpenShift Container Platform uses Kibana to display the log data collected by cluster logging.
You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.
3.4.1. Configuring CPU and memory limits
The cluster logging components allow for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance -n openshift-logging
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 2
      resources: 1
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage:
        storageClassName: "gp2"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      resources: 2
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      proxy:
        resources: 3
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
      replicas: 2
  curation:
    type: "curator"
    curator:
      resources: 4
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources: 5
          limits:
            memory: 736Mi
          requests:
            cpu: 200m
            memory: 736Mi
1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
2 3 Specify the CPU and memory limits and requests for the log visualizer as needed.
4 Specify the CPU and memory limits and requests for the log curator as needed.
5 Specify the CPU and memory limits and requests for the log collector as needed.
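After you save the CR, you can verify that the new requests and limits were applied to the Kibana pod. The commands below are a minimal sketch; the component=kibana label selector is an assumption about the default labels on the visualizer pod and might differ in your deployment.

# component=kibana is assumed to be the default label on the Kibana pod
$ oc get pods -l component=kibana -n openshift-logging
$ oc describe pods -l component=kibana -n openshift-logging | grep -A 5 Limits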
3.4.2. Scaling redundancy for the log visualizer nodes
You can scale the pod that hosts the log visualizer for redundancy.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  visualization:
    type: "kibana"
    kibana:
      replicas: 1 1
1 Specify the number of Kibana nodes.
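To confirm that the change took effect, you can check the number of running Kibana pods. This is a minimal sketch; it assumes the default deployment name kibana and the default component=kibana pod label in the openshift-logging project.

# "kibana" is assumed to be the default deployment name for the visualizer
$ oc get deployment kibana -n openshift-logging
$ oc get pods -l component=kibana -n openshift-logging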
3.4.3. Using tolerations to control the log visualizer pod placement
You can control the node where the log visualizer pod runs and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures that only the Kibana pod can run on that node.
Prerequisites
- Cluster logging and Elasticsearch must be installed.
Procedure
Use the following command to add a taint to a node where you want to schedule the log visualizer pod:
$ oc adm taint nodes <node-name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 kibana=node:NoExecute
This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod:

visualization:
  type: "kibana"
  kibana:
    tolerations:
    - key: "kibana" 1
      operator: "Exists" 2
      effect: "NoExecute" 3
      tolerationSeconds: 6000 4

1 Specify the key that you added to the node.
2 Specify the Exists operator so that the toleration matches a taint with the kibana key, regardless of its value.
3 Specify the NoExecute effect, matching the taint on the node.
4 Optionally, specify the tolerationSeconds parameter to set how long the pod can remain bound to the node after the taint is added before it is evicted.
This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1.
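To verify the placement, you can check the taint on the node and where the Kibana pod is scheduled. This is a minimal sketch; the component=kibana label selector is an assumption about the default labels on the visualizer pod.

$ oc describe node node1 | grep Taints
# component=kibana is assumed to be the default label on the Kibana pod
$ oc get pods -l component=kibana -n openshift-logging -o wide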