2 min read | by Jordi Prats
When running an OpenShift cluster we'll find that it exposes a web-based console that not only allows you to deploy applications but also to manage the cluster. However, since it is an additional way to access the cluster, we might have some concerns about it, especially from the security perspective: the console is a potential attack vector for gaining unauthorized access to the cluster. Let's see how to disable it.
We can find the console deployed, by default, in the openshift-console namespace:
$ kubectl get pods -n openshift-console
NAME                        READY   STATUS    RESTARTS   AGE
console-7c7f7979c7-vbgq8    1/1     Running   0          1d
console-7c7f7979c7-jprxx    1/1     Running   0          1d
downloads-54f4dcfcd-9dpb5   1/1     Running   0          2d
downloads-54f4dcfcd-b5nnm   1/1     Running   0          2d
$ kubectl get route -n openshift-console
NAME        HOST/PORT                                                               PATH   SERVICES    PORT    TERMINATION          WILDCARD
console     console-openshift-console.apps.test-rosa.abcd.p1.openshiftapps.com            console     https   reencrypt/Redirect   None
downloads   downloads-openshift-console.apps.test-rosa.abcd.p1.openshiftapps.com          downloads   http    edge/Redirect        None
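Before changing anything, it's worth knowing that OpenShift actually defines two cluster-scoped Console custom resources, both named cluster: one in the config.openshift.io API group and another one in operator.openshift.io. We can list them with kubectl api-resources (the exact set of console-related resources will depend on the OpenShift version):
$ kubectl api-resources | grep -i '^console'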
In OpenShift there's an operator for everything, and the web console couldn't be an exception: we can configure it through the Console objects named cluster. Since they are cluster-scoped, there's no need for the -n flag. If we retrieve the config.openshift.io one, by default there's not much configured:
$ kubectl get console.config.openshift.io cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Console
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    release.openshift.io/create-only: "true"
  creationTimestamp: "2022-01-15T22:31:19Z"
  generation: 1
  name: cluster
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: 29c60660-ded7-4fdd-b41e-a236a57bea4d
  resourceVersion: "56372107"
  uid: 7f679be4-72ff-4f3d-a4f2-e35fd038e936
spec: {}
status:
  consoleURL: https://console-openshift-console.apps.test-rosa.abcd.p1.openshiftapps.com
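Note that status.consoleURL matches the route we saw earlier in the openshift-console namespace. If we only want the URL, we can pull it out directly with jsonpath:
$ kubectl get console.config.openshift.io cluster -o jsonpath='{.status.consoleURL}'
https://console-openshift-console.apps.test-rosa.abcd.p1.openshiftapps.com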
To disable the console, we'll need to set the spec.managementState attribute to Removed. This attribute lives on the operator.openshift.io Console object rather than on the config.openshift.io one we just retrieved, so that's the object we have to modify. We can do so with kubectl edit:
kubectl edit console.operator.openshift.io cluster
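If we prefer a one-liner to an interactive edit, a merge patch should achieve the same result:
kubectl patch console.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Removed"}}'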
Either way, once the attribute is set we can retrieve the object again to check it's in place (metadata and the rest of the spec trimmed for brevity):
$ kubectl get console.operator.openshift.io cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
  [...]
spec:
  managementState: Removed
  [...]
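Once the console operator reconciles the change, the console pods and their route should disappear from the openshift-console namespace, which we can verify by re-running the commands from before:
$ kubectl get pods -n openshift-console
$ kubectl get route -n openshift-console
If we ever want the console back, we just need to set spec.managementState back to Managed.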
Posted on 26/01/2023