Environment

  • Version: Kubernetes 1.23
  • OS: Ubuntu 20.04

Question 1 | Contexts

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all context names into /opt/course/1/contexts, one per line.

From the kubeconfig extract the certificate of user restricted@infra-prod and write it decoded to /opt/course/1/cert.

Solution:

# 1. Simple: write all context names from the kubeconfig into the file
kubectl config get-contexts --no-headers | awk '{print $2}' > /opt/course/1/contexts
# 2. The relevant flags can be found via (kubectl config view --raw --help)
# Display the merged kubeconfig settings including the raw certificate data
kubectl config view --raw -ojsonpath="{.users[2].user.client-certificate-data}" | base64 -d > /opt/course/1/cert
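
Indexing the users list by position ([2]) breaks if the kubeconfig ordering changes; a sketch that selects the user by name instead, using a JSONPath filter:

kubectl config view --raw -o jsonpath="{.users[?(@.name=='restricted@infra-prod')].user.client-certificate-data}" | base64 -d > /opt/course/1/cert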

Question 2 | Runtime Security with Falco

Task weight: 4%

Falco is installed with default configuration on node cluster1-worker1. Connect using ssh cluster1-worker1. Use it to:

  1. Find a Pod running image nginx which creates unwanted package management processes inside its container.
  2. Find a Pod running image httpd which modifies /etc/passwd.

Save the Falco logs for case 1 under /opt/course/2/falco.log in format:

time-with-nanoseconds,container-id,container-name,user-name

No other information should be in any line. Collect the logs for at least 30 seconds.

Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0.

Solution:

# First check that the falco service is running and that the default config writes its logs to syslog
service falco status
cat /etc/falco/falco.yaml
...
syslog_output:
  enabled: true
...
# 1. Find the Pod running image nginx which creates unwanted package management processes in its container
cat /var/log/syslog | grep falco | grep nginx | grep process
# 2. Find the Pod running image httpd which modifies /etc/passwd
cat /var/log/syslog | grep falco | grep httpd | grep passwd
# Scale the offending Deployments down to 0. Approach:
# take the container_id from the log lines above, find its Pod ID via crictl, get the Pod name from the Pod ID, then find the Deployment with kubectl
crictl ps | grep 7a5ea6a080d1
7a5ea6a080d1b 6f715d38cfe0e nginx ... 7a864406b9794
crictl pods ls | grep 7a864406b9794
7a864406b9794 ... webapi-6cfddcd6f4-ftxg4 team-blue ...
kubectl get pod -A | grep webapi
team-blue webapi-6cfddcd6f4-ftxg4 1/1 Running
# Finally scale the Deployment found above down to 0. Only one is shown here; handle the other the same way.
kubectl -n team-blue scale deploy webapi --replicas 0

# 3. Save the logs for case 1 to /opt/course/2/falco.log in the required format (case 1 is the container creating unwanted package management processes)
# Edit the rules file
cd /etc/falco/
cp falco_rules.yaml falco_rules.yaml_ori
vim falco_rules.yaml
# Search for: Launch Package Management Process in Container
- rule: Launch Package Management Process in Container
  desc: Package management process ran inside container
  ...
  output: >
    Package management process launched in container %evt.time,%container.id,%container.name,%user.name
  ...
# Restart falco
service falco restart
# Collect logs for at least 30 seconds, filter for the rule and write the result to /tmp/falco.log
falco | grep "Package management" > /tmp/falco.log   # let it run for 30+ seconds, then stop with Ctrl-C
06:38:28.077150666: Error Package management process launched in container 06:38:28.077150666,090aad374a0a,nginx,root
06:38:33.058263010: Error Package management process launched in container 06:38:33.058263010,090aad374a0a,nginx,root
06:38:38.068693625: Error Package management process launched in container 06:38:38.068693625,090aad374a0a,nginx,root
06:38:43.066159360: Error Package management process launched in container 06:38:43.066159360,090aad374a0a,nginx,root
06:38:48.059792139: Error Package management process launched in container 06:38:48.059792139,090aad374a0a,nginx,root
06:38:53.063328933: Error Package management process launched in container 06:38:53.063328933,090aad374a0a,nginx,root
# Cut out the extra fields so each line only contains the required ones
cat /tmp/falco.log | cut -d" " -f 9 > /opt/course/2/falco.log

# Falco output fields reference:
https://falco.org/docs/reference/rules/supported-fields/

Question 3 | Apiserver Security

Task weight: 3%

You received a list from the DevSecOps team which performed a security investigation of the k8s cluster1 (workload-prod). The list states the following about the apiserver setup:

  • Accessible through a NodePort Service

Change the apiserver setup so that:

  • Only accessible through a ClusterIP Service

Reference: https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/

Solution

# Approach: set --kubernetes-service-node-port=0 in the apiserver config (or remove the flag), then delete the apiserver's Service so it gets recreated
# 1. Edit the static Pod manifest and set the parameter to 0
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - --kubernetes-service-node-port=0 # delete or set to 0
...
# 2. Wait for the apiserver Pod to come back, then delete the kubernetes Service (it will be recreated as ClusterIP)
kubectl -n kube-system get pod | grep apiserver
kubectl delete svc kubernetes
kubectl get svc

Question 4 | Pod Security Policies

Task weight: 8%

There is Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node.

You’re asked to forbid this behavior by:

  1. Enabling Admission Plugin PodSecurityPolicy in the apiserver
  2. Creating a PodSecurityPolicy named psp-mount which allows hostPath volumes only for directory /tmp
  3. Creating a ClusterRole named psp-mount which allows to use the new PSP
  4. Creating a RoleBinding named psp-mount in Namespace team-red which binds the new ClusterRole to all ServiceAccounts in the Namespace team-red

Restart the Pod of Deployment container-host-hacker afterwards to verify new creation is prevented.

NOTE: PSPs can affect the whole cluster. Should you encounter issues you can always disable the Admission Plugin again.

Solution

Note: PodSecurityPolicy (PSP) was deprecated in Kubernetes 1.21 and removed entirely in v1.25.

# 1. Enable the PodSecurityPolicy admission plugin in kube-apiserver
cp /etc/kubernetes/manifests/kube-apiserver.yaml{,.bak}
vi /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # add PodSecurityPolicy
# 2. Copy a PSP example from the Kubernetes docs and adjust it
vim 4_psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-mount
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
  allowedHostPaths:      # task requirement
  - pathPrefix: "/tmp"   # task requirement
kubectl apply -f 4_psp.yaml

# 3. Grant the Pods' ServiceAccounts the RBAC permission to use it: create the ClusterRole
kubectl create clusterrole psp-mount --verb=use \
    --resource=podsecuritypolicies --resource-name=psp-mount
# Create a RoleBinding that binds the ClusterRole to all ServiceAccounts in Namespace team-red
kubectl -n team-red create rolebinding psp-mount --clusterrole=psp-mount --group system:serviceaccounts

# Test the PSP: after a rollout restart the new Pod is rejected, so edit the Deployment to use an allowed hostPath
kubectl -n team-red rollout restart deploy container-host-hacker
kubectl -n team-red edit deploy container-host-hacker
...
      volumes:
      - hostPath:
          path: /tmp # change
          type: ""

Question 5 | CIS Benchmark

Task weight: 3%

You're asked to evaluate specific settings of cluster2 against the CIS Benchmark recommendations. Use the tool kube-bench which is already installed on the nodes.

Connect using ssh cluster2-master1 and ssh cluster2-worker1.

On the master node ensure (correct if necessary) that the CIS recommendations are set for:

  1. The --profiling argument of the kube-controller-manager
  2. The ownership of directory /var/lib/etcd

On the worker node ensure (correct if necessary) that the CIS recommendations are set for:

  1. The permissions of the kubelet configuration /var/lib/kubelet/config.yaml
  2. The --client-ca-file argument of the kubelet

Solution

# Master node
# 1. Check the kube-controller-manager findings
kube-bench run --targets=master | grep kube-controller -A 3
1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the below parameter.
--profiling=false
# Add the recommended parameter: --profiling=false
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --profiling=false # add
# After the Pod restarts, re-run kube-bench to confirm the check now passes.
# 2. Fix the ownership of the etcd data directory
ls -lh /var/lib | grep etcd
drwx------ 3 root root 4.0K Sep 11 20:08 etcd
kube-bench run --targets=master | grep "/var/lib/etcd" -B5
ps -ef | grep etcd
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
# Set the ownership
chown etcd:etcd /var/lib/etcd

# Worker node
# 1. Fix the permissions of the kubelet config file
kube-bench run --targets=node | grep /var/lib/kubelet/config.yaml -B2
4.1.9 Run the following command (using the config file location identified in the Audit step)
chmod 644 /var/lib/kubelet/config.yaml # run this command

# 2. Check the kubelet --client-ca-file argument; it already passes, so nothing to change
kube-bench run --targets=node | grep client-ca-file
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
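
A quick way to confirm both fixes with GNU stat:

# on cluster2-master1
stat -c '%U:%G' /var/lib/etcd                 # should print etcd:etcd
# on cluster2-worker1
stat -c '%a' /var/lib/kubelet/config.yaml     # should print 644 (or more restrictive)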

Question 6 | Verify Platform Binaries

Task weight: 2%

There are four Kubernetes server binaries located at /opt/course/6/binaries. You’re provided with the following verified sha512 values for these:

kube-apiserver f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c

kube-controller-manager 60100cc725e91fe1a949e1b2d0474237844b5862556e25c2c655a33boa8225855ec5ee22fa4927e6c46a60d43a7c4403a27268f96fbb726307d1608b44f38a60

kube-proxy 52f9d8ad045f8eee1d689619ef8ceef2d86d50c75a6a332653240d7ba5b2a114aca056d9e513984ade24358c9662714973c1960c62a5cb37dd375631c8a614c6

kubelet 4be40f2440619e990897cf956c32800dc96c2c983bf64519854a3309fa5aa21827991559f9c44595098e27e6f2ee4d64a3fdec6baba8a177881f20e3ec61e26c

Delete those binaries that don’t match with the sha512 values above.

Solution:

ls /opt/course/6/binaries
kube-apiserver kube-controller-manager kube-proxy kubelet
# Straightforward with sha512sum: write the fingerprints of all binaries to /tmp/compare, then paste the values given above into /tmp/compare1
cd /opt/course/6/binaries && sha512sum * > /tmp/compare
vim /tmp/compare1 # paste the given hashes in the same order
cd /tmp && diff compare1 compare
1,2c1,2
< a9d60ae18eef79754d1a085e29e00b54b90b6ce42e05d1c452b81491092a02aeee00ce240573440c5b6e75344a4aa356155c55342fe2dc3b98f805c62d60afc8 kubelet
< 03ad459d28dd2c762b7f522f0d6a4c5d4e23b9cca83e6850b89ca92d81ec917e917dc6f74bf821dc5c61526a714dec45fdde3d54ad0039f05e4dad590bfa5861 kube-controller-manager
---
> f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c kubelet
> f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c kube-controller-manager

# The fingerprints of kubelet and kube-controller-manager differ from the given values, so delete those two binaries
rm /opt/course/6/binaries/kubelet /opt/course/6/binaries/kube-controller-manager
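
An alternative that avoids the manual diff: paste the four given values into a checksum file (format "<sha512>  <filename>", two spaces; /tmp/sums.txt is a hypothetical name) and let sha512sum verify them:

cd /opt/course/6/binaries
sha512sum -c /tmp/sums.txt    # prints OK or FAILED per binary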

Question 7 | Open Policy Agent

Task weight: 6%

The Open Policy Agent and Gatekeeper have been installed to, among other things, enforce blacklisting of certain image registries. Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com.

Test it by creating a single Pod using image very-bad-registry.com/image in Namespace default, it shouldn’t work.

You can also verify your changes by looking at the existing Deployment untrusted in Namespace default, it uses an image from the new untrusted source. The OPA constraint should throw violation messages for this one.

Reference: OPA Gatekeeper

Solution

# 1. Inspect the OPA resources
# List the constraints
kubectl get constraint
NAME                                                            AGE
blacklistimages.constraints.gatekeeper.sh/pod-trusted-images    10m
# Look at the blacklistimages resource
kubectl get blacklistimages pod-trusted-images -o yaml | less
# Edit the constraint template and add very-bad-registry.com to the image blacklist
kubectl edit constrainttemplates blacklistimages
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
...
spec:
  crd:
    spec:
      names:
        kind: BlacklistImages
  targets:
  - rego: |
      package k8strustedimages

      images {
        image := input.review.object.spec.containers[_].image
        not startswith(image, "docker-fake.io/")
        not startswith(image, "google-gcr-fake.com/")
        not startswith(image, "very-bad-registry.com/") # add this line
      }

      violation[{"msg": msg}] {
        not images
        msg := "not trusted image!"
      }
    target: admission.k8s.gatekeeper.sh
# Create a Pod to verify the change took effect
kubectl run opa-test --image=very-bad-registry.com/image
Error from server ([denied by pod-trusted-images] not trusted image!): admission webhook "validation.gatekeeper.sh" denied the request: [denied by pod-trusted-images] not trusted image!
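
The existing Deployment untrusted in Namespace default can serve as a second check; Gatekeeper's audit records violations in the constraint's status (exact output format may vary):

kubectl describe blacklistimages pod-trusted-images | grep -i violation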

Question 8 | Secure Kubernetes Dashboard

Task weight: 3%

The Kubernetes Dashboard is installed in Namespace kubernetes-dashboard and is configured to:

  1. Allow users to “skip login”
  2. Allow insecure access (HTTP without authentication)
  3. Allow basic authentication
  4. Allow access from outside the cluster

You are asked to make it more secure by:

  1. Deny users to “skip login”
  2. Deny insecure access, enforce HTTPS (self signed certificates are ok for now)
  3. Add the --auto-generate-certificates argument
  4. Enforce authentication using a token (with possibility to use RBAC)
  5. Allow only cluster internal access

Solution

# Inspect the Dashboard resources
kubectl -n kubernetes-dashboard get pod,svc
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-fbpd9   1/1     Running   0          24m
pod/kubernetes-dashboard-6d8cd5dd84-w7wr2        1/1     Running   0          24m
NAME                                TYPE        ...   PORT(S)                        AGE
service/dashboard-metrics-scraper   ClusterIP   ...   8000/TCP                       24m
service/kubernetes-dashboard        NodePort    ...   9090:32520/TCP,443:31206/TCP   24m
# Back up the dashboard Deployment, then change it as required
kubectl -n kubernetes-dashboard get deploy kubernetes-dashboard -oyaml > 8_deploy_kubernetes-dashboard.yaml
# Edit in place
kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard
  template:
    spec:
      containers:
      - args:
        - --namespace=kubernetes-dashboard
        - --authentication-mode=token   # set authentication mode to "token", or delete the line (token is the default)
        - --auto-generate-certificates  # add this line to auto-generate certificates
        #- --enable-skip-login=true     # delete this line to deny skipping the login
        #- --enable-insecure-login      # delete this line to deny insecure (HTTP) access
        image: kubernetesui/dashboard:v2.0.3
        imagePullPolicy: Always
        name: kubernetes-dashboard

# Change the Service to cluster-internal access only, i.e. NodePort -> ClusterIP
kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
spec:
  clusterIP: 10.107.176.19
  externalTrafficPolicy: Cluster   # delete
  internalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32513                # delete
    port: 9090
    protocol: TCP
    targetPort: 9090
  - name: https
    nodePort: 32441                # delete
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP                  # change to ClusterIP (or delete the line)
status:
  loadBalancer: {}

# Verify
kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       ...   PORT(S)
dashboard-metrics-scraper   ClusterIP   10.111.171.247   ...   8000/TCP
kubernetes-dashboard        ClusterIP   10.100.118.128   ...   9090/TCP,443/TCP
curl http://192.168.100.11:32520   # the old NodePort is no longer reachable
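
To confirm cluster-internal access still works, a temporary Pod can curl the Dashboard Service (curlimages/curl is an assumed available image):

kubectl -n kubernetes-dashboard run tmp --image=curlimages/curl --restart=Never --rm -i -- curl -k -m 5 https://kubernetes-dashboard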

Question 9 | AppArmor Profile

Task weight: 3%

Some containers need to run more secure and restricted. There is an existing AppArmor profile located at /opt/course/9/profile for this.

  1. Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1.

  2. Add label security=apparmor to the Node

  3. Create a Deployment named apparmor in Namespace default with:

    • One replica of image nginx:1.19.2
    • NodeSelector for security=apparmor
    • Single container named c1 with the AppArmor profile enabled

    The Pod might not run properly with the profile enabled. Write the logs of the Pod into /opt/course/9/logs so another team can work on getting the application running.

Reference

Restrict a Container's Access to Resources with AppArmor

Solution


# 1. Copy the profile file to cluster1-worker1
scp /opt/course/9/profile cluster1-worker1:~/
# Log in to cluster1-worker1 and load the profile
apparmor_parser -q ./profile
# 2. Label the node
kubectl label node cluster1-worker1 security=apparmor
# 3. Create the Deployment
kubectl create deploy apparmor --image=nginx:1.19.2 --dry-run=client -o yaml > 9_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: apparmor
  name: apparmor
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apparmor
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: apparmor
      annotations:                                                               # add, see https://kubernetes.io/zh-cn/docs/tutorials/security/apparmor/
        container.apparmor.security.beta.kubernetes.io/c1: localhost/very-secure # add
    spec:
      nodeSelector:          # add
        security: apparmor   # add
      containers:
      - image: nginx:1.19.2
        name: c1             # rename the container to c1
        resources: {}
# Create the Deployment and write the Pod logs out
kubectl apply -f 9_deploy.yaml
kubectl get pod -owide | grep apparmor
kubectl logs apparmor-85c65645dc-w852p > /opt/course/9/logs   # write the logs to /opt/course/9/logs
/docker-entrypoint.sh: 13: /docker-entrypoint.sh: cannot create /dev/null: Permission denied
/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration
2021/09/15 11:51:57 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
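
To confirm the profile was actually loaded on the node (the profile name very-secure comes from the provided file), a quick check:

ssh cluster1-worker1 "apparmor_status | grep very-secure"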

Question 10 | Container Runtime Sandbox gVisor

Task weight: 4%

Team purple wants to run some of their workloads more secure. Worker node cluster1-worker2 has container engine containerd already installed and it's configured to support the runsc/gvisor runtime.

Create a RuntimeClass named gvisor with handler runsc.

Create a Pod that uses the RuntimeClass . The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.19.2. Make sure the Pod runs on cluster1-worker2.

Write the dmesg output of the successfully started Pod into /opt/course/10/gvisor-test-dmesg.

Reference

Runtime Class

Solution

# 1. Create a RuntimeClass
cat 10_runtimeclass.yaml
# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # name used to reference the RuntimeClass
  # RuntimeClass is a cluster-scoped resource
  name: gvisor
# name of the corresponding CRI handler configuration
handler: runsc

kubectl apply -f 10_runtimeclass.yaml
# 2. Create a Pod that uses the gvisor RuntimeClass
kubectl -n team-purple run gvisor-test --image=nginx:1.19.2 --dry-run=client -o yaml > 10_gvisor-test.yaml
vim 10_gvisor-test.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: gvisor-test
  name: gvisor-test
  namespace: team-purple
spec:
  runtimeClassName: gvisor     # add this line
  nodeName: cluster1-worker2   # pin the Pod to the node
  containers:
  - image: nginx:1.19.2
    name: gvisor-test
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

# 3. Write the dmesg output to /opt/course/10/gvisor-test-dmesg
kubectl -n team-purple exec gvisor-test -- dmesg > /opt/course/10/gvisor-test-dmesg
[ 0.000000] Starting gVisor...
[ 0.417740] Checking naughty and nice process list...
[ 0.623721] Waiting for children...
[ 0.902192] Gathering forks...
[ 1.258087] Committing treasure map to memory...
[ 1.653149] Generating random numbers by fair dice roll...
[ 1.918386] Creating cloned children...
[ 2.137450] Digging up root...
[ 2.369841] Forking spaghetti code...
[ 2.840216] Rewriting operating system in Javascript...
[ 2.956226] Creating bureaucratic processes...
[ 3.329981] Ready!

Question 11 | Secrets in ETCD

Task weight: 7%

There is an existing Secret called database-access in Namespace team-green.

Read the complete Secret content directly from ETCD (using etcdctl) and store it into /opt/course/11/etcd-secret-content. Write the plain and decoded Secret’s value of key “pass” into /opt/course/11/database-password.

Reference

Encrypting Secret Data at Rest

Solution

# Read the Secret directly from etcd and redirect the output to the file. The certificate paths below are the kubeadm defaults on the master node.
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/team-green/database-access > /opt/course/11/etcd-secret-content
# /opt/course/11/etcd-secret-content
/registry/secrets/team-green/database-access
k8s

v1Secret

database-access
team-green"*$3e0acd78-709d-4f07-bdac-d5193d0f2aa32bB
0kubectl.kubernetes.io/last-applied-configuration{"apiVersion":"v1","data":{"pass":"Y29uZmlkZW50aWFs"},"kind":"Secret","metadata":{"annotations":{},"name":"database-access","namespace":"team-green"}}
z
kubectl-client-side-applyUpdatevFieldsV1:
{"f:data":{".":{},"f:pass":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}}
pass
confidentialOpaque"
# Decode the pass value and write the plaintext to the file
echo Y29uZmlkZW50aWFs | base64 -d > /opt/course/11/database-password
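
If unsure about the certificate paths, they can be cross-checked against the etcd flags of the apiserver manifest (the apiserver's own etcd client certificate works as well):

grep etcd /etc/kubernetes/manifests/kube-apiserver.yaml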

Question 12 | Hack Secrets

Task weight: 8%

You’re asked to investigate a possible permission escape in Namespace restricted. The context authenticates as user restricted which has only limited permissions and shouldn’t be able to read Secret values.

Try to find the password-key values of the Secrets secret1, secret2 and secret3 in Namespace restricted. Write the decoded plaintext values into files /opt/course/12/secret1, /opt/course/12/secret2 and /opt/course/12/secret3.

Solution

# This task is about finding leaked Secrets
# Secret 1: dump all Pods and grep for secret mounts
kubectl -n restricted get pod -o yaml | grep -i secret
kubectl -n restricted exec pod1-fd5d64b9c-pcx6q -- cat /etc/secret-volume/password > /opt/course/12/secret1
# Secret 2: check the Pods' environment variables and grep for PASS
kubectl -n restricted exec pod2-6494f7699b-4hks5 -- env | grep PASS
PASSWORD=an-amazing
echo an-amazing > /opt/course/12/secret2
# Secret 3: check whether a ServiceAccount token is mounted, then query the API with it
kubectl -n restricted exec -it pod3-748b48594-24s76 -- sh

mount | grep serviceaccount
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
ls /run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
# Query the Secrets of the restricted Namespace via the API using the mounted token
# See accessing the API: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/access-cluster-api/
# Also see encrypting Secret data at rest: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/encrypt-data/
curl https://kubernetes.default/api/v1/namespaces/restricted/secrets -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" -k
...
{
"metadata": {
"name": "secret3",
"namespace": "restricted",
...
}
]
},
"data": {
"password": "cEVuRXRSYVRpT24tdEVzVGVSCg=="
},
"type": "Opaque"
}
...
# The password field is visible, decode it
echo cEVuRXRSYVRpT24tdEVzVGVSCg== | base64 -d > /opt/course/12/secret3

Question 13 | Restrict access to Metadata Server

Task weight: 7%

There is a metadata service available at http://192.168.100.21:32000 on which Nodes can reach sensitive data, like cloud credentials for initialisation. By default, all Pods in the cluster also have access to this endpoint. The DevSecOps team has asked you to restrict access to this metadata server.

In Namespace metadata-access:

  • Create a NetworkPolicy named metadata-deny which prevents egress to 192.168.100.21 for all Pods but still allows access to everything else
  • Create a NetworkPolicy named metadata-allow which allows Pods having label role: metadata-accessor to access endpoint 192.168.100.21

There are existing Pods in the target Namespace with which you can test your policies, but don’t change their labels.

Reference

Network Policies

Solution


vim 13_metadata-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-deny
  namespace: metadata-access
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 192.168.100.21/32

vim 13_metadata-allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-allow
  namespace: metadata-access
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.100.21/32

# Apply both manifests above
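
To test without touching the labels of the existing Pods, temporary Pods can be used (curlimages/curl is an assumed available image):

# without the label: egress to the metadata endpoint should time out
kubectl -n metadata-access run np-test1 --image=curlimages/curl --restart=Never --rm -i -- curl -m 2 http://192.168.100.21:32000
# with role=metadata-accessor: the request should go through
kubectl -n metadata-access run np-test2 --labels=role=metadata-accessor --image=curlimages/curl --restart=Never --rm -i -- curl -m 2 http://192.168.100.21:32000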

Question 14 | Syscall Activity

Task weight: 4%

There are Pods in Namespace team-yellow. A security investigation noticed that some processes running in these Pods are using the Syscall kill, which is forbidden by a Team Yellow internal policy.

Find the offending Pod(s) and remove these by reducing the replicas of the parent Deployment to 0.

Solution

# Restricting syscalls for container processes makes sense. Docker/containerd already block some by default (for example the reboot syscall), and even more can be restricted, e.g. with Seccomp or AppArmor.
# For this task: find the node the team-yellow Pods run on, log in, use crictl to get the container processes, then inspect their syscalls with strace
kubectl -n team-yellow get pod -owide
NAME                          ...   NODE               NOMINATED NODE   ...
collector1-7585cc58cb-n5rtd   1/1   ...   cluster1-worker1   <none>     ...
collector1-7585cc58cb-vdlp9   1/1   ...   cluster1-worker1   <none>     ...
collector2-8556679d96-z7g7c   1/1   ...   cluster1-worker1   <none>     ...
collector3-8b58fdc88-pjg24    1/1   ...   cluster1-worker1   <none>     ...
collector3-8b58fdc88-s9ltc    1/1   ...   cluster1-worker1   <none>     ...
# Log in to the node
ssh cluster1-worker1
crictl pods ls | grep collector1
POD ID          CREATED          STATE   NAME                          ...
21aacb8f4ca8d   17 minutes ago   Ready   collector1-7585cc58cb-vdlp9   ...
186631e40104d   17 minutes ago   Ready   collector1-7585cc58cb-n5rtd   ...
crictl ps | grep 21aacb8f4ca8d
CONTAINER ID    IMAGE           CREATED          ...   POD ID
9ea02422f8660   5d867958e04e1   12 minutes ago   ...   21aacb8f4ca8d
# Use crictl inspect to find the process name (and from that the PID)
crictl inspect 9ea02422f8660 | grep args -A1
    "args": [
      "./collector1-process"
# Find the PIDs with ps
ps aux | grep collector1-process
root     35039  0.0  0.1 702208  1044 ?   Ssl  13:37   0:00 ./collector1-process
root     35059  0.0  0.1 702208  1044 ?   Ssl  13:37   0:00 ./collector1-process
# Attach strace to one of the PIDs to inspect the syscalls
strace -p 35039
strace: Process 35039 attached
futex(0x4d7e68, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
kill(666, SIGTERM) = -1 ESRCH (No such process)
epoll_pwait(3, [], 128, 999, NULL, 1) = 0
kill(666, SIGTERM) = -1 ESRCH (No such process)
epoll_pwait(3, [], 128, 999, NULL, 1) = 0
kill(666, SIGTERM) = -1 ESRCH (No such process)
epoll_pwait(3, ^Cstrace: Process 35039 detached
 <detached ...>
...
# kill(666, SIGTERM) shows collector1 uses the forbidden kill syscall
# Check collector2 and collector3 the same way; only collector1 is affected, so scale its Deployment to 0
kubectl -n team-yellow scale deploy collector1 --replicas 0

Question 15 | Configure TLS on Ingress

Task weight: 4%

In Namespace team-pink there is an existing Nginx Ingress resource named secure which accepts two paths /app and /api which point to different ClusterIP Services.

From your main terminal you can connect to it using for example:

  • HTTP: curl -v http://secure-ingress.test:31080/app
  • HTTPS: curl -kv https://secure-ingress.test:31443/app

Right now it uses a default generated TLS certificate by the Nginx Ingress Controller.

You’re asked to instead use the key and certificate provided at /opt/course/15/tls.key and /opt/course/15/tls.crt. As it’s a self-signed certificate you need to use curl -k when connecting to it.

Reference

Ingress configuration, specifically the TLS section

Solution

# This is mainly about configuring TLS for an nginx Ingress
# First create a TLS Secret from the provided key and certificate
kubectl -n team-pink create secret tls tls-secret --key /opt/course/15/tls.key --cert /opt/course/15/tls.crt
secret/tls-secret created
# Edit the Ingress and add the tls section
kubectl -n team-pink edit ing secure
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  ...
  generation: 1
  name: secure
  namespace: team-pink
  ...
spec:
  tls:                       # add
  - hosts:                   # add
    - secure-ingress.test    # add
    secretName: tls-secret   # add
  rules:
  - host: secure-ingress.test
    http:
      paths:
      - backend:
          service:
            name: secure-app
            port: 80
        path: /app
        pathType: ImplementationSpecific
      - backend:
          service:
            name: secure-api
            port: 80
        path: /api
        pathType: ImplementationSpecific
...

# Verify
kubectl -n team-pink get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
secure <none> secure-ingress.test 192.168.100.12 80, 443 25m
curl -k https://secure-ingress.test:31443/api
curl -kv https://secure-ingress.test:31443/api
...
* Server certificate:
* subject: CN=secure-ingress.test; O=secure-ingress.test
* start date: Sep 25 18:22:10 2020 GMT
* expire date: Sep 20 18:22:10 2040 GMT
* issuer: CN=secure-ingress.test; O=secure-ingress.test
* SSL certificate verify result: self signed certificate (18), continuing anyway.
...

Question 16 | Docker Image Attack Surface

Task weight: 7%

There is a Deployment image-verify in Namespace team-blue which runs image registry.killer.sh:5000/image-verify:v1. DevSecOps has asked you to improve this image by:

  1. Changing the base image to alpine:3.12
  2. Not installing curl
  3. Updating nginx to use the version constraint >=1.18.0
  4. Running the main process as user myuser

Do not add any new lines to the Dockerfile, just edit existing ones. The file is located at /opt/course/16/image/Dockerfile.

Tag your version as v2. You can build, tag and push using:

cd /opt/course/16/image
podman build -t registry.killer.sh:5000/image-verify:v2 .
podman run registry.killer.sh:5000/image-verify:v2   # to test your changes
podman push registry.killer.sh:5000/image-verify:v2

Make the Deployment use your updated image tag v2.

Solution

# Edit the Dockerfile as required
# /opt/course/16/image/Dockerfile

# change the base image
FROM alpine:3.12

# do not install curl; pin nginx to >=1.18.0
RUN apk update && apk add vim nginx>=1.18.0

RUN addgroup -S myuser && adduser -S myuser -G myuser
COPY ./run.sh run.sh
RUN ["chmod", "+x", "./run.sh"]

# run the main process as myuser
USER myuser

ENTRYPOINT ["/bin/sh", "./run.sh"]

# Build, test and push the updated image
podman build -t registry.killer.sh:5000/image-verify:v2 .
podman push registry.killer.sh:5000/image-verify:v2

# Update the image in the Deployment
kubectl -n team-blue edit deploy image-verify
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: registry.killer.sh:5000/image-verify:v2 # update the image tag
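
A quick check that the Deployment rolled out with the new tag:

kubectl -n team-blue rollout status deploy image-verify
kubectl -n team-blue get deploy image-verify -o yaml | grep image-verify:v2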

Question 17 | Audit Log Policy

Task weight: 7%

Audit Logging has been enabled in the cluster with an Audit Policy located at /etc/kubernetes/audit/policy.yaml on cluster2-master1.

Change the configuration so that only one backup of the logs is stored.

Alter the Policy in a way that it only stores logs:

  1. From Secret resources, level Metadata
  2. From “system:nodes” userGroups, level RequestResponse

After you altered the Policy make sure to empty the log file so it only contains entries according to your changes, like using truncate -s 0 /etc/kubernetes/audit/logs/audit.log.

NOTE: You can use jq to render json more readable. cat data.json | jq

Reference

Auditing

Solution

# This task is about changing the audit configuration and keeping only the required log entries
# 1. Set the apiserver's maximum number of audit log backups to 1
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.100.21:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit/logs/audit.log
    - --audit-log-maxsize=5
    - --audit-log-maxbackup=1   # change
    - --advertise-address=192.168.100.21
    - --allow-privileged=true
...
# 2. Edit the audit Policy file
# /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:

# log Secret resources at Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]

# log the system:nodes userGroups at RequestResponse level
- level: RequestResponse
  userGroups: ["system:nodes"]

# log nothing else
- level: None

# 3. After saving the config, empty the log file so it only contains entries based on the new Policy
cd /etc/kubernetes/manifests/
# temporarily remove the kube-apiserver static Pod
mv kube-apiserver.yaml ..
crictl ps
# empty the log
truncate -s 0 /etc/kubernetes/audit/logs/audit.log
# move the kube-apiserver manifest back
# verify the new entries match the expectation
cat audit.log | tail | jq
...
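
After the apiserver is back up, a quick jq check that only the intended entries are being logged:

tail -n 5 /etc/kubernetes/audit/logs/audit.log | jq '{level: .level, user: .user.username, resource: .objectRef.resource}'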

Question 18 | Investigate Break-in via Audit Log

Task weight: 4%

Namespace security contains five Secrets of type Opaque which can be considered highly confidential. The latest Incident-Prevention-Investigation revealed that ServiceAccount p.auster had too broad access to the cluster for some time. This SA should’ve never had access to any Secrets in that Namespace .

Find out which Secrets in Namespace security this SA did access by looking at the Audit Logs under /opt/course/18/audit.log.

Change the password to any new string of only those Secrets that were accessed by this SA .

NOTE: You can use jq to render json more readable. cat data.json | jq

Solution

# Find out which Secrets the p.auster ServiceAccount accessed, then change the passwords of those Secrets
# List the Opaque Secrets in Namespace security
kubectl -n security get secret | grep Opaque
kubeadmin-token   Opaque   1   37m
mysql-admin       Opaque   1   37m
postgres001       Opaque   1   37m
postgres002       Opaque   1   37m
vault-token       Opaque   1   37m
# Filter the audit log for Secret accesses by the p.auster SA
cat /opt/course/18/audit.log | grep "p.auster" | grep Secret | wc -l
2
# Render the matching entries with jq: the SA accessed the vault-token and mysql-admin Secrets
cat /opt/course/18/audit.log | grep "p.auster" | grep Secret | grep get | jq

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "74fd9e03-abea-4df1-b3d0-9cfeff9ad97a",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/security/secrets/vault-token",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:security:p.auster",
    "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:security",
      "system:authenticated"
    ]
  },
  ...
  "userAgent": "curl/7.64.0",
  "objectRef": {
    "resource": "secrets",
    "namespace": "security",
    "name": "vault-token",
    "apiVersion": "v1"
  },
  ...
}
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "aed6caf9-5af0-4872-8f09-ad55974bb5e0",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/security/secrets/mysql-admin",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:security:p.auster",
    "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:security",
      "system:authenticated"
    ]
  },
  ...
  "userAgent": "curl/7.64.0",
  "objectRef": {
    "resource": "secrets",
    "namespace": "security",
    "name": "mysql-admin",
    "apiVersion": "v1"
  },
  ...
}

# Change the passwords of the vault-token and mysql-admin Secrets
echo new-vault-pass | base64
bmV3LXZhdWx0LXBhc3MK

kubectl -n security edit secret vault-token

echo new-mysql-pass | base64
bmV3LW15c3FsLXBhc3MK

kubectl -n security edit secret mysql-admin
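
As a cross-check that no other Secrets in the Namespace were read by this ServiceAccount, the audit log can also be filtered directly with jq:

cat /opt/course/18/audit.log | jq -r 'select(.user.username == "system:serviceaccount:security:p.auster" and .objectRef.resource == "secrets" and .objectRef.namespace == "security") | .objectRef.name' | sort -u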

Question 19 | Immutable Root FileSystem

Task weight: 2%

The Deployment immutable-deployment in Namespace team-purple should run immutable, it’s created from file /opt/course/19/immutable-deployment.yaml. Even after a successful break-in, it shouldn’t be possible for an attacker to modify the filesystem of the running container.

Modify the Deployment in a way that no processes inside the container can modify the local filesystem, only /tmp directory should be writeable. Don’t modify the Docker image.

Save the updated YAML under /opt/course/19/immutable-deployment-new.yaml and update the running Deployment .

Reference

Pod and container security context

Solution

# By default processes in a container can write to the local filesystem, which widens the attack surface
# if a non-malicious process gets hijacked. Preventing writes, or allowing only specific directories, reduces the risk.
# The root filesystem can be made read-only in the Docker image itself or in the Pod spec.
# /opt/course/19/immutable-deployment-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: team-purple
  name: immutable-deployment
  labels:
    app: immutable-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: immutable-deployment
  template:
    metadata:
      labels:
        app: immutable-deployment
    spec:
      containers:
      - image: busybox:1.32.0
        command: ['sh', '-c', 'tail -f /dev/null']
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:                 # add
          readOnlyRootFilesystem: true   # add
        volumeMounts:                    # add
        - mountPath: /tmp                # add
          name: temp-vol                 # add
      volumes:                           # add
      - name: temp-vol                   # add
        emptyDir: {}                     # add
      restartPolicy: Always
# Apply
kubectl apply -f /opt/course/19/immutable-deployment-new.yaml
# Test
kubectl -n team-purple exec immutable-deployment-5b7ff8d464-j2nrj -- touch /abc.txt
touch: /abc.txt: Read-only file system
command terminated with exit code 1
kubectl -n team-purple exec immutable-deployment-5b7ff8d464-j2nrj -- touch /tmp/abc.txt

Question 20 | Update Kubernetes

Task weight: 8%

The cluster is running Kubernetes 1.22.4, update it to 1.23.1.

Use apt package manager and kubeadm for this.

Use ssh cluster3-master1 and ssh cluster3-worker1 to connect to the instances.

Reference

Upgrading with kubeadm upgrade

Solution

# Upgrade the cluster following the official docs.
# Drain and upgrade one node, then repeat on the other.
kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
cluster3-master1   Ready    control-plane,master   58m   v1.22.4
cluster3-worker1   Ready    <none>                 54m   v1.22.4
# 1. Log in and drain the node
ssh cluster3-master1
kubectl drain cluster3-master1 --ignore-daemonsets
# 2. Upgrade kubelet and kubectl (replace x in 1.23.x-00 with the latest patch version)
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.23.1-00 kubectl=1.23.1-00 && \
apt-mark hold kubelet kubectl
# 3. Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# 4. Uncordon the node
kubectl uncordon cluster3-master1
# Repeat steps 1-4 on the worker node
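
The task also asks to use kubeadm. A minimal sketch of the kubeadm part, assuming the standard apt package version string 1.23.1-00, run after draining and before upgrading kubelet on each node:

# on cluster3-master1
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.23.1-00 && apt-mark hold kubeadm
kubeadm upgrade plan
kubeadm upgrade apply v1.23.1
# on cluster3-worker1 (after draining it from the master)
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.23.1-00 && apt-mark hold kubeadm
kubeadm upgrade node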

Question 21 | Image Vulnerability Scanning

Task weight: 2%

The Vulnerability Scanner trivy is installed on your main terminal. Use it to scan the following images for known CVEs:

  • nginx:1.16.1-alpine
  • k8s.gcr.io/kube-apiserver:v1.18.0
  • k8s.gcr.io/kube-controller-manager:v1.18.0
  • docker.io/weaveworks/weave-kube:2.7.0

Write all images that don’t contain the vulnerabilities CVE-2020-10878 or CVE-2020-1967 into /opt/course/21/good-images.

Solution

# Run trivy against each image and grep for the two CVEs (CVE-2020-10878, CVE-2020-1967); write the images without findings into /opt/course/21/good-images
trivy nginx:1.16.1-alpine | grep -E 'CVE-2020-10878|CVE-2020-1967'
| libcrypto1.1 | CVE-2020-1967 | MEDIUM
| libssl1.1 | CVE-2020-1967 |
trivy k8s.gcr.io/kube-apiserver:v1.18.0 | grep -E 'CVE-2020-10878|CVE-2020-1967'
| perl-base | CVE-2020-10878 | HIGH
trivy k8s.gcr.io/kube-controller-manager:v1.18.0 | grep -E 'CVE-2020-10878|CVE-2020-1967'
| perl-base | CVE-2020-10878 | HIGH
trivy docker.io/weaveworks/weave-kube:2.7.0 | grep -E 'CVE-2020-10878|CVE-2020-1967'

echo 'docker.io/weaveworks/weave-kube:2.7.0' > /opt/course/21/good-images

Question 22 | Manual Static Security Analysis

Task weight: 3%

The Release Engineering Team has shared some YAML manifests and Dockerfiles with you to review. The files are located under /opt/course/22/files.

As a container security expert, you are asked to perform a manual static analysis and find out possible security issues with respect to unwanted credential exposure. Running processes as root is of no concern in this task.

Write the filenames which have issues into /opt/course/22/security-issues.

NOTE: In the Dockerfile and YAML manifests, assume that the referred files, folders, secrets and volume mounts are present. Disregard syntax or logic errors.

Solution

# Analyze the listed files for credential-exposure risks and write the offending filenames into /opt/course/22/security-issues
#
# ls -la /opt/course/22/files
total 48
drwxr-xr-x 2 k8s k8s 4096 Sep 16 19:08 .
drwxr-xr-x 3 k8s k8s 4096 Sep 16 19:08 ..
-rw-r--r-- 1 k8s k8s  692 Sep 16 19:08 Dockerfile-go
-rw-r--r-- 1 k8s k8s  897 Sep 16 19:08 Dockerfile-mysql
-rw-r--r-- 1 k8s k8s  743 Sep 16 19:08 Dockerfile-py
-rw-r--r-- 1 k8s k8s  341 Sep 16 19:08 deployment-nginx.yaml
-rw-r--r-- 1 k8s k8s  705 Sep 16 19:08 deployment-redis.yaml
-rw-r--r-- 1 k8s k8s  392 Sep 16 19:08 pod-nginx.yaml
-rw-r--r-- 1 k8s k8s  228 Sep 16 19:08 pv-manual.yaml
-rw-r--r-- 1 k8s k8s  188 Sep 16 19:08 pvc-manual.yaml
-rw-r--r-- 1 k8s k8s  211 Sep 16 19:08 sc-local.yaml
-rw-r--r-- 1 k8s k8s  902 Sep 16 19:08 statefulset-nginx.yaml
# File 1: the secret token is copied in layer X, used in layer Y and deleted in layer Z. That looks fine, but every layer is kept in the image, so even after deletion the token is still contained in layers X and Y.
/opt/course/22/files/Dockerfile-mysql
FROM ubuntu

# Add MySQL configuration
COPY my.cnf /etc/mysql/conf.d/my.cnf
COPY mysqld_charset.cnf /etc/mysql/conf.d/mysqld_charset.cnf

RUN apt-get update && \
    apt-get -yq install mysql-server-5.6 &&

# Add MySQL scripts
COPY import_sql.sh /import_sql.sh
COPY run.sh /run.sh

# Configure credentials
COPY secret-token . # LAYER X
RUN /etc/register.sh ./secret-token # LAYER Y
RUN rm ./secret-token # delete secret token again # LAYER Z

EXPOSE 3306
CMD ["/run.sh"]
# echo Dockerfile-mysql >> /opt/course/22/security-issues
# File 2: deployment-redis.yaml reads credentials from the Secret mysecret and then echoes them in the container command, exposing them in plain text
# /opt/course/22/files/deployment-redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: mycontainer
        image: redis
        command: ["/bin/sh"]
        args:
        - "-c"
        - "echo $SECRET_USERNAME && echo $SECRET_PASSWORD && docker-entrypoint.sh" # NOT GOOD
        env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
# echo deployment-redis.yaml >> /opt/course/22/security-issues
# File 3: statefulset-nginx.yaml contains a plaintext password
# /opt/course/22/files/statefulset-nginx.yaml
...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        env:
        - name: Username
          value: Administrator
        - name: Password
          value: MyDiReCtP@sSw0rd # NOT GOOD
        ports:
        - containerPort: 80
          name: web
..
# echo statefulset-nginx.yaml >> /opt/course/22/security-issues
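
The resulting file should then contain exactly these three filenames:

cat /opt/course/22/security-issues
Dockerfile-mysql
deployment-redis.yaml
statefulset-nginx.yaml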