apiVersion: v1
kind: Secret
metadata:
name: cloudflare-api-token-secret
namespace: cert-manager
type: Opaque
stringData:
api-token: 1txxxxqP-xxxxxxxxxxxxxxTp5JIxxxxxnf
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: k8slab-issuer
namespace: cert-manager
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: cert-k8slab
solvers:
- dns01:
cloudflare:
email: xxx@xxxx.xxxx.com
apiTokenSecretRef:
name: cloudflare-api-token-secret
key: api-token
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: k8slab-cert
namespace: default
spec:
secretName: k8slab-cert-tls
issuerRef:
name: k8slab-issuer
kind: ClusterIssuer
commonName: '*.k8slab.example.com'
dnsNames:
- k8slab.example.com
- "*.k8slab.example.com"
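Assuming the three manifests above are saved to files (the filenames here are placeholders), applying them and watching issuance looks roughly like this; READY flips to True once the DNS-01 challenge completes:
kubectl apply -f cloudflare-secret.yaml
kubectl apply -f k8slab-issuer.yaml
kubectl apply -f k8slab-cert.yaml
kubectl -n default get certificate k8slab-cert
kubectl -n default describe certificate k8slab-cert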
cd /var/named/dynamic
dnssec-keygen -a hmac-sha256 -b 128 -n HOST externaldns-key
chown named.named Kexternaldns-key.*
cat Kexternaldns-key.*.key | awk '{print $NF}'
Make sure the output looks something like y+gUcHxLWqzg3JcBU2bbgw== and copy it.
key "externaldns-key" {
algorithm hmac-sha256;
secret "y+gUcHxLWqzg3JcBU2bbgw==";
};
zone "k8s.example.org" {
type master;
file "/var/named/dynamic/named.k8s.example.org";
allow-transfer {
key "externaldns-key";
};
update-policy {
grant externaldns-key zonesub ANY;
};
};
Note: paste the key you copied as the secret value.
$TTL 60 ; 1 minute
@ IN SOA k8s.example.org. root.k8s.example.org. (
16 ; serial
60 ; refresh (1 minute)
60 ; retry (1 minute)
60 ; expire (1 minute)
60 ; minimum (1 minute)
)
NS ns.k8s.example.org.
ns A 192.168.100.1
Note: change the BIND server's IP to your actual IP.
systemctl restart named
systemctl status -l named
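As an extra check, you can confirm that TSIG-signed dynamic updates are accepted, which is exactly what external-dns will do over RFC2136. A sketch, assuming the key name, secret, and server IP configured above:
nsupdate -y hmac-sha256:externaldns-key:y+gUcHxLWqzg3JcBU2bbgw== << END
server 192.168.100.1
zone k8s.example.org
update add test.k8s.example.org 60 A 192.168.100.99
send
END
dig @192.168.100.1 test.k8s.example.org +short # should print 192.168.100.99; delete the record the same way afterwards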
Make sure there are no errors and that the serial of k8s.example.org is correct.
wget https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
sed -i '/^apiVersion: apps/s/beta2//' metallb.yaml
kubectl apply -f metallb.yaml
Because our k8s here has already been upgraded to v1.16, the api versions need adjusting (the sed above strips "beta2", turning apps/v1beta2 into apps/v1). If your environment is v1.15 or earlier, skip the sed command.
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: my-ip-space
protocol: layer2
addresses:
- 192.168.100.240-192.168.100.249
Change the IP range to your actual subnet; these are the IPs handed out to k8s Service resources. Once modified, apply it.
apiVersion: v1
kind: Namespace
metadata:
name: external-dns
labels:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
namespace: external-dns
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- get
- watch
- list
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
namespace: external-dns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: external-dns
spec:
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.opensource.zalan.do/teapot/external-dns:v0.5.17
args:
- --provider=rfc2136
- --registry=txt
- --txt-owner-id=k8s
- --source=service
- --source=ingress
- --domain-filter=k8s.example.org
- --rfc2136-host=192.168.100.1
- --rfc2136-port=53
- --rfc2136-zone=k8s.example.org
- --rfc2136-tsig-secret=y+gUcHxLWqzg3JcBU2bbgw==
- --rfc2136-tsig-secret-alg=hmac-sha256
- --rfc2136-tsig-keyname=externaldns-key
- --rfc2136-tsig-axfr
#- --interval=10s
#- --log-level=debug
The last two lines are for debugging; remove the # comment markers only when you need them. The critical settings are the DNS server and the key, so double-check those.
kubectl apply -f external-dns.yaml
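To confirm the deployment actually came up, a quick sketch:
kubectl -n external-dns rollout status deployment/external-dns
kubectl -n external-dns get pods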
If there are no errors, it is ready.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-svc
annotations:
external-dns.alpha.kubernetes.io/hostname: nginx.k8s.example.org
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer
The most critical part of external-dns is that it takes the hostname from the service's annotations, then picks up the IP allocated by the load balancer and performs the DNS update.
kubectl apply -f nginx.yaml
If there are no errors, it is ready.
kubectl -n external-dns logs external-dns-5d986694c9-5n9wm
Note that your actual pod name will likely differ; adjust accordingly. If everything went well, the output contains lines like the following:
time="2019-12-04T16:26:44Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2019-12-04T16:26:49Z" level=info msg="Configured RFC2136 with zone 'k8s.example.org.' and nameserver '192.168.100.1:53'"
time="2019-12-04T16:26:49Z" level=info msg="Adding RR: nginx.k8s.example.org 0 A 192.168.100.245"
time="2019-12-04T16:26:49Z" level=info msg="Adding RR: nginx.k8s.example.org 0 TXT \"heritage=external-dns,external-dns/owner=k8s,external-dns/resource=service/default/nginx-svc\""
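You can also confirm the record actually resolves against the BIND server (a sketch; the IP assumes the setup above):
dig @192.168.100.1 nginx.k8s.example.org +short # should print the LoadBalancer IP, e.g. 192.168.100.245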
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
spec:
rules:
- host: ingress.k8s.example.org
http:
paths:
- path: /
backend:
serviceName: nginx-svc
servicePort: 80
Note that using an ingress differs slightly from a service: external-dns reads the hostname from the ingress's spec.rules host field instead of from an annotation. On the BIND side the updates look like this:
12月 05 00:43:49 srv1.localdomain named[6533]: client 192.168.100.56#58519/key externaldns-key: updating zone 'k8s.example.org/IN': deleting rrset at 'nginx.k8s.example.org' A
12月 05 00:43:49 srv1.localdomain named[6533]: zone k8s.example.org/IN: sending notifies (serial 21)
12月 05 00:43:49 srv1.localdomain named[6533]: client 192.168.100.56#45520/key externaldns-key: updating zone 'k8s.example.org/IN': deleting rrset at 'ingress.k8s.example.org' A
12月 05 00:43:49 srv1.localdomain named[6533]: client 192.168.100.56#49731/key externaldns-key: updating zone 'k8s.example.org/IN': deleting rrset at 'nginx.k8s.example.org' TXT
12月 05 00:43:49 srv1.localdomain named[6533]: client 192.168.100.56#56605/key externaldns-key: updating zone 'k8s.example.org/IN': deleting rrset at 'ingress.k8s.example.org' TXT
mkdir descheduler-yaml
cd descheduler-yaml
cat > cluster_role.yaml << END
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: descheduler
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
END
kubectl apply -f cluster_role.yaml
kubectl create sa descheduler -n kube-system
kubectl create clusterrolebinding descheduler \
-n kube-system \
--clusterrole=descheduler \
--serviceaccount=kube-system:descheduler
cat > config_map.yaml << END
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler
namespace: kube-system
data:
policy.yaml: |-
apiVersion: descheduler/v1alpha1
kind: DeschedulerPolicy
strategies:
RemoveDuplicates:
enabled: true
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
thresholds:
cpu: 20
memory: 20
pods: 20
targetThresholds:
cpu: 50
memory: 50
pods: 50
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
END
kubectl apply -f config_map.yaml
cat > cron_job.yaml << END
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: descheduler
namespace: kube-system
spec:
schedule: "*/30 * * * *"
jobTemplate:
metadata:
name: descheduler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: "true"
spec:
template:
spec:
serviceAccountName: descheduler
containers:
- name: descheduler
image: komljen/descheduler:v0.6.0
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
command:
- /bin/descheduler
- --v=4
- --max-pods-to-evict-per-node=10
- --policy-config-file=/policy-dir/policy.yaml
restartPolicy: "OnFailure"
volumes:
- name: policy-volume
configMap:
name: descheduler
END
kubectl apply -f cron_job.yaml
kubectl get cronjobs -n kube-system
Confirm the descheduler cronjob shows up in the output. Then look for completed job pods:
kubectl get pods -n kube-system | grep Completed
The completed descheduler job pods will be listed. Check the logs of one of them:
kubectl -n kube-system logs descheduler-1564671000-g69nc
If nothing was triggered, the last line of the log will say so. To force some evictions, drain a node:
kubectl drain worker03.localdomain --ignore-daemonsets --delete-local-data --grace-period=0 --force
kubectl get nodes worker03.localdomain
Confirm the node status shows SchedulingDisabled, then bring it back:
kubectl uncordon worker03.localdomain
kubectl get nodes
Confirm all nodes are back in Ready status, then after the next scheduled run check the completed jobs again:
kubectl get pods -n kube-system | grep Completed
A new completed descheduler pod will appear; check its logs:
kubectl -n kube-system logs descheduler-1564672200-sq5zw
You will find that quite a number of pods have been evicted.
A MetalLB deployment consists of two kinds of components:
controller
speaker
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: my-ip-space
protocol: layer2
addresses:
- 192.168.100.240-192.168.100.249
apiVersion: v1
kind: Namespace
metadata:
name: metallb-test
labels:
app: metallb
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: metallb-test
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-deployment
namespace: metallb-test
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx-deployment LoadBalancer 10.103.250.239 192.168.100.240 80:16656/TCP 2m51s app=nginx
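A quick way to confirm the EXTERNAL-IP really answers (a sketch based on the output above):
curl -I http://192.168.100.240/ # expect an HTTP 200 with a Server: nginx header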
ceph osd pool create kube 32 32 # adjust the PG number to fit your cluster
ceph osd pool application enable 'kube' 'rbd'
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube'
ceph auth get-key client.admin | base64 # the output maps to the k8s ceph-secret-admin secret
ceph auth get-key client.kube | base64 # the output maps to the k8s ceph-secret-kube secret
yum install -y ceph-common
scp 192.168.100.21:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
mkdir ~/kube-ceph
cd ~/kube-ceph
cat > kube-ceph-secret.yaml << END
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
key: QVFEYzF0SmNMaVpkRmhBQWlKbUhNbndaR2tCdldFcThXWDhaaXc9PQ==
---
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-kube
type: "kubernetes.io/rbd"
data:
key: QVFDSFdUaGROcC9LT2hBQUpkVG5XVUpQUOYrZGtvZ2k3S0Zwc0E9PQ==
END
cat > kube-ceph-sc.yaml << END
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ceph-rbd
#provisioner: kubernetes.io/rbd
provisioner: ceph.com/rbd
parameters:
monitors: 192.168.100.21:6789,192.168.100.22:6789,192.168.100.23:6789
adminId: admin
adminSecretName: ceph-secret-admin
adminSecretNamespace: default
pool: kube
userId: kube
userSecretName: ceph-secret-kube
userSecretNamespace: default
imageFormat: "2"
imageFeatures: layering
END
cat > kube-ceph-pvc.yaml << END
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ceph-k8s-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: ceph-rbd
resources:
requests:
storage: 1Gi
END
git clone https://github.com/kubernetes-incubator/external-storage
cd external-storage/ceph/rbd/deploy/rbac/
kubectl apply -f ./
kubectl get pods
# Make sure the rbd-provisioner pod is in the Running state.
cd ~/kube-ceph
kubectl apply -f ./
kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get StorageClass
# Confirm ceph-rbd is the only default StorageClass
kubectl get pvc
# Confirm the pvc is correctly Bound
rbd list -p kube
# You should see the rbd image created for the claim.
cat > kube-ceph-pod.yaml << END
apiVersion: v1
kind: Pod
metadata:
name: kube-ceph-pod
spec:
containers:
- name: ceph-busybox
image: busybox
command: ["sleep", "60000"]
volumeMounts:
- name: ceph-volume
mountPath: /usr/share/ceph-rbd
readOnly: false
volumes:
- name: ceph-volume
persistentVolumeClaim:
claimName: ceph-k8s-claim
END
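Applying the pod and checking that the RBD volume is mounted, a sketch:
kubectl apply -f kube-ceph-pod.yaml
kubectl get pods kube-ceph-pod
kubectl exec kube-ceph-pod -- df -h /usr/share/ceph-rbd # the 1Gi rbd device should show up here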
[Service]
Environment="ALL_PROXY=socks://127.0.0.1:8080/" "FTP_PROXY=ftp://127.0.0.1:8080/" "HTTPS_PROXY=http://127.0.0.1:8080/" "HTTP_PROXY=http://127.0.0.1:8080/" "NO_PROXY=localhost,127.0.0.0/8,172.16.0.0/16,192.168.0.0/16" "all_proxy=socks://127.0.0.1:8080/" "ftp_proxy=ftp://127.0.0.1:8080/" "http_proxy=http://127.0.0.1:8080/" "https_proxy=http://127.0.0.1:8080/" "no_proxy=localhost,127.0.0.0/8,172.16.0.0/16,192.168.0.0/16"
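These Environment lines typically go into a systemd drop-in; the path below is an assumption, adjust to taste. After editing, reload and restart:
mkdir -p /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf # paste the [Service] block above
systemctl daemon-reload
systemctl restart docker
systemctl show --property=Environment docker # verify the proxy variables took effect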
RUN https_proxy=http://127.0.0.1:8080/ pip install -r requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
apt-get install git cmake build-essential libgcrypt11-dev libyajl-dev libboost-all-dev libcurl4-openssl-dev libexpat1-dev libcppunit-dev binutils-dev pkg-config
mkdir ~/grive
cd ~/grive
git clone https://github.com/vitalif/grive2.git
mkdir grive2/build
cd grive2/build
cmake ..
make -j4
sudo make install
mkdir ~/mydir
cd ~/mydir
/usr/local/bin/grive -a
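Note that -a only performs the one-time authorization; afterwards, a plain run in the same directory syncs it, a sketch:
cd ~/mydir && /usr/local/bin/grive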
Copy & paste the URL into your browser to get the auth code (40 chars), then paste the code back into the console...
#!/bin/bash
export LANG=zh_TW.Big5
in_file=1.txt
# case 1
lines=$(cat $in_file | awk -F, '{print$2,$3}')
echo "$lines"
# case 2
awk -F, '{print $2,$3}' $in_file | while read line
do
echo $line
done
I expected the output of the two cases to be identical...
[kenny@vmtest-linux tmp]$ locale
LANG=zh_TW.Big5
LC_CTYPE="zh_TW.Big5"
LC_NUMERIC="zh_TW.Big5"
LC_TIME="zh_TW.Big5"
LC_COLLATE="zh_TW.Big5"
LC_MONETARY="zh_TW.Big5"
LC_MESSAGES="zh_TW.Big5"
LC_PAPER="zh_TW.Big5"
LC_NAME="zh_TW.Big5"
LC_ADDRESS="zh_TW.Big5"
LC_TELEPHONE="zh_TW.Big5"
LC_MEASUREMENT="zh_TW.Big5"
LC_IDENTIFICATION="zh_TW.Big5"
LC_ALL=
[kenny@vmtest-linux tmp]$ file 1.txt
1.txt: ISO-8859 text
[kenny@vmtest-linux tmp]$ cat 1.txt
x1230,葉小姐,usa@xxx.com.tw,89,0,16/06/01,
x1978,許小姐,ally@xxx.com.tw,90,0,16/06/01,
x8657,陳先生,cbk@xxx.com.tw,3,0,16/06/01,
x1467,鄭成功,cck@xxx.com.tw,3,0,16/06/01,
[kenny@vmtest-linux tmp]$ ./1.sh
葉小姐 usa@xxx.com.tw
許小姐 ally@xxx.com.tw
陳先生 cbk@xxx.com.tw
鄭成功 cck@xxx.com.tw
葉小姐 usa@xxx.com.tw
酗p姐 ally@xxx.com.tw
陳先生 cbk@xxx.com.tw
鄭成?cck@xxx.com.tw
external_url 'https://gitlab.example.com'
...
nginx['redirect_http_to_https'] = true
nginx['ssl_client_certificate'] = "/etc/gitlab/ssl/ca.crt" # Most root CA's are included by default
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.example.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.example.com.key"
...
nginx['custom_gitlab_server_config'] = "location ^~ /.well-known {\n allow all;\n}\n"
...
gitlab-ctl start
rsa-key-size = 4096
email = root@example.com
domains = gitlab.example.com
webroot-path = /opt/gitlab/embedded/service/gitlab-rails/public
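With the ini above in place, the initial certificate can be requested with something like the following (a sketch; the exact flags depend on your letsencrypt-auto version):
/opt/letsencrypt/letsencrypt-auto certonly -a webroot --config /usr/local/etc/le-renew-webroot.ini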
cd /opt/letsencrypt/
#!/bin/bash
date
web_service='nginx'
config_file="/usr/local/etc/le-renew-webroot.ini"
...
chmod +x /usr/local/sbin/le-renew-webroot
30 2 * * 1 root /usr/local/sbin/le-renew-webroot >> /var/log/le-renewal.log
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx
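The {SSHA} value above is a placeholder; a hash can be generated with slappasswd, a sketch:
slappasswd -s 'YourSecretPassword' # paste the {SSHA}... output into olcRootPW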
ldapadd -Y EXTERNAL -H ldapi:/// -f chrootpw.ldif
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
read by dn.base="cn=Manager,dc=example,dc=com" read by * none
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=example,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=example,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxx
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=example,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=example,dc=com" write by * read
ldapmodify -Y EXTERNAL -H ldapi:/// -f chdomain.ldif
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: Example dot Com
dc: Example
dn: cn=Manager,dc=example,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager
dn: ou=People,dc=example,dc=com
objectClass: organizationalUnit
ou: People
dn: ou=Group,dc=example,dc=com
objectClass: organizationalUnit
ou: Group
ldapadd -x -D cn=Manager,dc=example,dc=com -W -f basedomain.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/ldap.example.com.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/ldap.example.com.key
ldapmodify -Y EXTERNAL -H ldapi:/// -f mod_ssl.ldif
SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"
#-- end of TLS configuration --#
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: uidNumber eq
olcDbIndex: gidNumber eq
olcDbIndex: loginShell eq
olcDbIndex: uid eq,pres,sub
olcDbIndex: memberUid eq,pres,sub
olcDbIndex: uniqueMember eq,pres
olcDbIndex: sambaSID eq
olcDbIndex: sambaPrimaryGroupSID eq
olcDbIndex: sambaGroupType eq
olcDbIndex: sambaSIDList eq
olcDbIndex: sambaDomainName eq
olcDbIndex: default sub
ldapmodify -Y EXTERNAL -H ldapi:/// -f samba_indexes.ldif
[global]
workgroup = EXAMPLE
netbios name = ldap
deadtime = 10
log level = 1
log file = /var/log/samba/log.%m
max log size = 5000
debug pid = yes
debug uid = yes
syslog = 0
utmp = yes
security = user
domain logons = yes
os level = 64
logon path =
logon home =
logon drive =
logon script =
passdb backend = ldapsam:"ldap://ldap.example.com/"
ldap ssl = no
ldap admin dn = cn=Manager,dc=example,dc=com
ldap delete dn = no
ldap password sync = yes
ldap suffix = dc=example,dc=com
ldap user suffix = ou=People
ldap group suffix = ou=Group
ldap machine suffix = ou=Computers
ldap idmap suffix = ou=Idmap
add user script = /usr/sbin/smbldap-useradd -m '%u' -t 1
rename user script = /usr/sbin/smbldap-usermod -r '%unew' '%uold'
delete user script = /usr/sbin/smbldap-userdel '%u'
set primary group script = /usr/sbin/smbldap-usermod -g '%g' '%u'
add group script = /usr/sbin/smbldap-groupadd -p '%g'
delete group script = /usr/sbin/smbldap-groupdel '%g'
add user to group script = /usr/sbin/smbldap-groupmod -m '%u' '%g'
delete user from group script = /usr/sbin/smbldap-groupmod -x '%u' '%g'
add machine script = /usr/sbin/smbldap-useradd -w '%u' -t 1
admin users = domainadmin
[NETLOGON]
path = /var/lib/samba/netlogon
browseable = no
share modes = no
[PROFILES]
path = /var/lib/samba/profiles
browseable = no
writeable = yes
create mask = 0611
directory mask = 0700
profile acls = yes
csc policy = disable
map system = yes
map hidden = yes
[homes]
comment = Home Directories
browseable = no
writable = yes
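Samba also needs the password for the ldap admin dn stored in its secrets.tdb, otherwise it cannot bind to the directory (the password here is a placeholder):
smbpasswd -w YourLdapManagerPassword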
mkdir /var/lib/samba/{netlogon,profiles}
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManWorkstation\Parameters]
"DomainCompatibilityMode"=dword:00000001
"DNSNameResolutionRequired"=dword:00000000
# Double-click the file to import the registry settings
port 1194
proto udp
dev tap
ca ca.crt
cert vpnserver.example.com.crt
key vpnserver.example.com.key # This file should be kept secret
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3
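Since the server pushes redirect-gateway, the host also has to forward and NAT the clients' traffic; a sketch (the external interface name eth0 is an assumption):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE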
*note: I use tap rather than tun as the device here.
export EASY_RSA="`pwd`"
export OPENSSL="openssl"
export PKCS11TOOL="pkcs11-tool"
export GREP="grep"
export KEY_CONFIG=`$EASY_RSA/whichopensslcnf $EASY_RSA`
export KEY_DIR="$EASY_RSA/keys"
echo NOTE: If you run ./clean-all, I will be doing a rm -rf on $KEY_DIR
export PKCS11_MODULE_PATH="dummy"
export PKCS11_PIN="dummy"
export KEY_SIZE=2048
export CA_EXPIRE=3650
export KEY_EXPIRE=3650
export KEY_COUNTRY="TW"
export KEY_PROVINCE="Taiwan"
export KEY_CITY="Tainan"
export KEY_ORG="ExampleDotCom"
export KEY_EMAIL="root@example.com"
export KEY_OU="IT"
export KEY_NAME="EasyRSA"
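With these vars set (and openssl.cnf copied as shown below), the CA and certificates are typically built with the easy-rsa 2.x helpers; a sketch:
cd /etc/openvpn/easy-rsa
source ./vars
./clean-all
./build-ca
./build-key-server vpnserver.example.com
./build-dh
./build-key linuxclient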
cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf
client
dev tap
proto udp
remote 1.2.3.4 1194
resolv-retry infinite
nobind
persist-key
persist-tun
comp-lzo
verb 3
ca /etc/openvpn/ca.crt
cert /etc/openvpn/linuxclient.crt
key /etc/openvpn/linuxclient.key
* Note: 1.2.3.4 is the server's IP.
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
Listen 443
<VirtualHost *:443>
RequestReadTimeout header=0,MinRate=500 body=0,MinRate=500
ServerName proxy.example.com:443
DocumentRoot /var/www/proxytunnel
ServerAdmin root@example.com
RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^CONNECT [NC]
RewriteRule ^/(.*)$ - [F,L]
ProxyRequests On
ProxyBadHeader Ignore
ProxyVia Full
AllowCONNECT 22
<Proxy *>
Order deny,allow
#Allow from all
Deny from all
</Proxy>
<ProxyMatch (proxy\.example\.com)>
Order allow,deny
Allow from all
</ProxyMatch>
LogLevel warn
ErrorLog logs/proxy.example.com-proxy_error_log
CustomLog logs/proxy.example.com-proxy_request_log combined
</VirtualHost>
cp -a /var/www/html /var/www/proxytunnel
Host proxy.example.com
Hostname proxy.example.com
ProxyCommand /usr/bin/proxytunnel -p localproxy:3128 -r proxy.example.com:443 -d %h:%p -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)"
ServerAliveInterval 30
TCPKeepAlive no
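With this stanza in ~/.ssh/config the tunnel is transparent; connecting is simply (the username is a placeholder):
ssh someuser@proxy.example.com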
*Note: write the ProxyCommand on a single line; do not use \ to break lines.
docker push 1.2.3.4:5000/test
The push refers to a repository [1.2.3.4:5000/test] (len: 1)
unable to ping registry endpoint https://1.2.3.4:5000/v0/
v2 ping attempt failed with error: Get https://1.2.3.4:5000/v2/: dial tcp 1.2.3.4:5000: connection refused
v1 ping attempt failed with error: Get https://1.2.3.4:5000/v1/_ping: dial tcp 1.2.3.4:5000: connection refused
OPTIONS='--selinux-enabled --insecure-registry 1.2.3.4:5000'
systemctl restart docker
sudo yum install curl openssh-server
sudo systemctl enable sshd
sudo systemctl start sshd
sudo yum install postfix
sudo systemctl enable postfix
sudo systemctl start postfix
sudo firewall-cmd --permanent --add-service=http
sudo systemctl reload firewalld
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce
*note: curl may fail if behind a proxy/firewall.
sudo gitlab-ctl reconfigure
*note: you may want to change the URL if your server name is localhost.localdomain.
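For example, a sketch of that change (the hostname is a placeholder): edit /etc/gitlab/gitlab.rb and set
external_url 'http://gitlab.example.com'
then run:
sudo gitlab-ctl reconfigure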