Let's look at how to deploy a Node/Express microservice (along with Postgres) to a Kubernetes cluster on Google Kubernetes Engine (GKE).
Dependencies:
By the end of this tutorial, you should be able to:
When you move from deploying containers on a single machine to deploying them across a number of machines, you'll need an orchestration tool to manage (and automate) the arrangement, coordination, and availability of the containers across the entire system.
Orchestration tools help with:
This is where Kubernetes fits in along with a number of other orchestration tools, like Docker Swarm, ECS, Mesos, and Nomad.
Which one should you use?
| Tool | Pros | Cons |
|---|---|---|
| Kubernetes | large community, flexible, most features, hip | complex setup, high learning curve, hip |
| Docker Swarm | easy to set up, perfect for smaller clusters | limited by the Docker API |
| ECS | fully-managed service, integrated with AWS | vendor lock-in |
There are also a number of managed Kubernetes services on the market:
For more, review the Choosing the Right Containerization and Cluster Management Tool blog post.
Before diving in, let's look at some of the basic building blocks that you have to work with from the Kubernetes API:
For example, labels are commonly used to tag resources by environment, release, and tier:
- environment: `dev`, `test`, `prod`
- release: `beta`, `1.2.1`
- tier: `client`, `server`, `db`
For more, check out the Learn Kubernetes Basics tutorial.
Start by cloning the app from the https://github.com/testdrivenio/node-kubernetes repo:
$ git clone https://github.com/testdrivenio/node-kubernetes
$ cd node-kubernetes
Build the image and spin up the container:
$ docker-compose up -d --build
Apply the migrations and seed the database:
$ docker-compose exec web knex migrate:latest
$ docker-compose exec web knex seed:run
Test out the following endpoints...
Get all todos:
$ curl http://localhost:3000/todos
[
{
"id": 1,
"title": "Do something",
"completed": false
},
{
"id": 2,
"title": "Do something else",
"completed": false
}
]
Add a new todo:
$ curl -d '{"title":"something exciting", "completed":"false"}' \
-H "Content-Type: application/json" -X POST http://localhost:3000/todos
"Todo added!"
Get a single todo:
$ curl http://localhost:3000/todos/3
[
{
"id": 3,
"title": "something exciting",
"completed": false
}
]
Update a todo:
$ curl -d '{"title":"something exciting", "completed":"true"}' \
-H "Content-Type: application/json" -X PUT http://localhost:3000/todos/3
"Todo updated!"
Delete a todo:
$ curl -X DELETE http://localhost:3000/todos/3
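The curl calls above all exchange the same todo shape: an object with a title string and a completed flag (the POST example sends completed as the string "false"). As a rough sketch of the kind of payload validation the Express handlers might perform — the function name and rules here are assumptions for illustration, not the repo's actual code:

```javascript
// Hypothetical validator for the todo payloads used by the API above.
// The field names (title, completed) come from the curl examples; the
// validation rules themselves are an assumption, not the repo's code.
function validateTodo(body) {
  if (typeof body !== 'object' || body === null) return false;
  if (typeof body.title !== 'string' || body.title.length === 0) return false;
  // The POST curl example sends completed as the string "false", so
  // accept both booleans and boolean-like strings.
  return (
    typeof body.completed === 'boolean' ||
    body.completed === 'true' ||
    body.completed === 'false'
  );
}

// Example: the payload from the POST /todos call above.
console.log(validateTodo({ title: 'something exciting', completed: 'false' })); // true
```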
Take a quick look at the code before moving on:
├── .dockerignore
├── .gitignore
├── Dockerfile
├── README.md
├── docker-compose.yml
├── knexfile.js
├── kubernetes
│ ├── node-deployment-updated.yaml
│ ├── node-deployment.yaml
│ ├── node-service.yaml
│ ├── postgres-deployment.yaml
│ ├── postgres-service.yaml
│ ├── secret.yaml
│ ├── volume-claim.yaml
│ └── volume.yaml
├── package-lock.json
├── package.json
└── src
├── db
│ ├── knex.js
│ ├── migrations
│ │ └── 20181009160908_todos.js
│ └── seeds
│ └── todos.js
└── server.js
In this section, we'll:
Before beginning, you'll need a Google Cloud Platform (GCP) account. If you're new to GCP, Google provides a free trial with a $300 credit.
Start by installing the Google Cloud SDK.
If you’re on a Mac, we recommend installing the SDK with Homebrew:
$ brew update
$ brew install google-cloud-sdk --cask
Test:
$ gcloud --version
Google Cloud SDK 365.0.1
bq 2.0.71
core 2021.11.19
gsutil 5.5
Once installed, run gcloud init to configure the SDK so that it has access to your GCP credentials. You'll also need to either pick an existing GCP project or create a new project to work with.
Set the project:
$ gcloud config set project <PROJECT_ID>
Finally, install kubectl:
$ gcloud components install kubectl
Next, let's create a cluster on Kubernetes Engine:
$ gcloud container clusters create node-kubernetes \
--num-nodes=3 --zone us-central1-a --machine-type g1-small
This will create a three-node cluster called node-kubernetes in the us-central1-a zone with g1-small machines. It will take a few minutes to spin up.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-node-kubernetes-default-pool-139e0343-0hbt Ready <none> 75s v1.21.5-gke.1302
gke-node-kubernetes-default-pool-139e0343-p4s3 Ready <none> 75s v1.21.5-gke.1302
gke-node-kubernetes-default-pool-139e0343-rxnc Ready <none> 75s v1.21.5-gke.1302
Connect the kubectl client to the cluster:
$ gcloud container clusters get-credentials node-kubernetes --zone us-central1-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-kubernetes.
For help with Kubernetes Engine, review the official docs.
Build the local Docker image, and then push the Node API to the Container Registry, using the gcr.io/<PROJECT_ID>/<IMAGE_NAME>:<TAG> Docker tag format:
$ gcloud auth configure-docker
$ docker build -t gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 .
$ docker push gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1
Be sure to replace <PROJECT_ID> with your project's ID.
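The tag format above is just string concatenation of the registry host, project ID, image name, and tag. A tiny helper makes the pieces explicit — buildImageRef is a hypothetical name for illustration, not a gcloud or Docker API:

```javascript
// Assembles the gcr.io/<PROJECT_ID>/<IMAGE_NAME>:<TAG> tag format used
// above. Purely illustrative; gcloud/Docker do not expose this helper.
function buildImageRef(projectId, imageName, tag) {
  return `gcr.io/${projectId}/${imageName}:${tag}`;
}

// Example with a hypothetical project ID:
console.log(buildImageRef('my-gcp-project', 'node-kubernetes', 'v0.0.1'));
// gcr.io/my-gcp-project/node-kubernetes:v0.0.1
```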
kubernetes/node-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: node
labels:
name: node
spec:
replicas: 1
selector:
matchLabels:
app: node
template:
metadata:
labels:
app: node
spec:
containers:
- name: node
image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1
env:
- name: NODE_ENV
value: "development"
- name: PORT
value: "3000"
restartPolicy: Always
Again, be sure to replace <PROJECT_ID> with your project's ID.
What's happening here?

- `metadata`
  - The `name` field defines the deployment name - `node`
  - `labels` defines the labels for the deployment - `name: node`
- `spec`
  - `replicas` defines the number of pods to run - `1`
  - `selector` specifies a label for the pod (which must match `.spec.template.metadata.labels`)
  - `template`
    - `metadata.labels` indicates which labels should be assigned to the pod - `app: node`
    - `spec.containers` defines the containers associated with each pod
    - `restartPolicy` defines the restart policy - `Always`

So, this will spin up a single pod named `node` via the `gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1` image that we just pushed.
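The selector/label relationship is equality-based: every key/value pair in the deployment's matchLabels must be present on a pod for the deployment to manage it. A minimal sketch of that matching rule — matchesSelector is a hypothetical helper, not part of any Kubernetes client library:

```javascript
// Sketch of Kubernetes equality-based selector matching: every
// key/value pair in matchLabels must appear on the pod's labels.
// matchesSelector is illustrative only.
function matchesSelector(matchLabels, podLabels) {
  return Object.entries(matchLabels).every(
    ([key, value]) => podLabels[key] === value
  );
}

// The deployment above selects on app: node, which matches the pod
// template's labels, so the deployment manages the pods it creates.
console.log(matchesSelector({ app: 'node' }, { app: 'node' })); // true
console.log(matchesSelector({ app: 'node' }, { app: 'postgres' })); // false
```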
Create:
$ kubectl create -f ./kubernetes/node-deployment.yaml
Verify:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
node 1/1 1 1 32s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
node-59646c8856-72blj 1/1 Running 0 18s
You can view the container logs via kubectl logs <POD_NAME>:
$ kubectl logs node-6fbfd984d-7pg92
> start
> nodemon src/server.js
[nodemon] 2.0.15
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node src/server.js`
Listening on port: 3000
You can also view these resources from the Google Cloud console:
To access your API externally, let's create a load balancer via a Service.
kubernetes/node-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: node
labels:
service: node
spec:
selector:
app: node
type: LoadBalancer
ports:
- port: 3000
This will create a service named node, which will find any pods with the label app: node and expose port 3000 to the outside world.
Create:
$ kubectl create -f ./kubernetes/node-service.yaml
This will create a new load balancer on Google Cloud:
Get the external IP:
$ kubectl get service node
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
node LoadBalancer 10.40.10.162 35.222.45.193 3000:31315/TCP 78s
Test it out:
"Something went wrong."
Since the database isn't set up yet, you should see this error when you hit the second endpoint.
Secrets are used to manage sensitive info such as passwords, API tokens, and SSH keys. We'll use a secret to store our Postgres database credentials.
kubernetes/secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
user: c2FtcGxl
password: cGxlYXNlY2hhbmdlbWU=
The user and password fields are base64-encoded strings:
$ echo -n "pleasechangeme" | base64
cGxlYXNlY2hhbmdlbWU=
$ echo -n "sample" | base64
c2FtcGxl
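The same encoding can be done in Node itself with the built-in Buffer class, which can be handy if you're generating secret manifests from a script (a sketch; the tutorial itself just uses echo and base64):

```javascript
// Same encoding as the echo/base64 commands above, using Node's
// built-in Buffer (no external dependencies).
const user = Buffer.from('sample', 'utf8').toString('base64');
const password = Buffer.from('pleasechangeme', 'utf8').toString('base64');

console.log(user);     // c2FtcGxl
console.log(password); // cGxlYXNlY2hhbmdlbWU=

// Decoding works the same way in reverse:
console.log(Buffer.from(user, 'base64').toString('utf8')); // sample
```

Note that base64 is an encoding, not encryption: anyone with read access to the Secret can decode the values.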
Create the secret:
$ kubectl apply -f ./kubernetes/secret.yaml
Verify:
$ kubectl describe secret postgres-credentials
Name: postgres-credentials
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 14 bytes
user: 6 bytes
Since containers are ephemeral, we need to configure a volume, via a PersistentVolume and a PersistentVolumeClaim, to store the Postgres data outside of the pod. Without a volume, you will lose your data when the pod goes down.
Create a persistent disk:
$ gcloud compute disks create pg-data-disk --size 50GB --zone us-central1-a
kubernetes/volume.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
name: postgres-pv
spec:
capacity:
storage: 50Gi
storageClassName: standard
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: pg-data-disk
fsType: ext4
This configuration will create a 50 GB PersistentVolume with an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single node.
Create the volume:
$ kubectl apply -f ./kubernetes/volume.yaml
Check the status:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
postgres-pv 50Gi RWO Retain Available standard 6s
kubernetes/volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
volumeName: postgres-pv
This will create a claim on the PersistentVolume (which we just created) that the Postgres pod will be able to use to attach a volume.
Create:
$ kubectl apply -f ./kubernetes/volume-claim.yaml
View:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-pvc Bound postgres-pv 50Gi RWO standard 6s
With the database credentials set up, along with a volume, we can now configure the Postgres database itself.
kubernetes/postgres-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
name: database
spec:
replicas: 1
selector:
matchLabels:
service: postgres
template:
metadata:
labels:
service: postgres
spec:
containers:
- name: postgres
image: postgres:14-alpine
volumeMounts:
- name: postgres-volume-mount
mountPath: /var/lib/postgresql/data
subPath: postgres
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
restartPolicy: Always
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
Here, along with spinning up a new pod via the postgres:14-alpine image, this configuration mounts the PersistentVolumeClaim from the volumes section into the "/var/lib/postgresql/data" directory defined in the volumeMounts section.
Create:
$ kubectl create -f ./kubernetes/postgres-deployment.yaml
Verify:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
node-59646c8856-72blj 1/1 Running 0 20m
postgres-64d485d86b-vtrlh 1/1 Running 0 25s
Create the todos database:
$ kubectl exec <POD_NAME> --stdin --tty -- createdb -U sample todos
kubernetes/postgres-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
service: postgres
spec:
selector:
service: postgres
type: ClusterIP
ports:
- port: 5432
This will create a ClusterIP service so that other pods can connect to it. It won't be accessible outside the cluster.
Create the service:
$ kubectl create -f ./kubernetes/postgres-service.yaml
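Inside the cluster, the service name doubles as a DNS name, so the Node app can reach the database at host postgres on port 5432. A rough sketch of a connection config assembled from the env vars the deployments set — the function name is hypothetical, and the repo's actual knexfile.js may be organized differently:

```javascript
// Hypothetical sketch of a Postgres connection config built from the
// env vars set in the deployment manifests. The real knexfile.js in
// the repo may differ.
function buildDbConnection(env) {
  return {
    host: 'postgres', // the ClusterIP service name, resolved via cluster DNS
    port: 5432,       // the port exposed by postgres-service.yaml
    database: 'todos',
    user: env.POSTGRES_USER,
    password: env.POSTGRES_PASSWORD,
  };
}

// Example using the credentials stored in the secret:
const conn = buildDbConnection({
  POSTGRES_USER: 'sample',
  POSTGRES_PASSWORD: 'pleasechangeme',
});
console.log(conn.host); // postgres
```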
Next, add the database credentials to the node deployment:
kubernetes/node-deployment-updated.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: node
labels:
name: node
spec:
replicas: 1
selector:
matchLabels:
app: node
template:
metadata:
labels:
app: node
spec:
containers:
- name: node
image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 # update
env:
- name: NODE_ENV
value: "development"
- name: PORT
value: "3000"
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
restartPolicy: Always
Create:
$ kubectl delete -f ./kubernetes/node-deployment.yaml
$ kubectl create -f ./kubernetes/node-deployment-updated.yaml
Verify:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
node-64c45d449b-9m7pf 1/1 Running 0 9s
postgres-64d485d86b-vtrlh 1/1 Running 0 4m7s
Using the node pod, update the database:
$ kubectl exec <POD_NAME> -- knex migrate:latest
$ kubectl exec <POD_NAME> -- knex seed:run
Test it out again:
You should now see the todos:
[
{
"id": 1,
"title": "Do something",
"completed": false
},
{
"id": 2,
"title": "Do something else",
"completed": false
}
]
In this post, we looked at how to run a Node-based microservice on Kubernetes with GKE. You should now have a basic understanding of how Kubernetes works and be able to deploy a cluster with a running app to Google Cloud.
Make sure to bring down the resources (the cluster, the persistent disk, and the images on the Container Registry) when you're done to avoid incurring unnecessary charges:
$ kubectl delete -f ./kubernetes/node-service.yaml
$ kubectl delete -f ./kubernetes/node-deployment-updated.yaml
$ kubectl delete -f ./kubernetes/secret.yaml
$ kubectl delete -f ./kubernetes/volume-claim.yaml
$ kubectl delete -f ./kubernetes/volume.yaml
$ kubectl delete -f ./kubernetes/postgres-deployment.yaml
$ kubectl delete -f ./kubernetes/postgres-service.yaml
$ gcloud container clusters delete node-kubernetes --zone us-central1-a
$ gcloud compute disks delete pg-data-disk --zone us-central1-a
$ gcloud container images delete gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1
You can find the code in the node-kubernetes repo on GitHub.
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey done by Stackrox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as the survey data shows, more and more companies are jumping into containerization for their apps. If you're among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
A multi-cloud approach is nothing but leveraging two or more cloud platforms for meeting the various business requirements of an enterprise. The multi-cloud IT environment incorporates different clouds from multiple vendors and negates the dependence on a single public cloud service provider. Thus enterprises can choose specific services from multiple public clouds and reap the benefits of each.
Given its affordability and agility, most enterprises opt for a multi-cloud approach in cloud computing now. A 2018 survey on the public cloud services market points out that 81% of the respondents use services from two or more providers. Subsequently, the cloud computing services market has reported incredible growth in recent times. The worldwide public cloud services market is all set to reach $500 billion in the next four years, according to IDC.
By choosing multi-cloud solutions strategically, enterprises can optimize the benefits of cloud computing and aim for some key competitive advantages. They can avoid the lengthy and cumbersome processes involved in buying, installing and testing high-priced systems. The IaaS and PaaS solutions have become a windfall for the enterprise’s budget as it does not incur huge up-front capital expenditure.
However, cost optimization is still a challenge while facilitating a multi-cloud environment, and a large number of enterprises end up overpaying, with or without realizing it. The tips below will help you ensure that money is spent wisely on cloud computing services.
Most organizations tend to get wrong with simple things which turn out to be the root cause for needless spending and resource wastage. The first step to cost optimization in your cloud strategy is to identify underutilized resources that you have been paying for.
Enterprises often continue to pay for resources that have been purchased earlier but are no longer useful. Identifying such unused and unattached resources and deactivating it on a regular basis brings you one step closer to cost optimization. If needed, you can deploy automated cloud management tools that are largely helpful in providing the analytics needed to optimize the cloud spending and cut costs on an ongoing basis.
Another key cost optimization strategy is to identify the idle computing instances and consolidate them into fewer instances. An idle computing instance may require a CPU utilization level of 1-5%, but you may be billed by the service provider for 100% for the same instance.
Every enterprise will have such non-production instances that take up unnecessary storage space and lead to overpaying. Re-evaluating your resource allocations regularly and removing unnecessary storage may help you save money significantly. Resource allocation is not only a matter of CPU and memory; it is also linked to storage, network, and various other factors.
The key to efficient cost reduction in cloud computing technology lies in proactive monitoring. A comprehensive view of the cloud usage helps enterprises to monitor and minimize unnecessary spending. You can make use of various mechanisms for monitoring computing demand.
For instance, you can use a heatmap to visualize the highs and lows in computing demand. A heatmap indicates start and stop times, which in turn lead to reduced costs; by following one, you can tell whether it is safe to shut down servers on holidays or weekends. You can also deploy automated tools that help organizations schedule instances to start and stop.
If you're looking to learn about Google Cloud, in depth or in general, with or without any prior knowledge of cloud computing, then you should definitely check this quest out: Link.
Google Cloud Essentials is an introductory-level Quest which is useful for learning the basic fundamentals of Google Cloud. From writing Cloud Shell commands and deploying my first virtual machine, to running applications on Kubernetes Engine or with load balancing, Google Cloud Essentials is a prime introduction to the platform's basic features.
Let’s see what was the Quest Outline:
A Tour of Qwiklabs and Google Cloud was the first hands-on lab, which basically gives an overview of Google Cloud. There were a few questions to answer that check your understanding of the topic, and the rest was about accessing the Google Cloud console, projects in the console, roles and permissions, Cloud Shell, and so on.
**Creating a Virtual Machine** was the second lab: create a virtual machine and connect an NGINX web server to it. Compute Engine lets you create virtual machines whose resources live in certain regions or zones. The NGINX web server is used as a load balancer, and the job of a load balancer is to distribute workloads across multiple computing resources. Creating these two, along with a question, marked the end of the second lab.
Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.
This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.
Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.
In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements were managed independently. This required accurately predicting performance and the number of users before deployment and taking down applications to update operating systems or applications. There were many chances for error.
Kubernetes can provide an alternative to the old method, enabling teams to deploy applications independent of the environment in containers. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.
In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.
The Compelling Attributes of Multi Cloud Kubernetes
Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.
Stability
In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.
In this Lab, we will configure Cloud Content Delivery Network (Cloud CDN) for a Cloud Storage bucket and verify caching of an image. Cloud CDN uses Google’s globally distributed edge points of presence to cache HTTP(S) load-balanced content close to our users. Caching content at the edges of Google’s network provides faster delivery of content to our users while reducing serving costs.
For an up-to-date list of Google’s Cloud CDN cache sites, see https://cloud.google.com/cdn/docs/locations.
Cloud CDN content can originate from different types of backends:
In this lab, we will configure a Cloud Storage bucket as the backend.