小泉 晃

Deploying a Node.js App to Kubernetes on Google Cloud

Let's look at how to deploy a Node/Express microservice (along with Postgres) to a Kubernetes cluster on Google Kubernetes Engine (GKE).

Dependencies:

  • Docker v20.10.10
  • Kubectl v1.20.8
  • Google Cloud SDK v365.0.1

Objectives

By the end of this tutorial, you should be able to:

  1. Explain what container orchestration is and why you may need to use an orchestration tool
  2. Discuss the pros and cons of using Kubernetes over other orchestration tools like Docker Swarm and AWS Elastic Container Service (ECS)
  3. Explain the following Kubernetes primitives: Node, Pod, Service, Label, Deployment, Ingress, and Volume
  4. Spin up a Node-based microservice locally with Docker Compose
  5. Configure a Kubernetes cluster to run on Google Cloud Platform (GCP)
  6. Set up a volume to hold Postgres data within a Kubernetes cluster
  7. Use Kubernetes Secrets to manage sensitive information
  8. Run Node and Postgres on Kubernetes
  9. Expose a Node API to external users via a load balancer

What Is Container Orchestration?

As you move from deploying containers on a single machine to deploying them across a number of machines, you'll need an orchestration tool to manage (and automate) the arrangement, coordination, and availability of the containers across the entire system.

Orchestration tools help with:

  1. Cross-server container communication
  2. Horizontal scaling
  3. Service discovery
  4. Load balancing
  5. Security/TLS
  6. Zero-downtime deploys
  7. Rollbacks
  8. Logging
  9. Monitoring

This is where Kubernetes fits in along with a number of other orchestration tools, like Docker Swarm, ECS, Mesos, and Nomad.

Which one should you use?

  • use Kubernetes if you need to manage large, complex clusters
  • use Docker Swarm if you are just getting started and/or need to manage small to medium-sized clusters
  • use ECS if you're already using a number of AWS services

Tool           Pros                                           Cons
Kubernetes     large community, flexible, most features, hip  complex setup, high learning curve, hip
Docker Swarm   easy to set up, perfect for smaller clusters   limited by the Docker API
ECS            fully-managed service, integrated with AWS     vendor lock-in

There's also a number of managed Kubernetes services on the market:

  1. Google Kubernetes Engine (GKE)
  2. Amazon Elastic Kubernetes Service (EKS)
  3. Azure Kubernetes Service (AKS)

For more, review the Choosing the Right Containerization and Cluster Management Tool blog post.

Kubernetes Concepts

Before diving in, let's look at some of the basic building blocks that you have to work with from the Kubernetes API:

  1. A Node is a worker machine provisioned to run Kubernetes. Each Node is managed by the Kubernetes master.
  2. A Pod is a logical, tightly-coupled group of application containers that run on a Node. Containers in a Pod are deployed together and share resources (like data volumes and network addresses). Multiple Pods can run on a single Node.
  3. A Service is a logical set of Pods that perform a similar function. It enables load balancing and service discovery. It's an abstraction layer over the Pods; Pods are meant to be ephemeral while Services are much more persistent.
  4. Deployments are used to describe the desired state of Kubernetes. They dictate how Pods are created, deployed, and replicated.
  5. Labels are key/value pairs that are attached to resources (like Pods) and are used to organize related resources. You can think of them like CSS selectors. For example:
    • Environment - dev, test, prod
    • App version - beta, 1.2.1
    • Type - client, server, db
  6. Ingress is a set of routing rules used to control external access to Services based on the request host or path.
  7. Volumes are used to persist data beyond the life of a container. They are especially important for stateful applications like Redis and Postgres.
    • A PersistentVolume defines a storage volume independent of the normal Pod lifecycle. It's managed outside of the particular Pod that it resides in.
    • A PersistentVolumeClaim is a request by a user to use the PersistentVolume.

For more, review the Learn Kubernetes Basics tutorial.

Project Setup

Start by cloning down the app from the https://github.com/testdrivenio/node-kubernetes repo:

$ git clone https://github.com/testdrivenio/node-kubernetes
$ cd node-kubernetes

Build the images and spin up the containers:

$ docker-compose up -d --build

Apply the migrations and seed the database:

$ docker-compose exec web knex migrate:latest
$ docker-compose exec web knex seed:run

Test out the following endpoints...

Get all todos:

$ curl http://localhost:3000/todos

[
  {
    "id": 1,
    "title": "Do something",
    "completed": false
  },
  {
    "id": 2,
    "title": "Do something else",
    "completed": false
  }
]

Add a new todo:

$ curl -d '{"title":"something exciting", "completed":"false"}' \
    -H "Content-Type: application/json" -X POST http://localhost:3000/todos

"Todo added!"

Get a single todo:

$ curl http://localhost:3000/todos/3

[
  {
    "id": 3,
    "title": "something exciting",
    "completed": false
  }
]

Update a todo:

$ curl -d '{"title":"something exciting", "completed":"true"}' \
    -H "Content-Type: application/json" -X PUT http://localhost:3000/todos/3

"Todo updated!"

Delete a todo:

$ curl -X DELETE http://localhost:3000/todos/3

Take a quick look at the code before moving on:

├── .dockerignore
├── .gitignore
├── Dockerfile
├── README.md
├── docker-compose.yml
├── knexfile.js
├── kubernetes
│   ├── node-deployment-updated.yaml
│   ├── node-deployment.yaml
│   ├── node-service.yaml
│   ├── postgres-deployment.yaml
│   ├── postgres-service.yaml
│   ├── secret.yaml
│   ├── volume-claim.yaml
│   └── volume.yaml
├── package-lock.json
├── package.json
└── src
    ├── db
    │   ├── knex.js
    │   ├── migrations
    │   │   └── 20181009160908_todos.js
    │   └── seeds
    │       └── todos.js
    └── server.js

Google Cloud Setup

In this section, we'll:

  1. Configure the Google Cloud SDK
  2. Install kubectl, a CLI tool used for running commands against Kubernetes clusters
  3. Create a GCP project

Before beginning, you'll need a Google Cloud Platform (GCP) account. If you're new to GCP, Google provides a free trial with a $300 credit.

Start by installing the Google Cloud SDK.

If you’re on a Mac, we recommend installing the SDK with Homebrew:

$ brew update
$ brew install google-cloud-sdk --cask

Test:

$ gcloud --version

Google Cloud SDK 365.0.1
bq 2.0.71
core 2021.11.19
gsutil 5.5

Once installed, run gcloud init to configure the SDK so that it has access to your GCP credentials. You'll also need to either pick an existing GCP project or create a new project to work with.

Set the project:

$ gcloud config set project <PROJECT_ID>

Finally, install kubectl:

$ gcloud components install kubectl

Kubernetes Cluster

Next, let's create a cluster on Kubernetes Engine:

$ gcloud container clusters create node-kubernetes \
    --num-nodes=3 --zone us-central1-a --machine-type g1-small

This will create a three-node cluster called node-kubernetes in the us-central1-a region with g1-small machines. It will take a few minutes to spin up:

$ kubectl get nodes

NAME                                             STATUS   ROLES    AGE   VERSION
gke-node-kubernetes-default-pool-139e0343-0hbt   Ready    <none>   75s   v1.21.5-gke.1302
gke-node-kubernetes-default-pool-139e0343-p4s3   Ready    <none>   75s   v1.21.5-gke.1302
gke-node-kubernetes-default-pool-139e0343-rxnc   Ready    <none>   75s   v1.21.5-gke.1302

Connect the kubectl client to the cluster:

$ gcloud container clusters get-credentials node-kubernetes --zone us-central1-a

Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-kubernetes.

For help with Kubernetes Engine, review the official docs.

Docker Registry

Using the gcr.io/<PROJECT_ID>/<IMAGE_NAME>:<TAG> Docker tag format, build and then push the local Docker image for the Node API to the Container Registry:

$ gcloud auth configure-docker
$ docker build -t gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 .
$ docker push gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1

Be sure to replace <PROJECT_ID> with the ID of your project.

Node Setup

With that, we can now run the image on a pod by creating a deployment:

kubernetes/node-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  labels:
    name: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
      - name: node
        image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1
        env:
        - name: NODE_ENV
          value: "development"
        - name: PORT
          value: "3000"
      restartPolicy: Always

Again, be sure to replace <PROJECT_ID> with the ID of your project.
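As an aside, the placeholder swap is easy to script if you'd rather not edit the manifest by hand. Below is a small Node sketch (a hypothetical helper, not part of the repo) that performs the substitution on a manifest string:

```javascript
// Replace every <PROJECT_ID> placeholder in a Kubernetes manifest string.
function substituteProjectId(manifest, projectId) {
  return manifest.split("<PROJECT_ID>").join(projectId);
}

// Demo on the image line from node-deployment.yaml:
const line = "image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1";
console.log(substituteProjectId(line, "my-gcp-project"));
// → image: gcr.io/my-gcp-project/node-kubernetes:v0.0.1
```

In practice you'd read the YAML in with fs.readFileSync, run it through the function, and write it back out (or pipe the result straight to kubectl apply -f -).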

What's happening here?

  1. metadata
    • The name field defines the deployment name - node
    • labels define the labels for the deployment - name: node
  2. spec
    • replicas define the number of pods to run - 1
    • selector specifies a label for the pods (must match .spec.template.metadata.labels)
    • template
      • metadata
        • labels indicate which labels should be assigned to the pod - app: node
      • spec
        • containers define the containers associated with each pod
        • restartPolicy defines the restart policy - Always

So, this will spin up a single pod named node via the gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 image that we just pushed up.

Create:

$ kubectl create -f ./kubernetes/node-deployment.yaml

Verify:

$ kubectl get deployments

NAME   READY   UP-TO-DATE   AVAILABLE   AGE
node   1/1     1            1           32s

$ kubectl get pods

NAME                    READY   STATUS    RESTARTS   AGE
node-59646c8856-72blj   1/1     Running   0          18s

You can view the container logs via kubectl logs <POD_NAME>:

$ kubectl logs node-59646c8856-72blj

> start
> nodemon src/server.js

[nodemon] 2.0.15
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node src/server.js`
Listening on port: 3000

You can also view these resources from the Google Cloud console:

To access your API externally, let's create a load balancer via a service.

kubernetes/node-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    service: node
spec:
  selector:
    app: node
  type: LoadBalancer
  ports:
    - port: 3000

This will create a service named node, which will find any pods with the app: node label and expose port 3000 to the outside world.

Create:

$ kubectl create -f ./kubernetes/node-service.yaml

This will create a new load balancer on Google Cloud:

Grab the external IP:

$ kubectl get service node

NAME   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
node   LoadBalancer   10.40.10.162   35.222.45.193   3000:31315/TCP   78s

Test it out:

  1. http://EXTERNAL_IP:3000
  2. http://EXTERNAL_IP:3000/todos

You should see "Something went wrong." when you hit the second endpoint since the database is not set up yet.

Secrets

Secrets are used to manage sensitive info such as passwords, API tokens, and SSH keys. We'll utilize a Secret to store our Postgres database credentials.

kubernetes/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
data:
  user: c2FtcGxl
  password: cGxlYXNlY2hhbmdlbWU=

The user and password fields are base64 encoded strings:

$ echo -n "pleasechangeme" | base64
cGxlYXNlY2hhbmdlbWU=

$ echo -n "sample" | base64
c2FtcGxl
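The same values can be produced from Node itself, which also makes it clear that Secret data is just base64 encoding, not encryption:

```javascript
// Base64-encode the Secret values, mirroring `echo -n "..." | base64`.
const user = Buffer.from("sample").toString("base64");
const password = Buffer.from("pleasechangeme").toString("base64");

console.log(user);     // → c2FtcGxl
console.log(password); // → cGxlYXNlY2hhbmdlbWU=
```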

Create the secret:

$ kubectl apply -f ./kubernetes/secret.yaml

Verify:

$ kubectl describe secret postgres-credentials

Name:         postgres-credentials
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  14 bytes
user:      6 bytes

Volume

Since containers are ephemeral, we need to configure a volume, via a PersistentVolume and a PersistentVolumeClaim, to store the Postgres data outside of the pod. Without a volume, you will lose your data when the pod goes down.

Create a Persistent Disk:

$ gcloud compute disks create pg-data-disk --size 50GB --zone us-central1-a

kubernetes/volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    name: postgres-pv
spec:
  capacity:
    storage: 50Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pg-data-disk
    fsType: ext4

This configuration will create a 50 GB PersistentVolume with an access mode of ReadWriteOnce, which means that the volume can be mounted as read-write by a single node.

Create the volume:

$ kubectl apply -f ./kubernetes/volume.yaml

Check the status:

$ kubectl get pv

NAME         CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM  STORAGECLASS  REASON  AGE
postgres-pv  50Gi      RWO           Retain          Available         standard              6s

kubernetes/volume-claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: postgres-pv

This will create a claim on the PersistentVolume (which we just created) that the Postgres pod will be able to use to attach a volume.

Create:

$ kubectl apply -f ./kubernetes/volume-claim.yaml

View:

$ kubectl get pvc

NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Bound    postgres-pv   50Gi       RWO            standard       6s

Postgres Setup

With the database credentials set up along with a volume, we can now configure the Postgres database itself.

kubernetes/postgres-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      service: postgres
  template:
    metadata:
      labels:
        service: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14-alpine
        volumeMounts:
        - name: postgres-volume-mount
          mountPath: /var/lib/postgresql/data
          subPath: postgres
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
      restartPolicy: Always
      volumes:
      - name: postgres-volume-mount
        persistentVolumeClaim:
          claimName: postgres-pvc

Here, along with spinning up a new pod via the postgres:14-alpine image, this config mounts the PersistentVolumeClaim from the volumes section into the "/var/lib/postgresql/data" directory defined in the volumeMounts section.

Review this Stack Overflow question for more info on why we included a subPath with the volume mount.

Create:

$ kubectl create -f ./kubernetes/postgres-deployment.yaml

Verify:

$ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
node-59646c8856-72blj       1/1     Running   0          20m
postgres-64d485d86b-vtrlh   1/1     Running   0          25s

Create the todos database:

$ kubectl exec <POD_NAME> --stdin --tty -- createdb -U sample todos

kubernetes/postgres-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    service: postgres
spec:
  selector:
    service: postgres
  type: ClusterIP
  ports:
  - port: 5432

This will create a ClusterIP service so that other pods can connect to it. It won't be available externally, outside of the cluster.
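Within the cluster, the service's name doubles as its DNS hostname, so the Node pods can reach the database at postgres:5432. A connection config along these lines is what the app needs (a sketch only; the actual knexfile.js in the repo may be structured differently):

```javascript
// Build a Postgres connection config from the env vars the node
// deployment injects (POSTGRES_USER / POSTGRES_PASSWORD come from the Secret).
function pgConnection(env) {
  return {
    host: "postgres", // the ClusterIP service name resolves inside the cluster
    port: 5432,
    user: env.POSTGRES_USER,
    password: env.POSTGRES_PASSWORD,
    database: "todos",
  };
}

console.log(pgConnection(process.env));
```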

Create the service:

$ kubectl create -f ./kubernetes/postgres-service.yaml

Update the Node Deployment

Next, add the database credentials to the node deployment:

kubernetes/node-deployment-updated.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  labels:
    name: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
      - name: node
        image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 # update
        env:
        - name: NODE_ENV
          value: "development"
        - name: PORT
          value: "3000"
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
      restartPolicy: Always

Create:

$ kubectl delete -f ./kubernetes/node-deployment.yaml
$ kubectl create -f ./kubernetes/node-deployment-updated.yaml

Verify:

$ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
node-64c45d449b-9m7pf       1/1     Running   0          9s
postgres-64d485d86b-vtrlh   1/1     Running   0          4m7s

Using the node pod, update the database:

$ kubectl exec <POD_NAME> -- knex migrate:latest
$ kubectl exec <POD_NAME> -- knex seed:run

Test it out again:

  1. http://EXTERNAL_IP:3000
  2. http://EXTERNAL_IP:3000/todos

You should now see the todos:

[
  {
    "id": 1,
    "title": "Do something",
    "completed": false
  },
  {
    "id": 2,
    "title": "Do something else",
    "completed": false
  }
]

Conclusion

In this post we looked at how to run a Node-based microservice on Kubernetes with GKE. You should now have a basic understanding of how Kubernetes works and be able to deploy a cluster with an app running on it to Google Cloud.

Be sure to bring down the resources (cluster, persistent disk, image on the Container Registry) when done to avoid incurring unnecessary charges:

$ kubectl delete -f ./kubernetes/node-service.yaml
$ kubectl delete -f ./kubernetes/node-deployment-updated.yaml

$ kubectl delete -f ./kubernetes/secret.yaml

$ kubectl delete -f ./kubernetes/volume-claim.yaml
$ kubectl delete -f ./kubernetes/volume.yaml

$ kubectl delete -f ./kubernetes/postgres-deployment.yaml
$ kubectl delete -f ./kubernetes/postgres-service.yaml

$ gcloud container clusters delete node-kubernetes --zone us-central1-a
$ gcloud compute disks delete pg-data-disk --zone us-central1-a
$ gcloud container images delete gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1

You can find the code in the node-kubernetes repo on GitHub.

Source: https://testdriven.io

#nodejs #kubernetes #googlecloud
