
Azkaban on Kubernetes

Azkaban is a popular workflow engine which I have often used to run jobs, especially in a data lake. There are similar workflow schedulers like Oozie and Airflow that provide more functionality than Azkaban does, but I prefer Azkaban because it has a more attractive UI.

Even though Azkaban provides several job types such as hadoop, java, command, pig, and hive, I have used the command job type for most cases. With the command job type, you simply type shell commands to run jobs. It is simple, and it works for most cases. In this article, only the command job type will be used.
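For reference, a command job is just a node whose command config holds the shell line to run. Here is a minimal sketch in the Flow 2.0 YAML format used later in this article (the node name and command are illustrative):

nodes:
  - name: hello
    type: command
    config:
      command: echo "Hello, Azkaban"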

Azkaban consists of the Azkaban Web Server, which acts as the coordinator, the Azkaban Executors, which act as workers, and MySQL, which stores all the job metadata. Here, I am going to show you how to run the Azkaban web server, executors, and MySQL on Kubernetes. All the code used here can be found in my git repo: https://github.com/mykidong/azkaban-on-kubernetes

Build Azkaban from Source (Optional)

Because I have not found a prebuilt Azkaban 3.x distribution, I am going to build it from the Azkaban source.

cd ~;

git clone https://github.com/azkaban/azkaban.git
cd azkaban;

git checkout tags/3.90.0;

## Build and install distributions
./gradlew installDist

## package azkaban as tar files.
### db.
cd ~/azkaban/azkaban-db/build/install;
tar -zcf azkaban-db-3.90.0.tar.gz azkaban-db;

### executor.
cd ~/azkaban/azkaban-exec-server/build/install;
tar -zcf azkaban-exec-server-3.90.0.tar.gz azkaban-exec-server;

### web.
cd ~/azkaban/azkaban-web-server/build/install;
tar -zcf azkaban-web-server-3.90.0.tar.gz azkaban-web-server;

We need three packages: azkaban-db, which contains the SQL scripts to create the Azkaban database and tables in MySQL, azkaban-exec-server, and azkaban-web-server. After packaging Azkaban as tar.gz files, I uploaded these packages to Google Drive.

Create Azkaban Docker Images

Here, I am going to build Azkaban Docker images for the db, executor, and web server components.

First, let’s look at the Dockerfile for azkaban-db, which is used to create the Azkaban database and tables in MySQL.

FROM java:8-jre

ENV APP_HOME /opt/azkaban-db

RUN echo "Asia/Seoul" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata

RUN useradd -ms /bin/bash -d ${APP_HOME} db

# Download and extract the azkaban-db package from Google Drive; the cookie
# step handles Google Drive's large-file download confirmation page.
RUN set -ex \
    && AZKABAN_DB_NAME=azkaban-db-3.90.0 \
    && fileId=1_oYPbDg3MKAu4RjL0P-_ZIl5ixlPgq04 \
    && fileName=${AZKABAN_DB_NAME}.tar.gz \
    && curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=${fileId}" > /dev/null \
    && code="$(awk '/_warning_/ {print $NF}' /tmp/cookie)" \
    && curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${code}&id=${fileId}" -o ${fileName} \
    && tar -zxf ${fileName} -C ${APP_HOME} \
    && cp -R ${APP_HOME}/azkaban-db/* ${APP_HOME}/ \
    && rm -rf ${APP_HOME}/azkaban-db \
    && rm -rf ${fileName}

RUN chown db: -R ${APP_HOME}

RUN echo "deb [check-valid-until=no] http://cdn-fastly.deb.debian.org/debian jessie main" > /etc/apt/sources.list.d/jessie.list
RUN echo "deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list
RUN sed -i '/deb http:\/\/deb.debian.org\/debian jessie-updates main/d' /etc/apt/sources.list
RUN apt-get -o Acquire::Check-Valid-Until=false update

# mysql-client is needed to run the schema scripts against the MySQL server.
RUN apt-get -y -f install mysql-client

USER db

As seen above, the azkaban-db package is downloaded from Google Drive and extracted with tar. Note that mysql-client is installed near the end of the file; it is used to run the schema scripts against MySQL.
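The init-schema Kubernetes job shown later uses this image to load the schema. Conceptually it runs something like the following (a hedged sketch; the host name matches the mysql-service created later, but the SQL file name and the credentials variable are assumptions, not the repo's exact script):

## create the azkaban database, then load the tables from the packaged SQL script.
mysql -h mysql-service -u root -p"${MYSQL_ROOT_PASSWORD}" -e "CREATE DATABASE IF NOT EXISTS azkaban;"
mysql -h mysql-service -u root -p"${MYSQL_ROOT_PASSWORD}" azkaban < /opt/azkaban-db/create-all-sql-3.90.0.sql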

Next, let’s look at the Azkaban executor Dockerfile.

FROM java:8-jre

ENV APP_HOME /opt/azkaban-executor

RUN echo "Asia/Seoul" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata

RUN useradd -ms /bin/bash -d ${APP_HOME} executor

RUN set -ex \
    && AZKABAN_EXEC_NAME=azkaban-exec-server-3.90.0 \
    && fileId=15jllIx3eAmAb9d-GZ_KISWxnZuAJiP5r \
    && fileName=${AZKABAN_EXEC_NAME}.tar.gz \
    && curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=${fileId}" > /dev/null \
    && code="$(awk '/_warning_/ {print $NF}' /tmp/cookie)" \
    && curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${code}&id=${fileId}" -o ${fileName} \
    && tar -zxf ${fileName} -C ${APP_HOME} \
    && cp -R ${APP_HOME}/azkaban-exec-server/* ${APP_HOME}/ \
    && rm -rf ${APP_HOME}/azkaban-exec-server \
    && rm -rf ${APP_HOME}/conf/azkaban.properties \
    && rm -rf ${fileName}

COPY activate-executor.sh ${APP_HOME}/bin/activate-executor.sh
COPY start-exec.sh ${APP_HOME}/bin/start-exec.sh
COPY start-and-activate-exec.sh ${APP_HOME}/bin/start-and-activate-exec.sh

RUN chmod a+x -R ${APP_HOME}/bin
RUN chown executor: -R ${APP_HOME}

RUN echo "deb [check-valid-until=no] http://cdn-fastly.deb.debian.org/debian jessie main" > /etc/apt/sources.list.d/jessie.list
RUN echo "deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list
RUN sed -i '/deb http:\/\/deb.debian.org\/debian jessie-updates main/d' /etc/apt/sources.list
RUN apt-get -o Acquire::Check-Valid-Until=false update
RUN apt-get install -y openssh-client

USER executor
# Generate an ssh key pair at build time; every executor container started
# from this image will share the same key pair.
RUN ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
WORKDIR ${APP_HOME}

Similar to the azkaban-db Dockerfile, the azkaban-exec-server package is downloaded and extracted. Some shell scripts are copied in, and at the end, an ssh key pair is generated, which is later used to connect to remote machines via ssh.
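With multiple executors, each executor has to register itself as active against the web server. I have not reproduced the repo's exact scripts here, but start-and-activate-exec.sh conceptually does something like this sketch (the activation call is Azkaban's standard executor endpoint; the wait loop is an assumption):

#!/bin/bash
# start the executor server in the background.
./bin/start-exec.sh

# wait until the executor has written its port file, then mark it active.
while [ ! -f ./executor.port ]; do sleep 1; done
curl -G "http://localhost:$(cat ./executor.port)/executor" --data-urlencode "action=activate"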

And the Azkaban web server:

FROM java:8-jre

ENV APP_HOME /opt/azkaban-web

RUN echo "Asia/Seoul" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata

RUN useradd -ms /bin/bash -d ${APP_HOME} web

RUN set -ex \
    && AZKABAN_WEB_NAME=azkaban-web-server-3.90.0 \
    && fileId=1GzVG5_aKlG8Mb38M3a10jF8X-VYpSxJx \
    && fileName=${AZKABAN_WEB_NAME}.tar.gz \
    && curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=${fileId}" > /dev/null \
    && code="$(awk '/_warning_/ {print $NF}' /tmp/cookie)" \
    && curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${code}&id=${fileId}" -o ${fileName} \
    && tar -zxf ${fileName} -C ${APP_HOME} \
    && cp -R ${APP_HOME}/azkaban-web-server/* ${APP_HOME}/ \
    && rm -rf ${APP_HOME}/azkaban-web-server \
    && rm -rf ${APP_HOME}/conf/azkaban.properties \
    && rm -rf ${fileName}

COPY start-web.sh ${APP_HOME}/bin/start-web.sh

RUN chmod a+x -R ${APP_HOME}/bin
RUN chown web: -R ${APP_HOME}

EXPOSE 8081
USER web
WORKDIR ${APP_HOME}

The Azkaban web server will be exposed on port 8081.
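For reference, the azkaban-web service you will see later is a LoadBalancer in front of this port. A minimal sketch of such a manifest (names and labels are assumptions; the repo's azkaban-web.yaml is authoritative):

apiVersion: v1
kind: Service
metadata:
  name: azkaban-web
  namespace: azkaban
spec:
  type: LoadBalancer
  selector:
    app: azkaban-web
  ports:
  - port: 8081
    targetPort: 8081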

You can build and push all the Azkaban images like this:

## remove azkaban docker images.
docker rmi -f $(docker images -a | grep azkaban | awk '{print $3}')

## azkaban db docker image.
cd <src>/docker/db;
docker build . -t yourrepo/azkaban-db:3.90.0;

### push.
docker push yourrepo/azkaban-db:3.90.0;

## azkaban executor image.
cd <src>/docker/executor;
docker build . -t yourrepo/azkaban-exec-server:3.90.0;

### push.
docker push yourrepo/azkaban-exec-server:3.90.0;

## azkaban web image.
cd <src>/docker/web;
docker build . -t yourrepo/azkaban-web-server:3.90.0;

### push.
docker push yourrepo/azkaban-web-server:3.90.0;

Now you have the Azkaban Docker images in your repository.

Run Azkaban on Kubernetes

There are several Kubernetes YAML files in the source repo.
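The manifests and the commands below all target an azkaban namespace. If your cluster does not have it yet, create it first (an assumption based on the -n azkaban flags used throughout; the repo may already include a namespace manifest):

kubectl create namespace azkaban;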

In mysql.yaml, you have to change the storage class to one that is available in your cluster:

storageClassName: direct.csi.min.io
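For context, this field sits in the volumeClaimTemplates of the MySQL StatefulSet. A hedged sketch of the surrounding YAML (the claim name and storage size are assumptions; check mysql.yaml in the repo):

volumeClaimTemplates:
- metadata:
    name: mysql-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: direct.csi.min.io
    resources:
      requests:
        storage: 10Gi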

And the Docker repo name for the Azkaban images found in all the YAML files should be changed to your own repo name; for instance, in azkaban-executor.yaml:

image: yourrepo/azkaban-exec-server:3.90.0
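To find every image line you need to edit, you can simply grep the manifests:

cd <src>;
grep -n "image:" *.yaml;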

Now you are ready to run Azkaban on Kubernetes. Let's type the following:

### ---- init.
## create mysql server.
kubectl apply -f mysql.yaml;

## wait for mysql pod being ready.
while [[ $(kubectl get pods -n azkaban -l app=mysql -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for mysql pod being ready" && sleep 1; done

## configmaps
kubectl create configmap azkaban-cfg --dry-run --from-file=azkaban-executor.properties --from-file=azkaban-web.properties -o yaml -n azkaban | kubectl apply -f -

## create db and tables.
kubectl apply -f init-schema.yaml;

## wait for job being completed.
while [[ $(kubectl get pods -n azkaban -l job-name=azakban-initschema -o jsonpath={..status.phase}) != *"Succeeded"* ]]; do echo "waiting for finishing init schema job" && sleep 2; done

### ---- azkaban.
## create azkaban executor.
kubectl apply -f azkaban-executor.yaml;

## wait for azkaban executor being run
while [[ $(kubectl get pods -n azkaban -l app=azkaban-executor -o jsonpath={..status.phase}) != *"Running"* ]]; do echo "waiting for executor being run" && sleep 2; done

## create azkaban web.
kubectl apply -f azkaban-web.yaml;

Let’s look at the pods in the azkaban namespace:

kubectl get po -n azkaban;
NAME                           READY   STATUS       RESTARTS   AGE
azakban-initschema-hr4bn       0/1     Init:Error   0          4h3m
azakban-initschema-kg75t       0/1     Completed    0          4h3m
azakban-initschema-ppngd       0/1     Init:Error   0          4h3m
azkaban-executor-0             1/1     Running      0          3h19m
azkaban-executor-1             1/1     Running      0          3h18m
azkaban-executor-2             1/1     Running      0          3h18m
azkaban-web-664967cb99-xhmrf   1/1     Running      0          3h9m
mysql-statefulset-0            1/1     Running      0          4h3m

As seen here, a MySQL server, three executor servers, and one web server are running on Kubernetes. (The azakban-initschema pods in Init:Error status are failed attempts of the schema job that Kubernetes retried; one attempt completed, which is all the job needs.)

Access UI

To access the UI, let's look at the services in the azkaban namespace.

kubectl get svc -n azkaban;
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
azkaban-executor   ClusterIP      None            <none>          <none>           3h20m
azkaban-web        LoadBalancer   10.233.49.152   52.231.165.73   8081:31538/TCP   3h9m
mysql-service      ClusterIP      10.233.53.51    <none>          3306/TCP         4h4m

With the external IP of the azkaban-web service, you can access the UI in a browser. Unless you have changed azkaban-users.xml, the default login is azkaban/azkaban:

http://52.231.165.73:8081/

Azkaban Smoke Test

You can test Azkaban by running the example projects.

## install azkaban cli.
sudo pip install --upgrade "urllib3==1.22" azkaban;

## download sample projects and create project with azkaban cli.
wget https://github.com/azkaban/azkaban/raw/master/az-examples/flow20-projects/basicFlow20Project.zip;
wget https://github.com/azkaban/azkaban/raw/master/az-examples/flow20-projects/embeddedFlow20Project.zip;

azkaban upload -c -p basicFlow20Project -u azkaban@http://52.231.165.73:8081 ./basicFlow20Project.zip;
azkaban upload -c -p embeddedFlow20Project -u azkaban@http://52.231.165.73:8081 ./embeddedFlow20Project.zip;
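After uploading, you can trigger a run from the UI, or with the CLI's run command (a sketch; I am assuming the flow inside basicFlow20Project is named basicFlow — check the project page in the UI for the exact flow name):

azkaban run -p basicFlow20Project -u azkaban@http://52.231.165.73:8081 basicFlow;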

Run Shell Scripts on a Remote Machine from Azkaban Executors

In most of my experience, I have used shell scripts located on a remote machine, invoked remotely via ssh from the Azkaban executors.

Here is another example, in which an Azkaban executor calls a remote shell script to run a Spark job. Because spark and kubectl are installed on the remote machine, it is ready to submit Spark jobs to Kubernetes. To make this work, ssh access to the remote machine from the Azkaban executors must be enabled.

Let’s copy the public key of the Azkaban executors to the remote machine. Because the key pair is generated when the Docker image is built, all executor pods share the same key pair, so a single authorized_keys entry covers all of them.

## list pods.
kubectl get po -n azkaban
NAME                           READY   STATUS       RESTARTS   AGE
azakban-initschema-9bgbh       0/1     Completed    0          16h
azakban-initschema-dtgg7       0/1     Init:Error   0          16h
azakban-initschema-fw7gt       0/1     Init:Error   0          16h
azkaban-executor-0             1/1     Running      0          16h
azkaban-executor-1             1/1     Running      0          16h
azkaban-executor-2             1/1     Running      0          16h
azkaban-web-664967cb99-z8dzn   1/1     Running      0          16h
mysql-statefulset-0            1/1     Running      0          16h

## access executor pod to get public key.
kubectl exec -it azkaban-executor-0 -n azkaban -- cat .ssh/id_rsa.pub;
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0vuKKMz4dD0aBrJKtlVU8fDmYgqkwpkDXTzoUTqm57CqEmzHa5EDS90xGch1rAN4HucOR6dzUGvb2VlATBGIi5VZ6w0OuRR+r50KHqiC0TLdEXzX1/TRO/uHftI/xdUMFDHOWTuZnsYS5V7DCrw1yJnPzHTHktgXDyycM/iEspdfslzgZuIV4zT3HNVAYIplQPyy8TKRy7gojm7OYw5W2S14hqiY5/HL/CZ9CQpKV37qJvd3E4u/pOZCHH7r1Tm5E3bnUX9U8z7Nj0Fb+TZSkxiEbwoKB/Ib07Urc0il2f4mug2bKazZRsU+/bb1+VjoMW0ek+9Rvk1JTkaXIu8k/ executor@33842653d6db

## copy this executor public key and paste it into the authorized_keys file on the remote machine.
### in remote machine.
vi ~/.ssh/authorized_keys;
... paste public key.

## chmod 600.
chmod 600 ~/.ssh/authorized_keys;

Then, log in to the remote machine via ssh from each individual Azkaban executor, so that the remote host key is accepted and saved to known_hosts:

kubectl exec -it azkaban-executor-0 -n azkaban -- sh;
ssh pcp@x.x.x.x;
...
exit;
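To accept the host key on all three executors at once, a loop like this sketch can help (StrictHostKeyChecking=no auto-accepts the host key, which is acceptable for this kind of setup; replace pcp@x.x.x.x with your remote account):

for i in 0 1 2; do
  kubectl exec azkaban-executor-$i -n azkaban -- ssh -o StrictHostKeyChecking=no pcp@x.x.x.x exit;
done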

Let’s create a shell script on the remote machine to run an example Spark job:

cat > run-spark-example.sh <<'EOF'
#!/bin/bash

############### spark job: create delta table

## submit spark job onto kubernetes.
export MASTER=k8s://https://xxxx:6443;
export NAMESPACE=ai-developer;
export ENDPOINT=http://$(kubectl get svc s3g-service -n ai-developer -o jsonpath={.status.loadBalancer.ingress[0].ip}):9898;
export HIVE_METASTORE=metastore.ai-developer:9083;

spark-submit \
--master ${MASTER} \
--deploy-mode cluster \
--name spark-delta-example \
--class io.spongebob.spark.examples.DeltaLakeExample \
--packages com.amazonaws:aws-java-sdk-s3:1.11.375,org.apache.hadoop:hadoop-aws:3.2.0 \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.path=/checkpoint \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.subPath=checkpoint \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.mount.readOnly=false \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=spark-driver-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.path=/checkpoint \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.subPath=checkpoint \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.mount.readOnly=false \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=spark-exec-pvc \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.path=/localdir \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.readOnly=false \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.options.claimName=spark-driver-localdir-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.path=/localdir \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.mount.readOnly=false \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-localdirpvc.options.claimName=spark-exec-localdir-pvc \
--conf spark.kubernetes.file.upload.path=s3a://mykidong/spark-examples \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.namespace=$NAMESPACE \
--conf spark.kubernetes.container.image=xxx/spark:v3.0.0 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.hadoop.hive.metastore.client.connect.retry.delay=5 \
--conf spark.hadoop.hive.metastore.client.socket.timeout=1800 \
--conf spark.hadoop.hive.metastore.uris=thrift://$HIVE_METASTORE \
--conf spark.hadoop.hive.server2.enable.doAs=false \
--conf spark.hadoop.hive.server2.thrift.http.port=10002 \
--conf spark.hadoop.hive.server2.thrift.port=10016 \
--conf spark.hadoop.hive.server2.transport.mode=binary \
--conf spark.hadoop.metastore.catalog.default=spark \
--conf spark.hadoop.hive.execution.engine=spark \
--conf spark.hadoop.hive.input.format=io.delta.hive.HiveInputFormat \
--conf spark.hadoop.hive.tez.input.format=io.delta.hive.HiveInputFormat \
--conf spark.sql.warehouse.dir=s3a://mykidong/apps/spark/warehouse \
--conf spark.hadoop.fs.defaultFS=s3a://mykidong \
--conf spark.hadoop.fs.s3a.access.key=any-access-key \
--conf spark.hadoop.fs.s3a.secret.key=any-secret-key \
--conf spark.hadoop.fs.s3a.connection.ssl.enabled=true \
--conf spark.hadoop.fs.s3a.endpoint=$ENDPOINT \
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
--conf spark.hadoop.fs.s3a.fast.upload=true \
--conf spark.hadoop.fs.s3a.path.style.access=true \
--conf spark.driver.extraJavaOptions="-Divy.cache.dir=/tmp -Divy.home=/tmp" \
--conf spark.executor.instances=3 \
--conf spark.executor.memory=2G \
--conf spark.executor.cores=1 \
--conf spark.driver.memory=1G \
file:///home/pcp/xxx/examples/spark/target/spark-example-1.0.0-SNAPSHOT-spark-job.jar \
--master ${MASTER};
EOF

## make it executable.
chmod a+x run-spark-example.sh;

Let’s create the Azkaban flow file, spark.flow:

---
config:
  failure.emails: mykidong@gmail.com

nodes:
- name: Start
  type: noop

- name: RunSparkJob
  type: command
  config:
    command: ssh pcp@x.x.x.x "/home/pcp/run-spark-example.sh"
  dependsOn:
  - Start

- name: End
  type: noop
  dependsOn:
  - RunSparkJob

Take a look at the command in the flow above: run-spark-example.sh, located on the remote machine, is called via ssh.

Let’s create the flow metadata file, named flow20.project:

azkaban-flow-version: 2.0
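Both files go into an azkaban/ directory, matching the zip command below (layout assumed from that command):

azkaban/
├── flow20.project
└── spark.flow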

Finally, let’s zip the project and upload it to the Azkaban web server.

## zip the azkaban project.
zip spark-job-example.zip azkaban/*;

## upload the azkaban project.
azkaban upload -c -p spark-job-example -u azkaban@http://52.231.165.73:8081 ./spark-job-example.zip;

