Panmure Anho

Deploying Scalable, Production-Ready Airflow in 10 Easy Steps Using Kubernetes

Airflow, Airbnb’s brainchild, is an open-source data orchestration tool that lets you programmatically schedule jobs to extract, transform, or load (ETL) data. Since Airflow’s workflows are written in Python as DAGs (directed acyclic graphs), they allow for complex computation, scalability, and maintainability in a way that cron jobs and other scheduling tools don’t. As a data scientist and engineer, I depend on data, and I use Airflow to ensure that everything I need is processed, cleaned, and available so I can easily run my models.
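To make “written in Python as DAGs” concrete, here is a minimal example DAG. It is only an illustration of the idea, not code from the repo used later in this guide; the dag_id, schedule, and task are mine.

from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def say_hello():
    # Stand-in for a real extract/transform/load step
    print("Hello from Airflow!")

default_args = {"owner": "airflow", "start_date": datetime(2020, 1, 1)}

# One DAG with a single task, scheduled once a day; real pipelines chain tasks with >>
with DAG("hello_world", default_args=default_args, schedule_interval="@daily", catchup=False) as dag:
    hello = PythonOperator(task_id="say_hello", python_callable=say_hello)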

When I started this journey a year ago, I scoured the web looking for resources, but there weren’t many. I wanted a clear path to deploying a scalable version of Airflow, yet most of the articles I found were incomplete or described small setups that couldn’t handle the amount of data I wanted to process. My goal was to run upwards of hundreds of thousands of jobs every day, efficiently and reliably, without putting a dent in my wallet. At that point the KubernetesExecutor hadn’t been released to the public yet, but now it has.

I’m writing this article to hopefully spare you the many sleepless nights I spent pondering the existence of Airflow (as well as my own) and cultivating an unhealthy obsession with it. This is how I created a scalable, production-ready Airflow deployment on the latest version (1.10.10) in 10 easy steps.

Prerequisites:

  • Docker. You can use their awesome instructions here, or you can refer to the PDF I made if you have an Ubuntu system.
  • A Git repository called “dags” where you can store your workflows; this allows for collaboration.
  • A container repository to store your completed image.
  • A Kubernetes cluster (or minikube) if you want to deploy Airflow in production.

STEP 1: Docker Image

There are a lot of options, including a Helm chart (which uses the puckel image), but I find a Dockerfile to be the most intuitive, direct, and customizable approach. There are a few great prebuilt images out there (puckel’s is fantastic and works great if you’re starting out), but if you’re planning on using the KubernetesExecutor, I recommend that you create your own image from a Dockerfile. You can also get the latest version of Airflow (1.10.10) this way.

In your Linux environment, type:

cd ~
git clone https://github.com/spanneerselvam/airflow-image.git
cd airflow-image
ls

Your config directory has all the files you will copy over from your machine to Airflow. Open the Dockerfile.

You should see the following code here (this is just a snippet):

FROM python:3.7
RUN apt-get update && apt-get install -y supervisor
USER root
RUN apt-get update && apt-get install --yes \
sudo \
git \
vim \
cron \
gcc
RUN pip install apache-airflow==1.10.10
RUN cd /usr/local && mkdir airflow && chmod +x airflow && cd airflow
RUN useradd -ms /bin/bash airflow
RUN usermod -a -G sudo airflow
RUN chmod 666 -R /usr/local/airflow
ARG AIRFLOW_USER_HOME=/usr/local/airflow
ENV AIRFLOW_HOME=${AIRFLOW_USER_HOME}
COPY config/airflow.cfg ${AIRFLOW_USER_HOME}/airflow.cfg
EXPOSE 8080
#Python Package Dependencies for Airflow
RUN pip install pyodbc flask-bcrypt pymssql sqlalchemy psycopg2-binary pymysql
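The build itself isn’t shown in this snippet, but once the Dockerfile is ready you would build the image and push it to the container repository from the prerequisites. The registry name and tag below are placeholders:

cd ~/airflow-image
docker build -t <your-registry>/airflow:1.10.10 .
docker push <your-registry>/airflow:1.10.10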

Here’s a fun fact for you (your definition of fun is probably very different from mine): the Docker logo is this really cute whale (literally the cutest logo I’ve ever seen; don’t believe me? Go on, go Google it!) because it overwhelmingly won a logo contest and even beat giraffes! Docker even adopted a whale named Molly Dock. She’s swimming away in the vast Pacific Ocean. Why do I know this? Well, when you’re awake really late deploying Airflow, you end up Googling some strange things…

STEP 2: DAGs

Setting up DAGs in Airflow with the KubernetesExecutor is tricky, and it was the last piece of the puzzle I put together. There are a few options, such as embedding the DAGs in your Docker image, but the issue with that approach is that you have to rebuild your image every time you change your DAG code.

I find that the best solution for collaboration is to use GitHub (or Bitbucket) to store your DAGs. Create your own repo (or clone mine; more on that later) and then you and your team can push all your work into the repository. Once you’ve done this, you need to mount the DAGs into the pods that run Airflow by using a PV and PVC (Persistent Volume and Persistent Volume Claim) backed by an Azure file share (or the equivalent on EKS).

PV and PVC are storage resources offered by Kubernetes. Think of them as shared storage that can be attached to every single pod you deploy. To set this up, you need to create an Azure file share (instructions are here). Make sure that you mount the file share to your machine (instructions for mounting can be found here) so that code pulled from your git repo shows up in the Azure file share. I used a simple cron job that runs a shell script called git_sync.sh every minute to pull code from GitHub.

crontab -e:

* * * * * /home/git_sync.sh

#Mandatory Blank Line

git_sync.sh (Note: my remote name is “DAGs” not origin):

#!/bin/bash
# Pull the latest DAG code from the remote named "DAGs" into the mounted file share.
# Replace the path below with wherever you mounted the Azure file share on this machine.
cd /path/to/mounted/dags
git pull DAGs master

Once you’ve done this (I’d recommend at least 5 Gi of storage for your Azure File share), you need to make the Azure File share available in Kubernetes. Follow these steps (they apply to EKS as well):

  1. Create a Kubernetes secret for your Azure File share (a minimal example follows this list). Read this guide to securely create your secret here.
  2. Deploy a PVC (see airflow-pvc.yaml). You only have to do this once.
  3. Deploy a PV (see airflow-pv.yaml and airflow-pv-k8s.yaml in the repo). You have to do this for each namespace (in my case, “default” and “k8s-tasks”).
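For step 1, the standard way to hand Kubernetes your storage account credentials is a generic secret. This is a sketch rather than the guide’s exact commands; the secret name azure-secret is my choice, and the account name and key are assumed to be in environment variables:

kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=$AZURE_STORAGE_ACCOUNT \
  --from-literal=azurestorageaccountkey=$AZURE_STORAGE_KEY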

Steps #2–3: To deploy the PVC and PVs, run the following commands:

kubectl create -f airflow-pvc.yaml
kubectl get pvc
kubectl create -f airflow-pv.yaml
kubectl create -f airflow-pv-k8s.yaml
kubectl get pv

If the statuses of the PVs and PVC are “Bound”, you’re good to go!
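The repo’s manifests aren’t reproduced in this article, but a static PV/PVC pair backed by an Azure File share looks roughly like the sketch below. The PV name, storage size, and share name are placeholders of mine; the claim name must match the dags_volume_claim setting shown in the next snippet, and the secret is the one from step 1:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: airflow-dags-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # static binding, no dynamic provisioning
  azureFile:
    secretName: azure-secret    # the secret holding the storage account credentials
    shareName: <your-file-share-name>
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags            # must match dags_volume_claim in airflow.cfg
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi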

Now that the DAGs are showing up in the Azure File share, you need to adjust the Airflow settings. I store my DAGs in the pod in the folder “/usr/local/airflow/DAGs”. This folder is mounted into the master pod from the file share, but for things to work correctly it must also be mounted into each worker pod. If you look at the airflow.cfg, notice these settings under the [kubernetes] section.

dags_in_image = False #The worker will get the mount location for the dags
#dags_volume_subpath = This line is commented out because the mount folder is the same as the dag folder
dags_volume_claim = airflow-dags #put your claim name here (this must match your airflow-pvc.yaml file)
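The snippet above only covers the DAG mount; the KubernetesExecutor also reads the worker pod image from the same [kubernetes] section. These keys exist in Airflow 1.10, but the values below (registry, tag, namespace) are placeholders of mine, not the article’s:

worker_container_repository = <your-registry>/airflow
worker_container_tag = 1.10.10
namespace = k8s-tasks
delete_worker_pods = True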

Writing DAGs: check out this handy-dandy guide I wrote on the art of writing DAGs here. You can also clone this repo and download the two DAGs, template_dag.py and gcp_dag.py, here.
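One KubernetesExecutor-specific trick worth knowing when writing DAGs: in Airflow 1.10, each task can pass an executor_config that shapes its worker pod. The DAG, task, and resource values below are illustrative, not taken from the repo:

from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def crunch_numbers():
    print("doing the heavy lifting")

with DAG("resource_hungry_dag", start_date=datetime(2020, 1, 1), schedule_interval=None, catchup=False) as dag:
    # Ask the KubernetesExecutor for a larger worker pod for this one task
    heavy_task = PythonOperator(
        task_id="crunch_numbers",
        python_callable=crunch_numbers,
        executor_config={
            "KubernetesExecutor": {
                "request_memory": "1Gi",
                "request_cpu": "500m",
                "limit_memory": "2Gi",
                "limit_cpu": "1",
            }
        },
    )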

STEP 3: Logging

For every task run, Airflow creates a log that helps the user debug. Here, we are storing the logs in the “/usr/local/airflow/logs” folder. This is such a critical piece. I’ve had so many issues with logging (the dreaded “*** Log file does not exist” comes to mind), and I swear every time I’d get this error, I’d die a little (a lot, actually) inside and get heart palpitations. But not to worry, I’ve got your back! There are two options when it comes to Kubernetes.

  1. Using a PVC (Persistent Volume Claim) on a Kubernetes cluster
  2. Remote logging

Since we’ve already gone through the process of using a PVC, I will show you how to use remote logging with GCP (Google Cloud Platform). Create a bucket using the Google console and then a folder inside it called “logs”. Make sure you open up the permissions on your bucket; you can check them out here. You also need to create a Service Account, and the instructions for that can be found here as well (thank goodness GCP has AMAZING instructions!). Download the JSON authentication file and copy its contents into the “airflow-image/config/gcp.json” file.
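For reference, remote logging is switched on in the [core] section of airflow.cfg. A minimal sketch, where the bucket name is a placeholder and the connection ID is the one described in the next paragraph:

[core]
remote_logging = True
remote_base_log_folder = gs://<your-bucket>/logs
remote_log_conn_id = AirflowGCPKey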

You need to add the bucket and folder details to the airflow.cfg, and to the Dockerfile at line 72. Once you do that, you’re golden (kind of). I’ve created a log connection ID with the name “AirflowGCPKey”. This ID is associated with the sensitive details of the GCP connection. You can create it in the UI, or, what I like to do personally, trigger gcp_dag.py from the UI so that it creates the connection automatically (the code for this is here).
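I’m not reproducing gcp_dag.py here, but the heart of such a script is simply inserting a Connection row into Airflow’s metadata database. A rough sketch, assuming the service-account key sits at the gcp.json path baked into the image and the project name is a placeholder:

import json

from airflow import settings
from airflow.models import Connection

# Register a GCP connection called "AirflowGCPKey" pointing at the service-account key in the image
conn = Connection(
    conn_id="AirflowGCPKey",
    conn_type="google_cloud_platform",
    extra=json.dumps({
        "extra__google_cloud_platform__key_path": "/usr/local/airflow/gcp.json",
        "extra__google_cloud_platform__project": "<your-project-name>",
        "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform",
    }),
)

session = settings.Session()
if not session.query(Connection).filter(Connection.conn_id == conn.conn_id).first():
    session.add(conn)
    session.commit()
session.close()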

RUN pip install apache-airflow[gcp] apache-airflow[gcp_api]
RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN apt-get install gnupg -y
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update && apt-get install google-cloud-sdk -y
RUN gcloud auth activate-service-account <insert your service account> --key-file=/usr/local/airflow/gcp.json --project=<your project name>

#kubernetes #docker #airflow #data-science
