Ella Windler

How to Perform Bulk Deletions Of Kubernetes Resources

In this Kubernetes tutorial, you'll learn how to perform bulk deletions of Kubernetes resources. Kubernetes makes it easy to create many resources at once: the kubectl apply -f filename.yaml command creates all the resources defined in a compound YAML file. But how do you then delete multiple resources without specifying them individually?

In this post, I show you how to perform bulk deletions of Kubernetes resources.

Example Kubernetes deployment

Let's take a look at a typical YAML file describing a Kubernetes deployment and service:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

When this YAML is saved to a file called nginx.yaml, the resources are created with the following command:

kubectl apply -f nginx.yaml
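
If the apply succeeds, kubectl prints a confirmation for each resource it created, similar to:

service/my-nginx-svc created
deployment.apps/my-nginx created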

You can then view the new resources created with the following commands:

kubectl get pods
kubectl get deployments
kubectl get services

You'll see that three pods, one deployment, and one service have been created. The pods aren't directly defined in the YAML file; they are created by the deployment, with three pods created because the replicas property is set to 3.
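
For example, kubectl get pods returns output similar to the following, although the generated pod names and ages will differ in your cluster:

NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-6595874d85-88jlr   1/1     Running   0          30s
my-nginx-6595874d85-9w52c   1/1     Running   0          30s
my-nginx-6595874d85-dpzds   1/1     Running   0          30s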

Deleting resources from a file

The easiest way to delete these resources is to use the delete command and pass the same file that was used to initially create the resources:

kubectl delete -f nginx.yaml

If you re-run the kubectl get commands above, you'll see that the pods, deployment, and service have been deleted. Because the pods are managed by the deployment, deleting the deployment also deletes the pods.
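
For example, listing the pods now reports that nothing was found (the exact wording varies slightly between kubectl versions):

kubectl get pods

No resources found in default namespace.

To follow along with the next sections, recreate the resources by re-running kubectl apply -f nginx.yaml.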

Manually deleting resources

To manually delete specific types of resources, the kubectl delete command accepts an --all flag that deletes every resource of the type you specify. For example, the following command deletes all the services:

kubectl delete --all services

You can confirm the services are deleted with the command:

kubectl get services

This command deletes all the pods:

kubectl delete --all pods

The output of the command looks something like this:

pod "my-nginx-6595874d85-88jlr" deleted
pod "my-nginx-6595874d85-9w52c" deleted
pod "my-nginx-6595874d85-dpzds" deleted

However, something interesting happens when you confirm the pods are deleted. Run the following command to list any pods:

kubectl get pods

Notice that there are still three pods, with the output looking something like this:

NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-6595874d85-2j4g8   1/1     Running   0          76s
my-nginx-6595874d85-4vrfb   1/1     Running   0          76s
my-nginx-6595874d85-4wj9p   1/1     Running   0          76s

If you look closely, the pod names shown by the kubectl get pods command are different from those returned by the kubectl delete --all pods command. This is because the pods are managed by the deployment, and when the deployment sees that the pods it was managing have been deleted, it creates new pods to satisfy its replica count.

Deleting pods that are managed by a deployment simply causes them to be recreated, which is useful if you want to force the pods to restart. The only way to permanently remove the pods is to delete their parent deployment. This is done with the command:

kubectl delete --all deployments

After the deployment is deleted, there are no deployments or pods left.
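
You can confirm this by querying both resource types at once, since kubectl accepts a comma-separated list of types; the output should report that nothing was found:

kubectl get deployments,pods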

Deleting namespaces

Namespaces are a convenient way to group related resources. Create a new namespace called foo with the command:

kubectl create namespace foo

Then create the NGINX resources in the new namespace with the command:

kubectl apply -f nginx.yaml -n foo

List the resources with the commands:

kubectl get pods -n foo
kubectl get deployments -n foo
kubectl get services -n foo
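
Alternatively, you can list everything in the namespace with a single command by using the all alias (covered in more detail below):

kubectl get all -n foo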

Then delete the namespace with the command:

kubectl delete namespace foo

This results in the namespace, and all the resources contained in it, being deleted.
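
Namespace deletion can take a few moments while Kubernetes finalizes the resources inside it. Once it completes, querying the namespace directly returns an error similar to:

kubectl get namespace foo

Error from server (NotFound): namespaces "foo" not found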

Shorthand "all" resource

You can pass all for the resource type when calling kubectl to reference a common subset of Kubernetes resource types. So the following command deletes the service, deployment, and pods:

kubectl delete all --all

The all type includes:

  • pod
  • service
  • daemonset
  • deployment
  • replicaset
  • statefulset
  • job
  • cronjob
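
Note that all is a convenience alias rather than a true "everything" selector; it does not cover resource types such as secrets, config maps, or persistent volume claims. To check exactly which types the alias maps to on your own cluster, recent kubectl versions can list them (assuming your version supports the --categories flag):

kubectl api-resources --categories=all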

Deleting resources matching a label

Labels enrich resources with metadata, often describing things like the resource's purpose, environment, and version. You can select resources based on these labels when deleting them, which lets you selectively delete groups of resources.

The following command deletes deployments with the label called app set to nginx:

kubectl delete deployments -l app=nginx

Likewise, you can delete the services with the same label:

kubectl delete service -l app=nginx
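
You can also combine several resource types with a label selector in a single call, because kubectl accepts a comma-separated list of types. For example, the following command deletes both the deployment and the service carrying the app=nginx label in one step:

kubectl delete deployments,services -l app=nginx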

Dry runs

Bulk deletion of resources is convenient, but dangerous. Fortunately, kubectl has a --dry-run argument that lets you see which resources would be deleted without actually deleting them. Newer kubectl versions require a value for --dry-run, typically client. The following command previews the resources that would be matched by the all resource type:

kubectl delete all --all --dry-run=client
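
The output mirrors a real delete, with each line marked as a dry run so you can see that nothing was actually removed. With the NGINX resources from this post in place, it looks something like this (the generated names will differ):

pod "my-nginx-6595874d85-2j4g8" deleted (dry run)
pod "my-nginx-6595874d85-4vrfb" deleted (dry run)
pod "my-nginx-6595874d85-4wj9p" deleted (dry run)
service "my-nginx-svc" deleted (dry run)
deployment.apps "my-nginx" deleted (dry run)
replicaset.apps "my-nginx-6595874d85" deleted (dry run)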

Conclusion

Bulk deleting resources is easy with kubectl, and in this post you learned how to delete resources:

  • Defined in a YAML file
  • Matching a single resource type
  • Grouped in the all resource type
  • Contained in a namespace
  • With matching labels

You also learned how to use the --dry-run argument to preview any resources that would be deleted.

Original article sourced at: https://octopus.com

#kubernetes 
