Oral Brekke

How to Test ETag Browser Caching with cURL Requests

Recently I’ve been playing around with Netlify, and as a result I’m becoming more familiar with the caching strategies commonly used by content delivery networks (CDNs). One such strategy makes use of ETag identifiers for web resources.

In short, an ETag identifier is a value, typically a hash, that represents the version of a particular web resource. The browser caches the resource along with its ETag value, and that value is later used to determine whether the cached resource has changed remotely.

We’re going to explore how to simulate the requests that the browser makes when working with ETag identifiers, but using simple cURL requests instead.

To get started, we’re going to make a request for a resource:

$ curl -I https://www.thepolyglotdeveloper.com/css/custom.min.css

HTTP/2 200 
accept-ranges: bytes
cache-control: public, max-age=0, must-revalidate
content-length: 7328
content-type: text/css; charset=UTF-8
date: Wed, 04 Sep 2019 00:41:04 GMT
strict-transport-security: max-age=31536000
etag: "018b8b0ecb632aab770af328f043b119-ssl"
age: 0
server: Netlify
x-nf-request-id: 65a8e1aa-03a0-4b6c-9f46-51aba795ad83-921013

In the above request, the -I flag retrieves only the header information from the response. For this tutorial, the body of the response isn’t important to us.

Take note of the cache-control and the etag headers as well as the response code.

In the scenario of Netlify, the cache-control header tells the browser it may cache the resource, but never to trust that cache without checking: max-age=0, must-revalidate means the client always revalidates before using its copy, so it always ends up with the latest resource. The etag header represents the version of the resource and is sent with future requests. If the server says the etag hasn’t changed between requests, the response will have a 304 code and the cached resource will be used instead.

So let’s check to see if the resource has changed with cURL:

$ curl -I -H 'If-None-Match: "018b8b0ecb632aab770af328f043b119-ssl"' https://www.thepolyglotdeveloper.com/css/custom.min.css

HTTP/2 304 
date: Wed, 04 Sep 2019 00:53:24 GMT
etag: "018b8b0ecb632aab770af328f043b119-ssl"
cache-control: public, max-age=0, must-revalidate
server: Netlify
x-nf-request-id: eca29310-c9bf-4742-87e1-3412e8852381-2165939

In this new request to the same resource, the If-None-Match header is included, and its value is the etag hash from the previous response.

Notice that this time around the response status code was 304, as anticipated. Had the resource changed, the server would have returned a 200 response with the full body and a new etag hash.
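
If you’d rather not copy the hash by hand, the round trip can be scripted. Here’s a minimal shell sketch, assuming grep, cut, and tr are available; it captures the quoted etag value (carriage return stripped) into a variable and replays it:

$ etag=$(curl -sI https://www.thepolyglotdeveloper.com/css/custom.min.css | grep -i '^etag:' | cut -d' ' -f2 | tr -d '\r')
$ curl -I -H "If-None-Match: $etag" https://www.thepolyglotdeveloper.com/css/custom.min.css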

Interacting with Compressed Cached Resources

If you look at your browser’s network inspector you might notice that etag hashes for resources have a -df value appended to them. For example, for the same resource, my browser is showing the following:

018b8b0ecb632aab770af328f043b119-ssl-df

While similar, it isn’t quite the same as the etag hash that came back with the previous cURL requests. Try running a cURL request with the above etag value:

$ curl -I -H 'If-None-Match: "018b8b0ecb632aab770af328f043b119-ssl-df"' https://www.thepolyglotdeveloper.com/css/custom.min.css

HTTP/2 200 
accept-ranges: bytes
cache-control: public, max-age=0, must-revalidate
content-length: 7328
content-type: text/css; charset=UTF-8
date: Wed, 04 Sep 2019 01:03:13 GMT
strict-transport-security: max-age=31536000
etag: "018b8b0ecb632aab770af328f043b119-ssl"
age: 0
server: Netlify
x-nf-request-id: 2734ffab-c611-4fc9-841e-460f172aa3b4-1604468

The response was not a 304 code because the -df suffix identifies the compressed version of the resource, while our cURL requests so far have been for the uncompressed version.

A Support Engineer at Netlify pointed this difference out to me in this forum thread.

In most circumstances the web browser will include the appropriate header information to work with compressed resources, so in cURL we have to do something different.

To get beyond this with cURL, the following request would work:

$ curl --compressed -I -H 'If-None-Match: "018b8b0ecb632aab770af328f043b119-ssl-df"' https://www.thepolyglotdeveloper.com/css/custom.min.css

HTTP/2 304 
date: Wed, 04 Sep 2019 01:07:36 GMT
etag: "018b8b0ecb632aab770af328f043b119-ssl-df"
cache-control: public, max-age=0, must-revalidate
server: Netlify
vary: Accept-Encoding
x-nf-request-id: 65a8e1aa-03a0-4b6c-9f46-51aba795ad83-1301670

Notice in the above request that we’re now using the --compressed flag with cURL. As a result, we get a 304 response indicating that the resource hasn’t changed and we should use the locally cached copy.

Alternatively, we could execute the following cURL request:

$ curl -I -H 'If-None-Match: "018b8b0ecb632aab770af328f043b119-ssl-df"' -H 'Accept-Encoding: gzip, deflate, br' https://www.thepolyglotdeveloper.com/css/custom.min.css

HTTP/2 304 
date: Wed, 04 Sep 2019 01:12:34 GMT
etag: "018b8b0ecb632aab770af328f043b119-ssl-df"
cache-control: public, max-age=0, must-revalidate
server: Netlify
vary: Accept-Encoding
x-nf-request-id: eca29310-c9bf-4742-87e1-3412e8852381-2432816

Instead of using the --compressed flag, we are including an Accept-Encoding header ourselves.

Again, the information around compressed versions was provided to me by Luke Lawson from Netlify in this forum thread.

Conclusion

You just saw how to simulate, with cURL, the same caching that happens in the web browser. Since I’m new to content delivery networks (CDNs) and how they handle caching, this was very useful for testing how caching works with the etag hash for any given resource. A 304 response always arrives faster and with a smaller payload than a 200 response, which saves bandwidth and improves performance without sacrificing the freshness of content.

In theory, the CDN maintains versioning information for a given resource and is therefore able to validate etag values for freshness; it is not up to the browser to determine whether its etag is stale.

Original article source at: https://www.thepolyglotdeveloper.com/

#caching #test #requests

Rupert Beatty

Just: Swift HTTP for Humans

Just

Just is a client-side HTTP library inspired by python-requests - HTTP for Humans.

Features

Just lets you do the following effortlessly (a short sketch of a couple of these follows the list):

  • URL queries
  • custom headers
  • form (x-www-form-encoded) / JSON HTTP body
  • redirect control
  • multipart file upload along with form values.
  • basic/digest authentication
  • cookies
  • timeouts
  • synchronous / asynchronous requests
  • upload / download progress tracking for asynchronous requests
  • link headers
  • friendly accessible results
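
For instance, a GET with URL query parameters and custom headers might look like the following. This is a sketch based on Just's params and headers arguments; httpbin.org is just a convenient echo service:

//  query parameters and custom headers on a GET request
let r = Just.get(
    "http://httpbin.org/get",
    params: ["page": 3],
    headers: ["Accept": "application/json"]
)
if r.ok, let json = r.json as? [String: Any] {
    print(json)  // httpbin echoes the query and headers back
}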

Use

The simplest request with Just looks like this:

//  A simple get request
Just.get("http://httpbin.org/get")

The next example shows how to upload a file along with some data:

//  talk to registration end point
let r = Just.post(
    "http://justiceleauge.org/member/register",
    data: ["username": "barryallen", "password":"ReverseF1ashSucks"],
    files: ["profile_photo": .url(fileURLWithPath:"flash.jpeg", nil)]
)

if r.ok { /* success! */ }

Here's the same example done asynchronously:

//  talk to registration end point
Just.post(
    "http://justiceleauge.org/member/register",
    data: ["username": "barryallen", "password":"ReverseF1ashSucks"],
    files: ["profile_photo": .url(fileURLWithPath:"flash.jpeg", nil)]
) { r in
    if r.ok { /* success! */ }
}

Read Getting Started on the web or in this playground to learn more!

Install

Here are some ways to leverage Just.

Xcode

Add https://github.com/dduan/Just.git the usual way.

Swift Package Manager

Add the following to your dependencies:

.package(url: "https://github.com/dduan/Just.git",  from: "0.8.0")

… and "Just" to your target dependencies.

Carthage

Include the following in your Cartfile:

github "dduan/Just"

Just includes dynamic framework targets for both iOS and OS X.

CocoaPods

The usual way:

platform :ios, '8.0'
use_frameworks!

target 'MyApp' do
  pod 'Just'
end

Manual

Drop Just.xcodeproj into your project navigator. Under the General tab of your project settings, use the plus sign to add Just.framework to Linked Frameworks and Libraries. Make sure to include the correct version for your target's platform.

It's also common to add Just as a git submodule to your project's repository:

cd path/to/your/project
git submodule add https://github.com/dduan/Just.git

Source File

Put Just.swift directly into your project. Alternatively, put it in the Sources folder of a playground. (The latter makes a fun way to explore the web.)

Contribute

Pull requests are welcome. Here are some tips for code contributors:

Work in Just.xcworkspace.

The tests for link headers rely on GitHub APIs, which have a low per-hour rate limit. To overcome this, you can edit the Xcode build schemes and add an environment variable GITHUB_TOKEN. Learn more about personal tokens here.

For Xcode rebels, check out the Makefile.

HTML documentation pages are generated by the literate programming tool docco.

Download Details:

Author: dduan
Source Code: https://github.com/dduan/Just 
License: MIT license

#swift #http #requests 

Lawrence Lesch

Tor-request: Light Tor Proxy Wrapper for Request Library

Tor-request - Simple HTTP client through Tor network  

Simple to use

var tr = require('tor-request');
tr.request('https://api.ipify.org', function (err, res, body) {
  if (!err && res.statusCode == 200) {
    console.log("Your public (through Tor) IP is: " + body);
  }
});

Demo

http://tor.jin.fi/

About

A very simple and light wrapper around the fantastic request library to send http(s) requests through Tor.

How

Tor communicates through the SOCKS Protocol so we need to create and configure appropriate SOCKS Agent objects for Node's http and https core libraries using the socks library.

Installation

from npm

npm install tor-request

from source

git clone https://github.com/talmobi/tor-request
cd tor-request
npm install

Requirements

A Tor client.

It's highly recommended you run the official client yourself, either locally or otherwise. Tor is available for most systems, often just a quick one-line install away.

On Debian/Ubuntu you can install and run a relatively up-to-date Tor with:

apt install tor # should auto run as daemon after install

A miscellaneous Linux command for running Tor as a daemon with --RunAsDaemon 1, thanks @knoxcard:

/usr/bin/tor --RunAsDaemon 1

On macOS you can install it with Homebrew:

brew install tor
tor # run tor

If you'd like to run it as a background process, you can add & at the end of the command: tor &. I like to have it running in a separate terminal window/tab/tmux/screen during development in order to see what's going on.

On Windows, download the Tor expert bundle (not the browser), unzip it, and run tor.exe inside the Tor/ directory.

download link: Windows Expert Bundle

./Tor/tor.exe # --default-torrc PATH_TO_TORRC

See TorProject.org for detailed installation guides for all platforms.

The Tor client by default runs on port 9050. This is also the default address tor-request uses. You can change it if needed.

tr.setTorAddress(ipaddress, port); // "127.0.0.1" and 9050 by default

(Optional) Configuring Tor, enabling the ControlPort

You need to enable the Tor ControlPort if you want to programmatically refresh the Tor session (i.e., get a new proxy IP address) without restarting your Tor client.

Configure Tor by editing the torrc file, usually located at /etc/tor/torrc, /lib/etc/tor/torrc, ~/.torrc, or /usr/local/etc/tor/torrc. Alternatively, you can supply the path yourself with the --default-torrc PATH command-line argument. See Tor Command-Line Options.

Generate the hash password for the torrc file by running tor --hash-password SECRETPASSWORD.

tor --hash-password giraffe

The last line of the output contains the hash password that you copy and paste into torrc:

Jul 21 13:08:50.363 [notice] Tor v0.2.6.10 (git-58c51dc6087b0936) running on Darwin with Libevent 2.0.22-stable, OpenSSL 1.0.2h and Zlib 1.2.5.
Jul 21 13:08:50.363 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
16:AEBC98A6777A318660659EC88648EF43EDACF4C20D564B20FF244E81DF

Copy the generated hash password and add it to your torrc file:

# sample torrc file
ControlPort 9051
HashedControlPassword 16:AEBC98A6777A318660659EC88648EF43EDACF4C20D564B20FF244E81DF

Lastly, tell tor-request the password to use:

var tr = require('tor-request')
tr.TorControlPort.password = 'giraffe'
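
With the ControlPort configured, renewing the Tor session from code might look like this. A sketch using the newTorSession helper documented below; api.ipify.org simply echoes your public IP:

var tr = require('tor-request');
tr.TorControlPort.password = 'giraffe';

// request a new Tor circuit, then check the new exit IP
tr.newTorSession(function (err) {
  if (err) return console.error(err);
  tr.request('https://api.ipify.org', function (err, res, body) {
    if (!err && res.statusCode == 200) {
      console.log('New public (through Tor) IP: ' + body);
    }
  });
});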

API

// index.js
module.exports = {
  /**
   * This is a light wrapper function around the famous request nodeJS library, routing it through
   * your Tor client.
   *
   * Use it as you would use the request library - see their superb documentation.
   * https://github.com/request/request
   */
  request: function (url || opts, function (err, res, body))
  
  /**
   * @param {string} ipaddress - ip address of tor server ("127.0.0.1" by default)
   * @param {number} port - port of the tor server (by default tor runs on port 9050)
   */
  setTorAddress: function (ipaddress, port) // defaults to "127.0.0.1" on port 9050
  
  /**
   * Helper object to communicate with the tor ControlPort. Requires an enabled ControlPort on tor.
   */
  TorControlPort: {
    password: "", // default ControlPort password
    host: "127.0.0.1", // default address
    port: 9051, // default ControlPort
    
    /**
     * @param {Array.string} commands - signals that are sent to the ControlPort
     */
    send: function (commands, done(err, data))
  }
  
  /**
   * A set of predefined TorControlPort commands to request and verify tor for a new session (get a new ip to use).
   *
   * @param {function} done - the callback function to tell you when the process is done
   * @param {object} err - null if tor session renewed successfully
   */
  newTorSession: function ( done(err) ) // clears and renews the Tor session (i.e., you get a new IP)
  
}

Custom headers

https://github.com/talmobi/tor-request/issues/13

Refer to https://github.com/request/request#custom-http-headers to specify your own headers (like User-Agent).

Basically:

tr.request({ url: 'google.com', headers: { 'user-agent': 'giraffe' }}, function ( err, response, body ) { /*...*/ })

Request Pipe Streaming

var fs = require('fs');
var request = require('request');

request({
  url: 'https://www.google.com.np/images/srpr/logo11w.png',
  strictSSL: true,
  agentClass: require('socks5-https-client/lib/Agent'),
  agentOptions: {
    socksHost: 'my-tor-proxy-host', // Defaults to 'localhost'.
    socksPort: 9050, // Defaults to 1080.
    // Optional credentials
    socksUsername: 'proxyuser',
    socksPassword: 'p@ssw0rd',
  }
}, function (err, res) {
  console.log(err || res.body);
}).pipe(fs.createWriteStream('doodle.png'));

Test

The test first uses the original request library to connect to http://api.ipify.org, which returns your IP. It then makes a few additional requests, now through tor-request, and makes sure the IPs are different (i.e., the requests went through Tor).

mocha test/test.js

Download Details:

Author: talmobi 
Source Code: https://github.com/talmobi/tor-request 
License: MIT

#javascript #requests 

Brook Hudson

Ruby Fog: The Ruby Cloud Services Library

fog is the Ruby cloud services library, top to bottom:

  • Collections provide a simplified interface, making clouds easier to work with and switch between.
  • Requests allow power users to get the most out of the features of each individual cloud.
  • Mocks make testing and integrating a breeze.   

Dependency Notice

Currently all fog providers are getting separated into metagems to lower the load time and dependency count.

If there's a metagem available for your cloud provider, e.g. fog-aws, you should be using it instead of requiring the full fog collection to avoid unnecessary dependencies.

'fog' should be required explicitly only if the provider you use doesn't yet have a metagem available.

Getting Started

The easiest way to learn fog is to install the gem and use the interactive console. Here is an example of wading through server creation for Amazon Elastic Compute Cloud:

$ sudo gem install fog
[...]

$ fog

  Welcome to fog interactive!
  :default provides [...]

>> server = Compute[:aws].servers.create
ArgumentError: image_id is required for this operation

>> server = Compute[:aws].servers.create(:image_id => 'ami-5ee70037')
<Fog::AWS::EC2::Server [...]>

>> server.destroy # cleanup after yourself or regret it, trust me
true

Ruby version

Fog requires Ruby 2.0.0 or later.

Ruby 1.8 and 1.9 support was dropped in fog-v2.0.0 as a backwards incompatible change. Please use the later fog 1.x versions if you require 1.8.7 or 1.9.x support.

Collections

A high level interface to each cloud is provided through collections, such as images and servers. You can see a list of available collections by calling collections on the connection object. You can try it out using the fog command:

>> Compute[:aws].collections
[:addresses, :directories, ..., :volumes, :zones]

Some collections are available across multiple providers:

  • compute providers have flavors, images and servers
  • dns providers have zones and records
  • storage providers have directories and files

Collections share basic CRUD type operations, such as:

  • all - fetch every object of that type from the provider.
  • create - initialize a new record locally and a remote resource with the provider.
  • get - fetch a single object by its identity from the provider.
  • new - initialize a new record locally, but do not create a remote resource with the provider.

As an example, we'll try initializing and persisting a Rackspace Cloud server:

require 'fog'

compute = Fog::Compute.new(
  :provider           => 'Rackspace',
  :rackspace_api_key  => key,
  :rackspace_username => username
)

# boot a gentoo server (flavor 1 = 256, image 3 = gentoo 2008.0)
server = compute.servers.create(:flavor_id => 1, :image_id => 3, :name => 'my_server')
server.wait_for { ready? } # give server time to boot

# DO STUFF

server.destroy # cleanup after yourself or regret it, trust me

Models

Many of the collection methods return individual objects, which also provide common methods (sketched briefly after the list):

  • destroy - will destroy the persisted object from the provider
  • save - persist the object to the provider
  • wait_for - takes a block and waits for either the block to return true for the object or for a timeout (defaults to 10 minutes)
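
Putting those together on a compute server object, the flow might look like this. A sketch, reusing the compute connection from above; server_id stands in for a real identifier:

# fetch a single server by identity, then exercise the common model methods
server = compute.servers.get(server_id)
server.wait_for { ready? } # block until ready, or time out (10 minutes by default)
server.save                # persist local changes to the provider
server.destroy             # tear down the remote resource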

Mocks

As you might imagine, testing code using Fog can be slow and expensive, constantly turning on and shutting down instances. Mocking allows you to skip this overhead by providing an in-memory representation of resources as you make requests. Enabling mocking is easy: before you run other commands, simply run:

Fog.mock!

Then proceed as usual. If you run into unimplemented mocks, fog will raise an error, and as always, contributions are welcome!

Requests

Requests allow you to dive deeper when the models just can't cut it. You can see a list of available requests by calling #requests on the connection object.

For instance, ec2 provides methods related to reserved instances that don't have any models (yet). Here is how you can lookup your reserved instances:

$ fog
>> Compute[:aws].describe_reserved_instances
#<Excon::Response [...]>

It will return an Excon response, which has body, headers, and status; both body and headers are nice hashes.

Go forth and conquer

Play around and use the console to explore, or check out fog.io and the provider documentation for more details and examples. Once you are ready to start scripting fog, here is a quick hint on how to make connections without the command-line tool to help you.

# create a compute connection
compute = Fog::Compute.new(:provider => 'AWS', :aws_access_key_id => ACCESS_KEY_ID, :aws_secret_access_key => SECRET_ACCESS_KEY)
# compute operations go here

# create a storage connection
storage = Fog::Storage.new(:provider => 'AWS', :aws_access_key_id => ACCESS_KEY_ID, :aws_secret_access_key => SECRET_ACCESS_KEY)
# storage operations go here

geemus says: "That should give you everything you need to get started, but let me know if there is anything I can do to help!"

Versioning

The fog library aims to adhere to Semantic Versioning 2.0.0, although that specification does not address the challenges of multi-provider libraries. Semantic versioning is only guaranteed for the common API, not any provider-specific extensions. You may also need to update your configuration from time to time (even between fog releases) as providers update or deprecate services.

However, we still aim for forwards compatibility within Fog major versions. As a result of this policy, you can (and should) specify a dependency on this gem using the Pessimistic Version Constraint with two digits of precision. For example:

spec.add_dependency 'fog', '~> 1.0'

This means your project is compatible with fog 1.0 up to (but not including) 2.0. You can also set a higher minimum version:

spec.add_dependency 'fog', '~> 1.16'

Getting Help

Contributing

Please refer to CONTRIBUTING.md.

License

Please refer to LICENSE.md.


Author: fog
Source code: https://github.com/fog/fog
License: MIT license

#ruby  #ruby-on-rails 

Lawson Wehner

HTTP Requests Package inspired By Python Requests Module

HTTP Requests

An HTTP requests package inspired by Python's Requests module, used to make HTTP requests and get responses. You can use it in REST APIs.

Install

Add this to your package's pubspec.yaml file:

dependencies:
  http_requests: ^1.2.0

Usage

Start by importing the library

import 'package:http_requests/http_requests.dart';

Let's make a simple HTTP request

Response r = await HttpRequests.get('https://google.com');
print(r.status);

Some Methods

Just like in Python's Requests module, the Response object has this functionality (a short usage sketch follows the list):

  • r.status - the response status code
  • r.url - the url in the request
  • r.headers - the response headers
  • r.success - a boolean; true indicates that the request was a great success
  • r.hasError - a boolean; true indicates that the request was not a great success
  • r.bytes - returns the response body as a list of bytes
  • r.contentLength - returns the response content length
  • r.contentType - returns the response content type, e.g. application/json
  • r.isRedirect - returns whether the response is a redirect, true or false
  • r.content - returns the response body as a string (decoded as UTF-8)
  • r.response - returns the response body as a string (default decoding, without forcing UTF-8)
  • r.json - decodes the response body and returns the result (dynamic type)
  • r.throwForStatus() - will throw an exception if the response status code is not a great success.
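
For example (a sketch; httpbin.org/json is just a convenient endpoint that returns JSON):

import 'package:http_requests/http_requests.dart';

void main() async {
  Response r = await HttpRequests.get('https://httpbin.org/json');
  if (r.success) {
    print(r.status);      // e.g. 200
    print(r.contentType); // e.g. application/json
    print(r.json);        // decoded body (dynamic)
  }
}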

Installing

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add http_requests

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  http_requests: ^1.4.0

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:http_requests/http_requests.dart';

example/example.dart

import 'package:http_requests/http_requests.dart';

void main() async {
  // Get Method
  Response r = await HttpRequests.get("https://secanonm.in");
  print(r);

  // Post Method With Headers And Data

  Map header = {
    'Host': 'www.secanonm.in',
    'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0',
    'Accept':
        'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive'
  };
  Map query = {"name": "kiyan"};
  Response req = await HttpRequests.post("https://secanonm.in",
      headers: header, data: query);
  print(req.status);
}

Author: Secanonm
Source Code: https://github.com/secanonm/http_requests 
License: MIT license

#flutter #dart #http #requests 

Annie Emard

KubeLinter: Static Analysis Tool That Checks Kubernetes YAML Files

Static analysis for Kubernetes

What is KubeLinter?

KubeLinter analyzes Kubernetes YAML files and Helm charts, and checks them against a variety of best practices, with a focus on production readiness and security.

KubeLinter runs sensible default checks, designed to give you useful information about your Kubernetes YAML files and Helm charts. This is to help teams check early and often for security misconfigurations and DevOps best practices. Some common examples of these include running containers as a non-root user, enforcing least privilege, and storing sensitive information only in secrets.

KubeLinter is configurable, so you can enable and disable checks, as well as create your own custom checks, depending on the policies you want to follow within your organization.
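
For example, a config file passed with the --config flag might look like the following. This is a sketch based on KubeLinter's documented configuration format; the check names are illustrative:

# .kube-linter.yaml
checks:
  # enable a check that is off by default
  include:
    - "required-label-owner"
  # disable a default check
  exclude:
    - "no-read-only-root-fs"

You would then run kube-linter lint --config .kube-linter.yaml /path/to/your/yaml.yaml.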

When a lint check fails, KubeLinter reports recommendations for how to resolve any potential issues and returns a non-zero exit code.

Documentation

Visit https://docs.kubelinter.io for detailed documentation on installing, using and configuring KubeLinter.

Installing KubeLinter

Using Go

To install using Go, run the following command:

GO111MODULE=on go install golang.stackrox.io/kube-linter/cmd/kube-linter

Otherwise, download the latest binary from Releases and add it to your PATH.

Using Homebrew for macOS or LinuxBrew for Linux

To install using Homebrew or LinuxBrew, run the following command:

brew install kube-linter

Building from source

Prerequisites

  • Make sure that you have installed Go prior to building from source.

Building KubeLinter

Installing KubeLinter from source is as simple as following these steps:

First, clone the KubeLinter repository.

git clone git@github.com:stackrox/kube-linter.git

Then, compile the source code. This will create the kube-linter binary files for each platform and place them in the .gobin folder.

make build

Finally, you are ready to start using KubeLinter. Verify your version to ensure you've successfully installed KubeLinter.

.gobin/kube-linter version

Testing KubeLinter

There are several layers of testing. Each layer is expected to pass.

go unit tests:

make test

end-to-end integration tests:

make e2e-test

and finally, end-to-end integration tests using bats-core:

make e2e-bats

Verifying KubeLinter images

KubeLinter images are signed by cosign. We recommend verifying the image before using it.

Once you've installed cosign, you can use the KubeLinter public key to verify the KubeLinter image with:

cat kubelinter-cosign.pub
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEl0HCkCRzYv0qH5QiazoXeXe2qwFX
DmAszeH26g1s3OSsG/focPWkN88wEKQ5eiE95v+Z2snUQPl/mjPdvqpyjA==
-----END PUBLIC KEY-----


cosign verify --key kubelinter-cosign $IMAGE_NAME

KubeLinter also provides cosign keyless signatures.

You can verify the KubeLinter image with:

# NOTE: Keyless signatures are NOT PRODUCTION ready.

COSIGN_EXPERIMENTAL=1 cosign verify $IMAGE_NAME

Using KubeLinter

Local YAML Linting

In its most basic form, running KubeLinter to lint your YAML files requires only two steps.

Locate the YAML file you'd like to test for security and production readiness best practices:

Run the following command:

kube-linter lint /path/to/your/yaml.yaml

Example

Consider the following sample pod specification file pod.yaml. This file has two production readiness issues and one security issue:

Security Issue:

  1. The container in this pod is not running with a read-only root filesystem, which could allow it to write to the root filesystem.

Production readiness:

  1. The container's CPU limits are not set, which could allow it to consume excessive CPU.

  2. The container's memory limits are not set, which could allow it to consume excessive memory.

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false

Copy the YAML above to pod.yaml and lint this file by running the following command:

kube-linter lint pod.yaml

KubeLinter runs its default checks and reports recommendations. Below is the output from our previous command.

pod.yaml: (object: <no namespace>/security-context-demo /v1, Kind=Pod) container "sec-ctx-demo" does not have a read-only root file system (check: no-read-only-root-fs, remediation: Set readOnlyRootFilesystem to true in your container's securityContext.)

pod.yaml: (object: <no namespace>/security-context-demo /v1, Kind=Pod) container "sec-ctx-demo" has cpu limit 0 (check: unset-cpu-requirements, remediation: Set your container's CPU requests and limits depending on its requirements. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for more details.)

pod.yaml: (object: <no namespace>/security-context-demo /v1, Kind=Pod) container "sec-ctx-demo" has memory limit 0 (check: unset-memory-requirements, remediation: Set your container's memory requests and limits depending on its requirements. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits for more details.)

Error: found 3 lint errors

To learn more about using and configuring KubeLinter, visit the documentation page.

Mentions/Tutorials

The following are tutorials on KubeLinter written by users. If you have one that you would like to add to this list, please send a PR!

Community

If you would like to engage with the KubeLinter community, including maintainers and other users, you can join the Slack workspace here.

To contribute, check out our contributing guide.

As a reminder, all participation in the KubeLinter community is governed by our code of conduct.

WARNING: Alpha release

KubeLinter is at an early stage of development. There may be breaking changes in the future to the command usage, flags, and configuration file formats. However, we encourage you to use KubeLinter to test your environment YAML files, see what breaks, and contribute.

StackRox

KubeLinter is made with ❤️ by StackRox.

If you're interested in KubeLinter, or in any of the other cool things we do, please know that we're hiring! Check out our open positions. We'd love to hear from you!

Author: stackrox
Source Code: https://github.com/stackrox/kube-linter
License: Apache-2.0 License

#kubernetes #linter #yaml 


How POST Requests with Python Make Web Scraping Easier

When scraping a website with Python, it’s common to use the urllib or the Requests libraries to send GET requests to the server in order to receive its information.

However, you’ll eventually need to send some information to the website yourself before receiving the data you want, maybe because it’s necessary to perform a log-in or to interact somehow with the page.

To execute such interactions, Selenium is a frequently used tool. However, it also comes with some downsides, as it’s a bit slow and can also be quite unstable sometimes. The alternative is to send a POST request containing the information the website needs using the Requests library.

In fact, when compared to Requests, Selenium becomes a very slow approach since it does the entire work of actually opening your browser to navigate through the websites you’ll collect data from. Of course, depending on the problem, you’ll eventually need to use it, but for some other situations a POST request may be your best option, which makes it an important tool for your web scraping toolbox.

In this article, we’ll see a brief introduction to the POST method and how it can be implemented to improve your web scraping routines.
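
As a taste of what that looks like in practice, here is a minimal sketch; the login URL and form field names are hypothetical:

import requests

# log in with a POST request, then reuse the authenticated session
session = requests.Session()
login = session.post(
    "https://example.com/login",                      # hypothetical endpoint
    data={"username": "user", "password": "secret"},  # hypothetical form fields
)
login.raise_for_status()

# subsequent requests carry the session cookies set by the login
page = session.get("https://example.com/members-only")
print(page.status_code)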

#python #web-scraping #requests #web-scraping-with-python #data-science #data-collection #python-tutorials #data-scraping

HI Python

Parallel web requests in Python

Performing web requests in parallel improves performance dramatically. The proposed Python implementation uses Queue and Thread to create a simple method that saves a lot of time.

I have recently posted several articles using the Open Trip Planner (OTP) as a source for the analysis of public transport. Trip routing was obtained from OTP through its REST API. OTP was running on the local machine, but it still took a lot of time to make all the required requests. The implementation shown in those articles is sequential; I posted it that way for simplicity, but in other cases I use a parallel implementation. This article shows a parallel implementation for performing a lot of web requests.

Though I have some experience, the tutorials I found were quite difficult to master. This article contains my lessons learned and can be used as a guide for performing parallel web requests.
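
The core pattern might look like this; a sketch, not the article's exact code, with illustrative URLs and a fixed pool of four worker threads draining a shared queue:

import queue
import threading
import requests

urls = queue.Queue()
for u in ["https://example.com/a", "https://example.com/b"]:  # illustrative URLs
    urls.put(u)

results = []  # list.append is atomic in CPython, so this is safe here

def worker():
    while True:
        try:
            url = urls.get_nowait()
        except queue.Empty:
            return  # queue drained, this thread exits
        results.append((url, requests.get(url).status_code))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)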

#python #multithreading #parallel web requests in python #requests #web requests #parallel

Anna Yusef

Essential Of Web scraping: urllib & Requests With Python

Web scraping gives people and businesses a way to understand what can be achieved with a fair amount of data: you can challenge and surpass your competitors just by doing good data analysis and research on data scraped from the web. From an individual perspective, too, if you are looking for a job, automated web scraping can help you gather every job posted on the internet into a spreadsheet, where you can filter them by your skills and experience. Where you once spent hours of manual work to get the information you wanted, it’s now easy to create a web scraping script that works like a charm.

There is so much information on the internet, and new data is generated every second, so manual scraping and research are not feasible. That’s why we need automated web scraping to accomplish our goals.

Web scraping has become an essential part of every business, individual, and even government.

#python frameworks #python web framework #requests #python


How To Create HTTP Requests in .NET Core 3

Do you want to create a web request to access data from the web in a .NET Core application? .NET offers different ways to send web requests. In this video, I’m going to explain how to use the new APIs for the .NET Core platform.

#how #create #http #requests #.net core #.net


Learn How to Scrape Worldometer Live Updates Using BeautifulSoup

This is another entry in the web scraping series. In this tutorial we are going to learn how to scrape Worldometer’s coronavirus live updates and store them in a CSV file using BeautifulSoup and requests.

For those who don’t know Worldometer, it’s one of the trusted providers of real-time updates on coronavirus cases.

To follow this tutorial effectively, you need a bit of understanding of how BeautifulSoup and requests work.

If you’re new, I would recommend checking out A Beginner’s Guide to Web Scraping before completing this tutorial; otherwise, let’s proceed.

#projects #tutorial #beautifulsoup #corona scraping #python requests #python tricks #requests #web scraping #website

Hal Sauer

Learn How to Extract All Links From Any Website in Python

In this tutorial you’re going to learn how to extract all links from a given website or URL using BeautifulSoup and requests.

If you’re new to web scraping, I would recommend starting with A Beginner’s Tutorial to Web Scraping before this one.

Program overview

We will use the requests library to get the raw HTML page from the website, and then use BeautifulSoup to extract all the links from it (see the sketch below).

To complete this tutorial you should have the mentioned libraries installed.
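
The whole idea boils down to a few lines (a sketch; example.com stands in for your target site):

import requests
from bs4 import BeautifulSoup

url = "https://example.com"  # replace with the site you want to crawl
html = requests.get(url).text

soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.find_all("a", href=True)]
print(links)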

#projects #tutorial #beautifulsoup #bs4 #python requests #requests #web crawling #web scraping


A brief Introduction to Web scraping in Python

Web scraping is simply concerned with extracting data from websites.

As a programmer, in many cases you will need to extract data from websites, so web scraping is a skill you need to have.

In this tutorial you’re going to learn how to perform web scraping using the Python programming language. You will work through basic web scraping examples and implement a simple web scraper to scrape quotations from a website.

https://kalebujordan.com/web-scraping-in-python-tutorial/

#python #webscraping #beautifulsoup #requests
