In just two short years, Kubernetes has laid waste to its competitors on the battlefield of container orchestration. Docker Swarm hasn't been a major contender since 2016 and, like AWS, Docker has effectively admitted defeat by pledging K8s support and integration.
Since Kubernetes has skyrocketed to popularity as the container solution of choice, here’s a comprehensive list of all the tools that complement K8s to further enhance your development work.
Kubespray provides a set of Ansible roles for Kubernetes deployment and configuration. Kubespray can use AWS, GCE, Azure, OpenStack or a bare metal Infrastructure as a Service (IaaS) platform. Kubespray is an open-source project with an open development model. The tool is a good choice for people who already know Ansible as there’s no need to use another tool for provisioning and orchestration. Kubespray uses kubeadm under the hood.
Link: https://github.com/kubernetes-incubator/kubespray
Cost: Free
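As a sketch of a typical Kubespray run (the inventory path and host file are illustrative; edit them to match your nodes):

```shell
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt            # installs Ansible and friends
cp -r inventory/sample inventory/mycluster # start from the sample inventory
# Edit inventory/mycluster/hosts.ini with your node IPs, then run the playbook:
ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b
```

Because it is plain Ansible, the same inventory and variable conventions you already use apply here.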
Minikube allows you to install and try out Kubernetes locally. The tool is a good starting point for Kubernetes exploration. Easily launch a single-node Kubernetes cluster inside a virtual machine (VM) on your laptop. Minikube is available for Windows, Linux, and macOS. In just five minutes you will be able to explore Kubernetes' main features. Launch the Minikube dashboard straight out of the box with just one command.
Link: https://github.com/kubernetes/minikube
Cost: Free
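The whole local workflow fits in a handful of commands (assuming Minikube and a supported hypervisor such as VirtualBox are installed):

```shell
minikube start        # boot a single-node cluster in a local VM
kubectl get nodes     # the minikube node should report Ready
minikube dashboard    # open the Kubernetes dashboard in your browser
minikube stop         # shut the VM down, preserving cluster state
minikube delete       # throw the cluster away entirely
```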
Kubeadm has shipped with Kubernetes since version 1.4. The tool helps you bootstrap best-practice Kubernetes clusters on existing infrastructure; it cannot provision that infrastructure for you, though. Its main advantage is the ability to launch a minimum viable Kubernetes cluster anywhere. Add-ons and networking setup are both out of Kubeadm's scope, so you will need to install these manually or with another tool.
Link: https://github.com/kubernetes/kubeadm
Cost: Free
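A minimal bootstrap, sketched (the CIDR shown is the one Flannel expects; the join token and hash are printed by `kubeadm init` and stay as placeholders here):

```shell
# On the control-plane node:
kubeadm init --pod-network-cidr=10.244.0.0/16

# Networking is out of kubeadm's scope, so install a CNI plugin yourself, e.g. Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node, paste the join command that `kubeadm init` printed:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```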
Kops helps you create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters from the command line. Amazon Web Services (AWS) is currently officially supported, GCE is in beta, VMware vSphere is in alpha, and support for other platforms is planned. Kops allows you to control the full Kubernetes cluster lifecycle, from infrastructure provisioning to cluster deletion.
Link: https://github.com/kubernetes/kops
Cost: Free
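A sketch of that lifecycle on AWS (the cluster name and S3 state bucket are placeholders):

```shell
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # where kops keeps cluster state
kops create cluster --name=k8s.example.com --zones=us-east-1a
kops update cluster k8s.example.com --yes           # actually provision the infrastructure
kops validate cluster                               # wait for nodes to come up healthy
kops delete cluster k8s.example.com --yes           # tear everything down again
```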
Bootkube is a great tool for launching self-hosted Kubernetes clusters. It helps you set up a temporary Kubernetes control plane which will operate until the self-hosted control-plane is able to handle requests.
Link: https://github.com/kubernetes-incubator/bootkube
Cost: Free
Kube-AWS is a console tool provided by CoreOS which deploys a fully-functional Kubernetes cluster using AWS CloudFormation. Kube-AWS allows you to deploy a traditional Kubernetes cluster and automatically provision every K8s service with native AWS features (e.g., ELB, S3, and Auto Scaling).
Link: https://github.com/kubernetes-incubator/kube-aws
Cost: Free
SimpleKube is a bash script which allows you to deploy a single-node Kubernetes cluster on a Linux server. While Minikube needs a hypervisor (VirtualBox, KVM), SimpleKube will install all K8s binaries into the server itself. Simplekube is tested on Debian 8/9 and Ubuntu 16.x/17.x. It’s a great tool for giving Kubernetes a first try.
Link: https://github.com/valentin2105/Simplekube
Cost: Free
Juju is an orchestrator from Canonical that allows you to remotely operate cloud provider solutions. Juju works at a higher level of abstraction than Puppet/Ansible/Chef and manages services instead of machines/VMs. Canonical has made a great effort to provide what they call a suitable "Kubernetes-core bundle" for production. Juju is available as a dedicated tool with its own console/UI and also as a service (JaaS), which is free during the beta period.
Link: https://jujucharms.com/
Cost: Free Community Edition
Commercial Edition - from $200 per year
Conjure-up is another Canonical product which allows you to deploy “The Canonical Distribution of Kubernetes on Ubuntu” with a few simple commands. It supports AWS, GCE, Azure, Joyent, OpenStack, VMware, bare metal, and localhost deployments. Juju, MAAS, and LXD are the underlying technology for Conjure-up.
Link: https://conjure-up.io/
Cost: Free
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service which makes it simple to deploy, manage, and scale containerized applications using Kubernetes. Amazon EKS manages your Kubernetes infrastructure across multiple AWS Availability Zones, while automatically detecting and replacing unhealthy control plane nodes, and providing on-demand upgrades and patching. You simply provision worker nodes and connect them to the provided Amazon EKS endpoint.
Link: https://aws.amazon.com/eks/
Cost: Pay for the resources used
Kubebox is a terminal console for Kubernetes clusters which allows you to manage and monitor your cluster's live status with a nice, old-school interface. Kubebox shows pod resource usage, cluster monitoring, and container logs. Additionally, you can easily navigate to the desired namespace and exec into the desired container for fast troubleshooting/recovery.
Link: https://github.com/astefanutti/kubebox
Cost: Free
Kube-ops-view is a read-only system dashboard for multiple K8s clusters. With Kube-ops-view you can easily navigate between your clusters and monitor the health of nodes and pods. Kube-ops-view animates some Kubernetes processes, such as pod creation and termination. It uses Heapster as its source of data.
Link: https://github.com/hjacobs/kube-ops-view
Cost: Free
Kubetail is a small bash script which allows you to aggregate logs from multiple pods into one stream. The initial Kubetail version doesn't have filtering or highlighting features, but there is a Kubetail fork on GitHub which adds log coloring via multitail.
Link: https://github.com/johanhaleby/kubetail and https://github.com/aks/kubetail
Cost: Free
Kubewatch is a Kubernetes watcher which can publish K8s events to the team communication app, Slack. Kubewatch runs as a pod inside Kubernetes clusters and monitors changes that occur in the system. You can specify the notifications you want to receive by editing the configuration file.
Link: https://github.com/bitnami-labs/kubewatch
Cost: Free
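The configuration file mentioned above is a small YAML document; a sketch of a Slack setup (the token and channel are placeholders, and resource toggles follow the shape in Kubewatch's docs):

```yaml
# .kubewatch.yaml
handler:
  slack:
    token: xoxb-your-slack-bot-token   # placeholder; create via your Slack app
    channel: "#ops-alerts"             # placeholder channel name
resource:
  deployment: true    # notify on deployment changes
  pod: true           # notify on pod changes
  service: false      # stay quiet about services
```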
Weave Scope is a troubleshooting and monitoring tool for Docker and Kubernetes clusters. It can automatically generate applications and infrastructure topologies which can help you to identify application performance bottlenecks easily. You can deploy Weave Scope as a standalone application on your local server/laptop, or you can choose the Weave Scope Software as a Service (SaaS) solution on Weave Cloud. With Weave Scope, you can easily group, filter or search containers using names, labels, and/or resource consumption.
Link: https://www.weave.works/oss/scope/
Cost: Free in standalone mode
Standard mode - $30 per month (free 30-day trial)
Enterprise mode - $150 per node/month
Searchlight by AppsCode is a Kubernetes operator for Icinga. Searchlight periodically runs various checks on Kubernetes clusters and alerts you via email, SMS or chat if something goes wrong. Searchlight includes a default suite of checks written specifically for Kubernetes. Also, it can enhance Prometheus monitoring with external black-box monitoring and serves as a fallback in case internal systems completely fail.
Link: https://github.com/appscode/searchlight
Cost: Free
Heapster enables container cluster monitoring and performance analysis for Kubernetes. Heapster supports Kubernetes natively and can run as a pod on all K8s setups. Heapster’s data then can be pushed to a configurable backend for storage and visualization.
Link: https://github.com/kubernetes/heapster
Cost: Free
Kube-monkey is the Kubernetes version of Netflix's Chaos Monkey and follows the principles of chaos engineering. It deletes K8s pods at random, checks that services are failure-resilient, and helps you harden your system. Kube-monkey is configured with a TOML file, where you can specify which apps may be killed and when to practice your recovery strategies.
Link: https://github.com/asobti/kube-monkey
Cost: Free
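The TOML file mentioned above might look like the following sketch (option names follow kube-monkey's documented config; the values are illustrative):

```toml
[kubemonkey]
dry_run = true                           # log what would be killed without killing it
run_hour = 8                             # when to build the day's kill schedule
start_hour = 10                          # earliest hour a pod may be terminated
end_hour = 16                            # latest hour a pod may be terminated
blacklisted_namespaces = ["kube-system"] # never touch cluster-critical pods
time_zone = "America/New_York"
```

Individual deployments then opt in via kube-monkey labels, so nothing is killed by accident.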
K8s-testsuite is made up of two Helm charts which provide network bandwidth testing and load testing for a single Kubernetes cluster. The load tests emulate simple web servers with loadbots, which run as a Kubernetes microservice based on Vegeta. The network tests use iperf3 and netperf 2.7.0 internally and run three times. Both sets of tests generate comprehensive log messages with all results and metrics.
Link: https://github.com/mrahbar/k8s-testsuite
Cost: Free
Test-infra is a collection of tools for Kubernetes testing and results verification. Test-infra includes a few dashboards for displaying history, aggregating failures, and showing what is currently being tested. You can enhance your test-infra suite by creating your own test jobs. Test-infra can perform end-to-end Kubernetes testing with full Kubernetes lifecycle emulation on different providers using the Kubetest tool.
Link: https://github.com/kubernetes/test-infra
Cost: Free
Sonobuoy allows you to understand your current Kubernetes cluster state by running a set of tests in an accessible and non-destructive manner. Sonobuoy generates informative reports with detailed information about cluster performance. Sonobuoy supports Kubernetes versions 1.8 and on. Sonobuoy Scanner is a browser-based tool which allows you to test Kubernetes clusters in a few clicks, but the CLI version has a bigger set of tests available.
Link: https://github.com/heptio/sonobuoy
Cost: Free
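A typical CLI session is short (assuming the `sonobuoy` binary is installed and `kubectl` points at your cluster):

```shell
sonobuoy run          # launch the diagnostic pods in the cluster
sonobuoy status       # poll until the test plugins report complete
sonobuoy retrieve .   # download the results tarball for inspection
sonobuoy delete       # remove Sonobuoy's resources from the cluster
```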
PowerfulSeal is a tool similar to Kube-monkey and follows the principles of chaos engineering. PowerfulSeal can kill pods and remove/add VMs from or to your clusters. In contrast to Kube-monkey, PowerfulSeal has an interactive mode which allows you to manually break specific cluster components. Also, PowerfulSeal doesn't need any external dependencies apart from SSH.
Link: https://github.com/bloomberg/powerfulseal
Cost: Free
Trireme is a flexible and straightforward implementation of Kubernetes Network Policies. Trireme works in any Kubernetes cluster and allows you to manage traffic between pods, including pods in different clusters. Its main advantages are that it requires no centralized policy management, it makes it easy to secure the interaction of resources deployed in Kubernetes, and it avoids the complexities of SDNs, VLAN tags, and subnets (Trireme uses a conventional L3 network).
Link: https://github.com/aporeto-inc/trireme-kubernetes
Cost: Free
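Trireme enforces the standard Kubernetes NetworkPolicy resource, so the policies themselves are plain K8s objects. A minimal example (the app labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only frontend pods may connect
```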
Aporeto provides security for containers, microservices, cloud and legacy applications based on workload identity, encryption, and distributed policies. As Aporeto policies function independently of the underlying infrastructure, security policies can be enabled across Kubernetes clusters or over hybrid environments that include Kubernetes and non-Kubernetes deployments.
Link: https://www.aporeto.com/
Cost: Contact Aporeto for a demo
Twistlock continually monitors your applications deployed on K8s for vulnerability and compliance issues, including the underlying host as well as containers and images. In addition, Twistlock Runtime Defense automatically models container behavior, allowing known, good behavior while alerting on or blocking anomalous activity. Finally, Twistlock provides both layer 3 microsegmentation as well as a layer 7 firewall that can protect front end microservices from common attacks.
Link: https://www.twistlock.com/
Cost: Contact Caylent directly for pricing or to request a free trial
Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Falco is based on the Sysdig Project, an open-source tool (and now a commercial service), built for monitoring container performance by way of tracking kernel system calls. Falco lets you continuously monitor and detect container, application, host, and network activity with one set of rules.
Link: https://sysdig.com/opensource/falco/
Cost: Free as a standalone tool
Basic Cloud: $20 per month (free trial)
Pro Cloud: $30 per month
Pro Software: Custom price
Sysdig Secure, part of the Sysdig Container Intelligence Platform, comes out-of-the-box with unmatched container visibility and deep integrations with container orchestration tools. These include Kubernetes, Docker, AWS ECS, and Apache Mesos. With Sysdig Secure you can Implement service-aware policies, block attacks, analyze your history, and monitor cluster performance. Sysdig Secure is available as cloud and on-premise software offerings.
Link: https://sysdig.com/product/secure/
Cost: Free as a standalone tool
Pro Cloud: Custom price
Pro Software: Custom price
Kubesec.io is a service which allows you to score Kubernetes resources for security feature usage. Kubesec.io verifies resource configuration according to Kubernetes security best-practices. As a result, you will have total control and additional suggestions for how to improve overall system security. The site also contains plenty of external links related to containers and Kubernetes security.
Link: https://kubesec.io
Cost: Free
Cabin functions as a mobile dashboard for the remote management of Kubernetes clusters. With Cabin, users can quickly manage applications, scale deployments, and troubleshoot the overall K8s cluster from their Android or iOS device. Cabin is a great tool for operators of K8s clusters as it allows you to perform quick remediation actions in case of incidents.
Link: https://github.com/bitnami-labs/cabin
Cost: Free
Kubectx is a small open-source utility which enhances kubectl with the ability to switch contexts easily and work with several Kubernetes clusters at the same time. Its companion Kubens lets you navigate between Kubernetes namespaces. Both tools support auto-completion in bash/zsh/fish shells.
Link: https://github.com/ahmetb/kubectx
Cost: Free
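Day-to-day usage is just a few keystrokes (the context name is illustrative):

```shell
kubectx               # list all contexts from your kubeconfig
kubectx prod          # switch kubectl to the "prod" context
kubectx -             # jump back to the previous context
kubens kube-system    # change the active namespace for the current context
```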
Kube-shell increases your productivity when working with kubectl. It enables command auto-completion and auto-suggestion, and provides in-line documentation for the command being executed. Kube-shell can even search for and correct mistyped commands. It's a great tool for boosting your performance and productivity in the K8s console.
Link: https://github.com/cloudnativelabs/kube-shell
Cost: Free
Kail is short for Kubernetes tail and works with Kubernetes clusters. With Kail, you can tail Docker logs for all matched pods. Kail allows you to filter pods by service, deployment, labels, and other criteria. Pods are added to (or removed from) the stream automatically after launch if they match the criteria.
Link: https://github.com/boz/kail
Cost: Free
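The matching criteria map directly onto flags (service and deployment names are illustrative):

```shell
kail                          # tail logs from every pod in the cluster
kail -l app=nginx             # only pods carrying this label
kail --svc my-service         # pods backing a particular service
kail --deploy my-deployment   # pods belonging to a deployment
```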
Telepresence lets you debug Kubernetes services locally by proxying data from your Kubernetes environment to a local process. Telepresence gives your local code access to Kubernetes services and AWS/GCP resources as if it were deployed to the cluster. With Telepresence, Kubernetes treats your local code as a normal pod within the cluster.
Link: https://www.telepresence.io/
Cost: Free
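As a sketch of the classic workflow (deployment and script names are illustrative):

```shell
# Swap a running deployment for a local shell that inherits its cluster networking:
telepresence --swap-deployment my-deployment --run-shell

# Or run a local process directly, as if it were a pod in the cluster:
telepresence --run python my-service.py
```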
Helm is a package manager for Kubernetes. It is like APT/Yum/Homebrew, but for Kubernetes. Helm operates on charts, which are archives of the Kubernetes resource manifests that make up a distributed application. You can share your application by creating a Helm chart. Helm allows you to create reproducible builds and manage Kubernetes manifests easily.
Link: https://github.com/kubernetes/helm
Cost: Free
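To see what a chart actually is, here is a minimal one laid out by hand (chart name and values are illustrative; normally `helm create` scaffolds this structure for you):

```shell
mkdir -p mychart/templates           # templates/ holds the resource manifests
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
description: A minimal example chart
EOF
cat > mychart/values.yaml <<'EOF'
replicaCount: 1
image: nginx:stable
EOF
ls -R mychart
# With the Helm CLI installed, you would then deploy it with:
#   helm install ./mychart
```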
Keel allows you to automate Kubernetes deployment updates and can be launched as a Kubernetes service in a dedicated namespace. Organized this way, Keel introduces minimal load on your environment and adds significant robustness. Keel helps you deploy Kubernetes services through labels, annotations, and charts. You just need to specify an update policy for each deployment or Helm release, and Keel will automatically update your environment as soon as a new application version is available in the repository.
Link: https://keel.sh/
Cost: Free
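The update policy is declared on the resource itself; a sketch using annotations (annotation keys follow Keel's docs, the deployment name and schedule are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    keel.sh/policy: minor             # update on new minor/patch releases only
    keel.sh/trigger: poll             # poll the registry instead of waiting for webhooks
    keel.sh/pollSchedule: "@every 5m" # how often to check for new tags
```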
Apollo is an open-source application which provides teams with a self-service UI for creating and deploying their services to Kubernetes. Apollo allows operators to view logs and revert deployments to any point in time with just one click. Apollo has flexible permission models for deployments, so each user can deploy only what they need to deploy.
Link: https://github.com/logzio/apollo
Cost: Free
Draft is a tool provided by the Azure team that streamlines application development and deployment into any Kubernetes cluster. Draft creates "inner loops" between code commits and code deployment, which significantly speeds up the change verification process. With Draft, developers can prepare application Dockerfiles and Helm charts, then deploy applications to a remote or local Kubernetes cluster with two commands.
Link: https://github.com/azure/draft
Cost: Free
Deis Workflow is an open-source Platform as a Service (PaaS) that creates additional layers of abstraction on top of Kubernetes clusters. These layers allow teams to deploy and update Kubernetes applications without developers needing specific domain knowledge. Workflow builds upon Kubernetes concepts to provide simple, developer-friendly app deployment. Delivered as a set of Kubernetes microservices, the platform is easy for operators to install. Workflow can deploy new versions of your app with zero downtime.
Link: https://deis.com/workflow/
Cost: Free
Kel is an open-source PaaS from Eldarion, Inc., which helps to manage Kubernetes applications through the entire lifecycle. Kel provides two additional layers written in Python and Go on top of Kubernetes. Level 0 allows you to provision Kubernetes resources, and Level 1 helps you to deploy any application on K8s.
Link: http://www.kelproject.com/
Cost: Free
A full DevOps toolchain for containerized apps in production, Cloud 66 automates much of the heavy-lifting for Devs through specialized Ops tools. The platform currently runs 4,000 customer workloads on Kubernetes and manages 2,500 lines of config. By offering end-to-end infrastructure management, Cloud 66 enables engineers to build, deliver, deploy, and manage any application on any cloud or server.
Link: www.cloud66.com
Cost: Free for 14 days
Kubeless is a Kubernetes-native serverless framework that lets you deploy small bits of code without having to worry about the underlying infrastructure plumbing. Kubeless is aware of Kubernetes resources out-of-the-box and also provides auto-scaling, API routing, monitoring, and troubleshooting. Kubeless fully relies on K8s primitives, so Kubernetes users will also be able to use native K8s API servers and API gateways.
Link: https://github.com/kubeless/kubeless
Cost: Free
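Deploying a function is a one-liner once the Kubeless controller is installed (the file, handler, and runtime names are illustrative; use a runtime available in your install):

```shell
# hello.py defines a function `handler(event, context)` returning a string
kubeless function deploy hello --runtime python2.7 \
  --from-file hello.py --handler hello.handler

kubeless function ls                          # confirm the function is ready
kubeless function call hello --data 'Hi K8s'  # invoke it synchronously
```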
Fission is a fast serverless framework for Kubernetes with a focus on developer productivity and high performance. Fission works on a Kubernetes cluster anywhere: on your laptop, in any public cloud, or in a private data-center. You can write your function using Python, NodeJS, Go, C# or PHP, and deploy it on K8s clusters with Fission.
Link: https://fission.io/
Cost: Free
For a long time, there was only one Function as a Service (FaaS) implementation available for Kubernetes: Funktion. Funktion is an open source event-driven lambda-style programming model designed for Kubernetes. Funktion is tightly coupled with the fabric8 platform. With Funktion, you can create flows to subscribe from over 200 event sources to invoke your function, including most databases, messaging systems, social media, and other middleware and protocols.
Link: https://github.com/funktionio/funktion
Cost: Free
IronFunctions is an open-source serverless (FaaS) platform that you can run anywhere. IronFunctions is written in Go and supports functions in any language. Its main advantage is that it supports the AWS Lambda format: import functions directly from Lambda and run them wherever you want.
Link: https://github.com/iron-io/functions
Cost: Free
Apache OpenWhisk is a robust open-source FaaS platform driven by IBM and Adobe. OpenWhisk can be deployed on-premise or in the cloud. By design, Apache OpenWhisk acts as an asynchronous, loosely-coupled execution environment that runs functions in response to external triggers. OpenWhisk is available as a SaaS solution on Bluemix, or you can deploy a Vagrant-based VM locally.
Link: https://console.bluemix.net/openwhisk/
Cost: Free
The OpenFaaS framework manages serverless functions on Docker Swarm or Kubernetes, where it collects and analyzes a wide range of metrics. You can package any process as a function and use it without repetitive coding or other routine work. OpenFaaS has Prometheus metrics baked in, which means it can automatically scale your functions up and down with demand. OpenFaaS also provides a web-based interface where you can try out your functions.
Link: https://github.com/openfaas/faas
Cost: Free
Nuclio is a serverless project which aims to process events with high performance and handle large amounts of data. Nuclio can be launched on-premise as a standalone library, inside a VM/Docker container, and it supports Kubernetes out of the box. Nuclio provides real-time data processing with maximum parallelism and minimum overhead. You can try out Nuclio on its playground page.
Link: https://github.com/nuclio/nuclio
Cost: Free
Virtual Kubelet is an open source Kubernetes Kubelet implementation that masquerades as a kubelet for the purposes of connecting Kubernetes to other APIs. Virtual Kubelet allows the nodes to be backed by other services like ACI, Hyper.sh, and AWS, etc. This connector features a pluggable architecture and direct use of Kubernetes primitives, making it much easier to build on.
Link: https://github.com/virtual-kubelet/virtual-kubelet
Cost: Free
Fnproject is a container-native serverless project which supports practically any language and can run almost anywhere. Fn is written in Go, so it is lightweight and performance-ready. Fnproject supports the AWS Lambda format, so you can easily import your Lambda functions and launch them with Fnproject.
Link: http://fnproject.io/
Cost: Free
CoreDNS is a set of plugins written in Go which perform DNS functions. CoreDNS with additional Kubernetes plugins can replace the default Kube-DNS service and implement the specification defined for Kubernetes DNS-based service discovery. CoreDNS can also listen for DNS requests coming in over UDP/TCP, TLS, and gRPC.
Link: https://coredns.io/
Cost: Free
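CoreDNS is driven by a Corefile that chains those plugins together. As a sketch, a Kubernetes-aware configuration along the lines of the standard deployment looks like this:

```
.:53 {
    errors                    # log errors to stdout
    health                    # liveness endpoint for the pod
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153          # expose metrics
    forward . /etc/resolv.conf  # send everything else upstream
    cache 30
}
```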
Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It is much easier to troubleshoot and monitor K8s clusters with a native dashboard. You need to create a secure proxy channel between your machine and Kubernetes API server to access the dashboard. The native Kubernetes dashboard relies on the Heapster data collector, so it also needs to be installed in the system.
Link: https://github.com/kubernetes/dashboard#kubernetes-dashboard
Cost: Free
And that’s the complete list! As always, we’d love your feedback and suggestions for future articles. (Don’t forget to check out our 50+ Useful Docker Tools too!)
Originally published by Stefan Thorpe at caylent.com
Do you know Okteto (https://github.com/okteto/okteto)? It helps you develop your apps in Kubernetes.
Last year, we provided a list of Kubernetes tools that proved so popular we have decided to curate another list of some useful additions for working with the platform—among which are many tools that we personally use here at Caylent. Check out the original tools list here in case you missed it.
According to a recent survey by StackRox, the dominance Kubernetes enjoys in the market continues to be reinforced, with 86% of respondents using it for container orchestration.
(State of Kubernetes and Container Security, 2020)
And as the survey data shows, more and more companies are jumping into containerization for their apps. If you're among them, here are some tools to aid you going forward as Kubernetes continues its rapid growth.
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pfg2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004", 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
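The pixel/inch conversions used by these adjustment values can be sketched as follows (Python for illustration; the 600 dpi figure comes from the config):

```python
DPI = 600  # rendering resolution assumed throughout the config

def inches_to_pixels(inches):
    """Convert an adjustment in inches to pixels at the working DPI."""
    return inches * DPI

def pixels_to_inches(pixels):
    """Convert a pixel count back to inches at the working DPI."""
    return pixels / DPI

# e.g. the -0.004" hole shrink above corresponds to about -2.4 pixels
```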
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
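As a sketch of how that margin affects pad sizes on mask layers (assuming the clearance is applied on every side of the pad; Python for illustration):

```python
SOLDER_MARGIN = 0.012  # inches of clearance around each pad

def mask_opening(pad_diameter):
    # Assuming the clearance applies on each side, the solder mask
    # opening grows by twice the margin.
    return pad_diameter + 2 * SOLDER_MARGIN

# under this assumption, a .056" pad gets a .080" mask opening
```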
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
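The effect of the cap styles on drawn length can be sketched like this (Python for illustration; the overhang geometry follows the comments above):

```python
CAP_NONE, CAP_ROUND, CAP_SQUARE = 0, 1, 2

def drawn_length(length, width, cap):
    # Round and square caps each overhang by width/2 at both ends,
    # so the drawn extent grows by one full line width.
    return length if cap == CAP_NONE else length + width
```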
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, radius, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
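The resulting panel dimensions can be sketched as follows (Python for illustration; assuming xpad/ypad are margins added on each outer edge, per the comment above):

```python
def panel_size(board_w, board_h, x=1, y=1, xpad=0.0, ypad=0.0):
    """Overall panel dimensions (inches) after repeating the board
    x*y times, with xpad/ypad margins on the outer edges."""
    return (board_w * x + 2 * xpad, board_h * y + 2 * ypad)

# e.g. a 2" x 1" board panelized 2 x 3 with 0.1" margins
w, h = panel_size(2.0, 1.0, x=2, y=3, xpad=0.1, ypad=0.1)
```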
# Set this to TRUE if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
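The 600 dpi figure follows directly from the point size and the scale factor (a sketch in Python for illustration):

```python
INCHES_PER_POINT = 1 / 72  # a PDF point is 1/72 inch
SCALE_FACTOR = 0.12        # PDF scale factor used by the config

# One scaled point covers INCHES_PER_POINT * SCALE_FACTOR inches,
# so the effective resolution is the reciprocal: 72 / 0.12 = 600 dpi.
dpi = 1 / (INCHES_PER_POINT * SCALE_FACTOR)
```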
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
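The number of fill strokes a region needs follows from its extent and the fill width (a sketch in Python for illustration):

```python
import math

FILL_WIDTH = 0.01  # inches covered per stroke

def fill_passes(region_height):
    # Number of parallel strokes needed to cover a filled region.
    return math.ceil(region_height / FILL_WIDTH)

# e.g. a 0.25" tall ground plane takes 25 passes at this resolution
```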
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthy to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
1601305200
Recently, Microsoft announced the general availability of Bridge to Kubernetes, formerly known as Local Process with Kubernetes. It is an iterative development tool offered in Visual Studio and VS Code that allows developers to write, test, and debug microservice code on their development workstations while consuming dependencies and inheriting the existing configuration from a Kubernetes environment.
Nick Greenfield, Program Manager, Bridge to Kubernetes stated in an official blog post, “Bridge to Kubernetes is expanding support to any Kubernetes. Whether you’re connecting to your development cluster running in the cloud, or to your local Kubernetes cluster, Bridge to Kubernetes is available for your end-to-end debugging scenarios.”
Bridge to Kubernetes provides a number of compelling features.
#news #bridge to kubernetes #developer tools #kubernetes #kubernetes platform #kubernetes tools #local process with kubernetes #microsoft
1600992000
Over the last few years, Kubernetes has become the de facto standard for container orchestration and has also won the race against Docker as one of the most loved platforms among developers. Released in 2014, Kubernetes has come a long way and is now used across the entire cloud landscape. In fact, recent reports state that of 109 tools surveyed for managing containers, 89% leverage some version of Kubernetes.
Although inspired by Borg, Kubernetes is an open-source project by Google, which donated it to a vendor-neutral body, the Cloud Native Computing Foundation (CNCF). This could be attributed to Google's vision of creating a platform that can be used by every company in the world, including the large tech companies, and that can host multiple cloud platforms and data centres. The entire reason for handing control over to the CNCF was to develop the platform in the best interests of its users, without vendor lock-in.
#opinions #google open source #google open source tools #google opening kubernetes #kubernetes #kubernetes platform #kubernetes tools #open source kubernetes backfired
1601051854
Kubernetes is a highly popular container orchestration platform. Multi cloud is a strategy that leverages cloud resources from multiple vendors. Multi cloud strategies have become popular because they help prevent vendor lock-in and enable you to leverage a wide variety of cloud resources. However, multi cloud ecosystems are notoriously difficult to configure and maintain.
This article explains how you can leverage Kubernetes to reduce multi cloud complexities and improve stability, scalability, and velocity.
Maintaining standardized application deployments becomes more challenging as your number of applications and the technologies they are based on increase. As environments, operating systems, and dependencies differ, management and operations require more effort and extensive documentation.
In the past, teams tried to get around these difficulties by creating isolated projects in the data center. Each project, including its configurations and requirements, was managed independently. This required accurately predicting performance and the number of users before deployment, and taking down applications to update operating systems or applications. There were many chances for error.
Kubernetes provides an alternative to this old method, enabling teams to deploy containerized applications independently of the environment. This eliminates the need to create resource partitions and enables teams to operate infrastructure as a unified whole.
In particular, Kubernetes makes it easier to deploy a multi cloud strategy since it enables you to abstract away service differences. With Kubernetes deployments you can work from a consistent platform and optimize services and applications according to your business needs.
The Compelling Attributes of Multi Cloud Kubernetes
Multi cloud Kubernetes can provide multiple benefits beyond a single cloud deployment. Below are some of the most notable advantages.
Stability
In addition to the built-in scalability, fault tolerance, and auto-healing features of Kubernetes, multi cloud deployments can provide service redundancy. For example, you can mirror applications or split microservices across vendors. This reduces the risk of a vendor-related outage and enables you to create failovers.
#kubernetes #multicloud-strategy #kubernetes-cluster #kubernetes-top-story #kubernetes-cluster-install #kubernetes-explained #kubernetes-infrastructure #cloud