The Top Kubernetes Security Best Practices

There’s no doubt that Kubernetes adoption has increased a lot since its first release. But, as Ian Coldwater said in their talk about abusing the Kubernetes defaults: Kubernetes is insecure by design and the cloud only makes it worse. Not everyone has the same security needs, and some developers and engineers might want more granular control over specific configurations. Kubernetes lets you enforce security practices as you need them, and the available security options have improved a lot over the years.

This post covers a few suggestions on what you can do to make your Kubernetes workloads more secure. I won’t go deep into many of the topics—some of them are pretty straightforward, while others need more theory. But I’ll make sure you at least get the idea of why and when you need to implement these practices. And I’ll provide links for further reading in case any of these are the right fit for you.

Enough words. Let’s get into the details.

1. Disable public access

Avoid exposing any Kubernetes node to the internet. Aim to work only with private nodes if you can. If you decide to run Kubernetes in the cloud and use its managed service offering, disable public access to the API’s control plane. Don’t think about it, just disable it. An attacker with access to the API can extract sensitive information from the cluster. You can either use a bastion host, configure a VPN tunnel, or use a direct connection to access the nodes and other infrastructure resources. And in the cloud, look into blocking pods’ access to the instance metadata service with network policies—more on this later.

If you need to expose a service to the internet, use a load balancer or an API gateway and enable only the ports you need. Always apply the principle of least privilege, and close everything by default.

2. Implement role-based access control

Stop using the “default” namespace and plan according to your workload permission needs. Make sure that role-based access control (RBAC) is enabled in the cluster. RBAC is simply an authorization method on top of the Kubernetes API. When you enable RBAC, everything is denied by default. But you’ll be able to define more granular permissions to users that will have access to the API. You’d first start by creating roles and assigning users to those roles. A role will contain only allowed permissions, like the ability to list pods, and its scope applies to a single namespace. You can also create cluster roles where the permissions apply to all namespaces.

I suggest you read the official docs for RBAC in Kubernetes to learn more about its capabilities and how to implement it in your cluster.
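
To give a feel for the shape of these objects, here’s a minimal sketch of a Role and a RoleBinding that only allow reading pods in a single namespace (the namespace and user names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                        # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io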

3. Encrypt secrets at rest

Kubernetes architecture uses etcd as the database to store Kubernetes objects. All the information about the cluster, the workloads, and the cloud’s metadata is persisted in etcd. If an attacker has control of etcd, they can do whatever they want—such as revealing secrets for database passwords or accessing sensitive information. Since Kubernetes 1.13, you can enable encryption at rest. Backups will be encrypted, and attackers won’t be able to decrypt the secrets without the master key. A recommended practice is to use a key management service (KMS) provider like HashiCorp’s Vault or AWS KMS.
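
As a rough sketch, encryption at rest is enabled by pointing the API server’s --encryption-provider-config flag at a file like the following (the key is a placeholder, and in production a KMS provider block would typically replace the local aescbc key):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>   # placeholder, never commit a real key
  - identity: {}                               # allows reading existing unencrypted data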

4. Configure admission controllers

After a request to the Kubernetes API has been authorized, you can use an admission controller as an extra layer of validation. An admission controller may mutate the request object or deny the request outright. As Kubernetes usage grows in your company, you’ll need to enforce specific security policies in the cluster automatically. For example, you can enforce that containers always run as unprivileged users, that images are pulled only from authorized repositories, or that only images you’ve analyzed beforehand are deployed. You can find other policies on the official Kubernetes docs site.
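
Admission controllers are enabled through an API server flag. As a sketch, assuming you control the kube-apiserver command line, turning on a couple of common security-related plugins looks like this:

kube-apiserver --enable-admission-plugins=NodeRestriction,AlwaysPullImages ...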

5. Implement networking policies

Similar to admission controllers, you can also configure access policies at the networking layer for pods. Network policies are like firewall rules for pods. You can limit access to pods through label selectors, similar to how you might configure a service by defining label selectors for which pods to include in the service. When you set a network policy, you configure the labels and values a pod needs to have to communicate with a service. Another notable scenario is the one I mentioned before about an attacker accessing instance metadata in the cloud. You can define a network policy that denies egress traffic from pods, limiting access to the instance metadata API.
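
As an illustration, here’s a minimal sketch of a policy that allows egress anywhere except the instance metadata address (169.254.169.254 is the conventional metadata endpoint on the major clouds):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # block the instance metadata API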

6. Configure security contexts for containers

Even if you’ve implemented all of the previous practices, an attacker can still do some damage through a container. Because of the nature of the Kubernetes and Docker architectures, someone could potentially gain access to the underlying infrastructure. For that reason, make sure you run containers with the privileged flag turned off.
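
Here’s a minimal sketch of a pod spec that turns the privileged flag off and applies a few other hardening options (the image name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      privileged: false                  # no access to host devices or kernel capabilities
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true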

There are other tools and technologies you can use to add another layer of protection to the cluster, like AppArmor, Seccomp, or gVisor. These technologies help by sandboxing containers so they run securely alongside other tenants in the system. Although these are still emerging practices, they’re worth keeping in mind.

7. Segregate sensitive workloads

Another option is to use Kubernetes features like namespaces, taints, and tolerations to segregate sensitive workloads. You can apply more restrictive policies and practices to those workloads where you can’t afford the luxury of a data breach or service downtime. For instance, you can tag a cluster of worker nodes (node pool) and restrict who can schedule pods to those nodes with RBAC roles.
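
As a sketch, you could taint the nodes of a dedicated pool and let only pods that carry the matching toleration schedule there (the node and key names are hypothetical):

kubectl taint nodes sensitive-node-1 workload=sensitive:NoSchedule

# in the pod spec of the sensitive workload:
tolerations:
- key: "workload"
  operator: "Equal"
  value: "sensitive"
  effect: "NoSchedule"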

8. Scan container images

Avoid using container images from public repositories like Docker Hub, or at least only use them if they’re from official vendors like Ubuntu or Microsoft. A better approach is to write the Dockerfile yourself, build the image, and publish it to your own private image repository, where you have more control. And even when you build your own container images, make sure you include tools like Clair or MicroScanner to scan them for potential vulnerabilities.

9. Enable audit logging

At some point, your systems may get compromised. And when that happens (or if it happens), you’d better have logs to find out what the problem is and how the attacker was able to bypass all your security layers. In Kubernetes, you can create audit policies to decide at which level and for which resources you’d like to log each time the Kubernetes API is called. Once you have logs enabled, you can work on having a centralized place to persist them. Depending on the tool you use to persist logs, you can configure alerts, send notifications, or use webhooks to automate a response. For instance, you might set an immediate action like terminating existing pods in the cluster that could have been affected.
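
A minimal sketch of an audit policy that records metadata for every access to secrets and ignores everything else (passed to the API server via --audit-policy-file):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata          # log who touched secrets, but not the secret payloads
  resources:
  - group: ""
    resources: ["secrets"]
- level: None              # ignore all other requests in this sketch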

If you’re running in the cloud, you can enable audit logging in the control plane. This is true at least for the three major cloud providers: AWS, Azure, and GCP.

10. Keep your Kubernetes version up to date

Last but not least, make sure you’re always running the latest version of Kubernetes. You can see the list of vulnerabilities that Kubernetes has had on the CVE site. For each vulnerability, the CVE site provides a score that tells you how severe it is. Always plan to upgrade your Kubernetes version to the latest available. If you’re using a managed offering from a cloud vendor, some of them handle the upgrade for you. If not, Google published a post with a few recommendations on how to upgrade a cluster with no downtime. It doesn’t matter if you’re not running on Google—the advice applies regardless of where you’re running Kubernetes.

What’s next?

That covers a good range of Kubernetes security best practices that everyone should consider. As you’ve noticed, I didn’t discuss many of the topics in much detail, and by the time you’re reading this post, Kubernetes might have shipped another feature to increase security. For example, admission controllers are a feature that went live only a few months ago. If you’d like to dive deeper into the current state of Kubernetes security options, I’d suggest you read the official Kubernetes documentation for more in-depth recommendations on securing a cluster. And in case you’re a podcast fan like me, there are two good episodes of Google’s Kubernetes Podcast where they talk about Kubernetes security and how to attack and defend Kubernetes.

Thank you for reading!

Best Practices for Kubernetes Runners

Are you sure you know what is happening in your GitLab Runners? GitLab CI is extremely powerful and flexible; however, with that flexibility users can easily make mistakes that could take out a GitLab Runner and potentially clog up Sidekiq, bringing your entire GitLab instance to its knees. In this talk, Senior Software Engineer Sean Smith discusses some situations that F5 Networks ran into in their installation, such as CI jobs growing exponentially, as well as how monitoring was implemented after incidents occurred and how to limit the impact of incidents when they do occur.

25+ Node.js Security Best Practices

Web attacks are exploding these days as security moves to the front of the stage. We’ve compiled over 25 Node.js security best practices (+40 other generic security practices) from all the top-ranked articles around the globe.

Note: Many items have a ‘Read more’ link to an elaboration on the topic with code examples and other useful information.

1. Embrace linter security rules

TL;DR: Make use of security-related linter plugins such as eslint-plugin-security to catch security vulnerabilities and issues as early as possible — while they’re being coded. This can help catch security weaknesses like using eval, invoking a child process, or importing a module with a non-string literal (e.g. user input). Click ‘Read more’ below to see code examples that will get caught by a security linter

Otherwise: What could have been a straightforward security weakness during development becomes a major issue in production. Also, the project may not follow consistent code security practices, leading to vulnerabilities being introduced, or sensitive secrets committed into remote repositories

Read More: Linter rules

Linting doesn’t have to be just a tool to enforce pedantic rules about whitespace, semicolons or eval statements. ESLint provides a powerful framework for eliminating a wide variety of potentially dangerous patterns in your code (regular expressions, input validation, and so on). I think it provides a powerful new tool that’s worthy of consideration by security-conscious JavaScript developers. (Adam Baldwin)

More quotes and code examples here
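
As a quick illustration, here’s a minimal sketch of an ESLint configuration that enables the plugin’s recommended security rules (assuming eslint and eslint-plugin-security are installed as dev dependencies):

// .eslintrc.js
module.exports = {
  plugins: ['security'],
  extends: ['plugin:security/recommended'], // flags eval, child_process, dynamic require, etc.
};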

2. Limit concurrent requests using a middleware

TL;DR: DOS attacks are very popular and relatively easy to conduct. Implement rate limiting using an external service such as cloud load balancers, cloud firewalls, nginx, rate-limiter-flexible package or (for smaller and less critical apps) a rate limiting middleware (e.g. express-rate-limit)

Otherwise: An application could be subject to an attack resulting in a denial of service where real users receive a degraded or unavailable service.

Read More: Implement rate limiting
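
As a sketch, wiring express-rate-limit into an Express app could look like this (the window and limit values are arbitrary examples):

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// allow at most 100 requests per IP in a 15-minute window
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));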

3. Extract secrets from config files or use packages to encrypt them

TL;DR: Never store plain-text secrets in configuration files or source code. Instead, make use of secret-management systems like Vault products, Kubernetes/Docker Secrets, or environment variables. As a last resort, secrets stored in source control must be encrypted and managed (rolling keys, expiring, auditing, etc.). Make use of pre-commit/push hooks to prevent committing secrets accidentally

Otherwise: Source control, even for private repositories, can mistakenly be made public, at which point all secrets are exposed. Access to source control for an external party will inadvertently provide access to related systems (databases, apis, services, etc).

Read More: Secret management
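
A minimal sketch of the environment-variable approach (the variable name is hypothetical, and the value is injected by your deployment environment rather than committed):

// fail fast if the secret was not provided by the environment
const dbPassword = process.env.DB_PASSWORD;
if (!dbPassword) {
  throw new Error('DB_PASSWORD environment variable is not set');
}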

4. Prevent query injection vulnerabilities with ORM/ODM libraries

TL;DR: To prevent SQL/NoSQL injection and other malicious attacks, always make use of an ORM/ODM or a database library that escapes data or supports named or indexed parameterized queries, and takes care of validating user input for expected types. Never just use JavaScript template strings or string concatenation to inject values into queries, as this opens your application to a wide spectrum of vulnerabilities. All the reputable Node.js data access libraries (e.g. Sequelize, Knex, mongoose) have built-in protection against injection attacks

Otherwise: Unvalidated or unsanitized user input could lead to operator injection when working with MongoDB for NoSQL, and not using a proper sanitization system or ORM will easily allow SQL injection attacks, creating a giant vulnerability.

Read More: Query injection prevention using ORM/ODM libraries
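
As a sketch with Knex (the table and column names are hypothetical), the value is bound as a parameter, so the driver escapes it instead of splicing it into the SQL string:

const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

async function findUserByEmail(email) {
  // `email` is passed as a binding, never concatenated into the query
  return knex('users').where({ email }).first();
}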

5. Avoid DOS attacks by explicitly setting when a process should crash

TL;DR: The Node process will crash when errors are not handled. Many best practices even recommend exiting even though an error was caught and handled. Express, for example, will crash on any asynchronous error — unless you wrap routes with a catch clause. This opens a very sweet attack spot for attackers who recognize what input makes the process crash and repeatedly send the same request. There’s no instant remedy for this, but a few techniques can mitigate the pain: alert with critical severity anytime a process crashes due to an unhandled error, validate the input and avoid crashing the process due to invalid user input, and wrap all routes with a catch and consider not crashing when an error originated within a request (as opposed to what happens globally)

Otherwise: This is just an educated guess: given many Node.js applications, if we try passing an empty JSON body to all POST requests — a handful of applications will crash. At that point, we can just repeat sending the same request to take down the applications with ease
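
One mitigation sketch: a small wrapper that routes rejected promises to Express’s error handler instead of letting them crash the process (the route and payload shape are hypothetical):

const express = require('express');
const app = express();
app.use(express.json());

// forward async errors to Express instead of crashing the process
const wrap = (fn) => (req, res, next) => fn(req, res, next).catch(next);

app.post('/orders', wrap(async (req, res) => {
  if (!req.body || typeof req.body.item !== 'string') {
    return res.status(400).json({ error: 'invalid payload' }); // bad input is handled, not thrown
  }
  res.json({ ok: true });
}));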

6. Adjust the HTTP response headers for enhanced security

TL;DR: Your application should be using secure headers to prevent attackers from using common attacks like cross-site scripting (XSS), clickjacking and other malicious attacks. These can be configured easily using modules like helmet.

Otherwise: Attackers could perform direct attacks on your application’s users, leading to huge security vulnerabilities

Read More: Using secure headers in your application
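
A minimal sketch with helmet, which sets a sensible batch of security headers in one line:

const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet()); // sets X-Frame-Options, X-Content-Type-Options, and friends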

7. Constantly and automatically inspect for vulnerable dependencies

TL;DR: With the npm ecosystem it is common to have many dependencies for a project. Dependencies should always be kept in check as new vulnerabilities are found. Use tools like npm audit, nsp or snyk to track, monitor and patch vulnerable dependencies. Integrate these tools with your CI setup so you catch a vulnerable dependency before it makes it to production.

Otherwise: An attacker could detect your web framework and attack all its known vulnerabilities.

Read More: Dependency security
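
For example, with npm 6 or later the built-in audit can run locally or in CI; the second form fails the build when findings reach the given severity:

$ npm audit
$ npm audit --audit-level=high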

8. Avoid using the Node.js crypto library for handling passwords, use Bcrypt

TL;DR: Passwords or secrets (API keys) should be stored using a secure hash + salt function like bcrypt, which should be the preferred choice over its pure JavaScript implementation for performance and security reasons.

Otherwise: Passwords or secrets that are persisted without using a secure function are vulnerable to brute forcing and dictionary attacks that will lead to their disclosure eventually.

Read More: Use Bcrypt
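
A minimal sketch with the bcrypt package (a cost factor of 10 is a common starting point; raise it as your hardware allows):

const bcrypt = require('bcrypt');

async function hashPassword(plain) {
  return bcrypt.hash(plain, 10); // the salt is generated and embedded automatically
}

async function checkPassword(plain, storedHash) {
  return bcrypt.compare(plain, storedHash);
}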

9. Escape HTML, JS and CSS output

TL;DR: Untrusted data that is sent down to the browser might get executed instead of just being displayed; this is commonly referred to as a cross-site scripting (XSS) attack. Mitigate this by using dedicated libraries that explicitly mark the data as pure content that should never get executed (i.e. encoding, escaping)

Otherwise: An attacker might store a malicious JavaScript code in your DB which will then be sent as-is to the poor clients

Read More: Escape output
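
A sketch using the escape-html package inside a hypothetical Express handler:

const express = require('express');
const escapeHtml = require('escape-html');

const app = express();

app.get('/comments', (req, res) => {
  const comment = '<script>alert(1)</script>'; // stand-in for untrusted user content
  res.send('<p>' + escapeHtml(comment) + '</p>'); // rendered as text, never executed
});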

10. Validate incoming JSON schemas

TL;DR: Validate the incoming requests’ body payload and ensure it qualifies the expectations, fail fast if it doesn’t. To avoid tedious validation coding within each route you may use lightweight JSON-based validation schemas such as jsonschema or joi

Otherwise: Your generosity and permissive approach greatly increase the attack surface and encourage the attacker to try out many inputs until they find some combination that crashes the application

Read More: Validate incoming JSON schemas
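
A sketch with joi (the schema fields are hypothetical, and this assumes a recent joi version where validate lives on the schema):

const express = require('express');
const Joi = require('joi');

const app = express();
app.use(express.json());

const schema = Joi.object({
  email: Joi.string().email().required(),
  age: Joi.number().integer().min(0),
});

app.post('/signup', (req, res) => {
  const { error, value } = schema.validate(req.body);
  if (error) return res.status(400).json({ error: error.message }); // fail fast
  res.json({ ok: true, user: value });
});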

11. Support blacklisting JWT tokens

TL;DR: When using JWT tokens (for example, with Passport.js), by default there’s no mechanism to revoke access from issued tokens. Once you discover some malicious user activity, there’s no way to stop them from accessing the system as long as they hold a valid token. Mitigate this by implementing a blacklist of untrusted tokens that are validated on each request.

Otherwise: Expired, or misplaced tokens could be used maliciously by a third party to access an application and impersonate the owner of the token.

Read More: Blacklisting JWTs
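
A minimal in-memory sketch of the idea; a production system would back this with Redis or a database so the blacklist survives restarts and is shared across instances:

// tokens are identified here by their jti (JWT ID) claim
const revokedTokens = new Set();

function revokeToken(jti) {
  revokedTokens.add(jti);
}

function isTokenRevoked(decodedToken) {
  return revokedTokens.has(decodedToken.jti);
}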

12. Prevent brute-force attacks against authorization

TL;DR: A simple and powerful technique is to limit authorization attempts using two metrics:

  1. The first is number of consecutive failed attempts by the same user unique ID/name and IP address.
  2. The second is number of failed attempts from an IP address over some long period of time. For example, block an IP address if it makes 100 failed attempts in one day.

Otherwise: An attacker can issue unlimited automated password attempts to gain access to privileged accounts on an application

Read More: Login rate limiting
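
A sketch of the first metric using rate-limiter-flexible, keyed on the username/IP pair (the limits are arbitrary examples):

const { RateLimiterMemory } = require('rate-limiter-flexible');

// 5 failed attempts per username+IP pair, per 15 minutes
const loginLimiter = new RateLimiterMemory({ points: 5, duration: 15 * 60 });

async function registerFailedLogin(username, ip) {
  try {
    await loginLimiter.consume(`${username}_${ip}`);
    return { blocked: false };
  } catch (rejected) {
    return { blocked: true }; // too many failures, reject further attempts for now
  }
}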

13. Run Node.js as non-root user

TL;DR: There is a common scenario where Node.js runs as the root user with unlimited permissions. For example, this is the default behaviour in Docker containers. It’s recommended to create a non-root user and either bake it into the Docker image (see the sketch below) or run the process on this user’s behalf by invoking the container with the flag “-u username”

Otherwise: An attacker who manages to run a script on the server gets unlimited power over the local machine (e.g. change iptables and re-route traffic to their server)

Read More: Run Node.js as non-root user
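
A sketch of baking a non-root user into a Node.js image (Alpine user-management syntax; the file names are hypothetical):

# create an unprivileged user and run the app as that user
FROM node:12-alpine
RUN addgroup -S app && adduser -S -G app app
WORKDIR /home/app
COPY --chown=app:app . .
USER app
CMD ["node", "server.js"]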

14. Limit payload size using a reverse-proxy or a middleware

TL;DR: The bigger the body payload is, the harder your single thread works in processing it. This is an opportunity for attackers to bring servers to their knees without a tremendous amount of requests (DOS/DDOS attacks). Mitigate this by limiting the body size of incoming requests on the edge (e.g. firewall, ELB) or by configuring Express’s body parser to accept only small-size payloads

Otherwise: Your application will have to deal with large requests, unable to process the other important work it has to accomplish, leading to performance implications and vulnerability towards DOS attacks

Read More: Limit payload size
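
A sketch using Express’s built-in JSON parser (available since Express 4.16; the 10kb cap is an arbitrary example, and oversized requests are rejected with a 413):

const express = require('express');
const app = express();

app.use(express.json({ limit: '10kb' })); // reject oversized bodies early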

15. Avoid JavaScript eval statements

TL;DR: eval is evil, as it allows executing custom JavaScript code at run time. This is not just a performance concern but also an important security concern, since malicious JavaScript code may be sourced from user input. Another language feature that should be avoided is the new Function constructor. setTimeout and setInterval should never be passed dynamic JavaScript code either.

Otherwise: Malicious JavaScript code finds a way into a text passed into eval or other real-time evaluating JavaScript language functions, and will gain complete access to JavaScript permissions on the page. This vulnerability is often manifested as an XSS attack.

Read More: Avoid JavaScript eval statements

16. Prevent evil RegEx from overloading your single thread execution

TL;DR: Regular Expressions, while being handy, pose a real threat to JavaScript applications at large, and the Node.js platform in particular. A user input for text to match might require an outstanding amount of CPU cycles to process. RegEx processing might be inefficient to an extent that a single request that validates 10 words can block the entire event loop for 6 seconds and set the CPU on 🔥. For that reason, prefer third-party validation packages like validator.js instead of writing your own Regex patterns, or make use of safe-regex to detect vulnerable regex patterns

Otherwise: Poorly written regexes could be susceptible to Regular Expression DoS attacks that will block the event loop completely. For example, the popular moment package was found vulnerable with malicious RegEx usage in November of 2017

Read More: Prevent malicious RegEx
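
A sketch with safe-regex, which flags patterns prone to catastrophic backtracking:

const safe = require('safe-regex');

const pattern = '(a+)+$'; // a classic catastrophic-backtracking pattern
if (!safe(pattern)) {
  throw new Error('rejecting a regex vulnerable to ReDoS');
}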

17. Avoid module loading using a variable

TL;DR: Avoid requiring/importing another file with a path that was given as parameter due to the concern that it could have originated from user input. This rule can be extended for accessing files in general (i.e. fs.readFile()) or other sensitive resource access with dynamic variables originating from user input. Eslint-plugin-security linter can catch such patterns and warn early enough

Otherwise: Malicious user input could find its way to a parameter that is used to require tampered files, for example a previously uploaded file on the filesystem, or access already existing system files.

Read More: Safe module loading
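
One mitigation sketch: replace the dynamic require with an explicit whitelist (the module names are hypothetical):

// Anti-pattern: require(req.query.module), where the path is attacker-controlled.
// Safer: map user input onto a fixed set of modules.
const exporters = {
  csv: require('./exporters/csv'),
  json: require('./exporters/json'),
};

function getExporter(format) {
  const exporter = exporters[format];
  if (!exporter) throw new Error('unsupported format');
  return exporter;
}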

18. Run unsafe code in a sandbox

TL;DR: When tasked to run external code that is given at run-time (e.g. a plugin), use any sort of ‘sandbox’ execution environment that isolates and guards the main code against the plugin. This can be achieved using a dedicated process (e.g. cluster.fork()), a serverless environment, or dedicated npm packages that act as a sandbox

Otherwise: A plugin can attack through an endless variety of options like infinite loops, memory overloading, and access to sensitive process environment variables

Read More: Run unsafe code in a sandbox
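
A sketch of the dedicated-process approach: run the plugin in a forked child and kill it if it overstays its welcome (the runner script name is hypothetical):

const { fork } = require('child_process');

const child = fork('./plugin-runner.js'); // hypothetical script that loads and runs the plugin
const timer = setTimeout(() => child.kill('SIGKILL'), 5000); // hard stop for infinite loops

child.on('exit', (code) => {
  clearTimeout(timer);
  console.log('plugin exited with code', code);
});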

19. Take extra care when working with child processes

TL;DR: Avoid using child processes when possible and validate and sanitize input to mitigate shell injection attacks if you still have to. Prefer using child_process.execFile which by definition will only execute a single command with a set of attributes and will not allow shell parameter expansion.

Otherwise: Naive use of child processes could result in remote command execution or shell injection attacks due to malicious user input passed to an unsanitized system command.

Read More: Be cautious when working with child processes
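
A sketch of the execFile approach (the directory variable stands in for validated user input):

const { execFile } = require('child_process');

const userSuppliedDir = '/tmp'; // stand-in for validated user input

// arguments are passed directly to the binary: no shell, no parameter expansion
execFile('ls', ['-l', userSuppliedDir], (err, stdout) => {
  if (err) return console.error(err);
  console.log(stdout);
});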

20. Hide error details from clients

TL;DR: Express’s integrated error handler hides error details by default. However, chances are good that you’ll implement your own error handling logic with custom Error objects (considered by many a best practice). If you do so, ensure not to return the entire Error object to the client, as it might contain sensitive application details

Otherwise: Sensitive application details such as server file paths, third party modules in use, and other internal workflows of the application which could be exploited by an attacker, could be leaked from information found in a stack trace

Read More: Hide error details from clients
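
A sketch of a catch-all Express error handler that keeps the details server-side and returns only a generic message:

const express = require('express');
const app = express();

// registered last, after all routes, so it catches their errors
app.use((err, req, res, next) => {
  console.error(err); // the full stack trace stays in the server logs
  res.status(500).json({ error: 'Internal server error' }); // nothing sensitive leaks out
});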

21. Configure 2FA for npm or Yarn

TL;DR: Every step in the development chain should be protected with MFA (multi-factor authentication); npm/Yarn are a sweet opportunity for attackers who can get their hands on a developer’s password. Using developer credentials, attackers can inject malicious code into libraries that are widely installed across projects and services, and maybe even across the web if published publicly. Enabling 2-factor authentication in npm leaves almost zero chance for attackers to alter your package code.

Otherwise: Have you heard about the eslint developer whose password was hijacked?

22. Modify session middleware settings

TL;DR: Each web framework and technology has its known weaknesses — telling an attacker which web framework we use is a great help for them. Using the default settings for session middleware can expose your app to module- and framework-specific hijacking attacks, in a similar way to the X-Powered-By header. Try hiding anything that identifies and reveals your tech stack (e.g. Node.js, Express)

Otherwise: Cookies could be sent over insecure connections, and an attacker might use session identification to identify the underlying framework of the web application, as well as module-specific vulnerabilities

Read More: Cookie and session security
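
A sketch with express-session that renames the default cookie and hardens its flags (the secret is read from the environment):

const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  secret: process.env.SESSION_SECRET,
  name: 'sid', // hide the default connect.sid cookie name
  resave: false,
  saveUninitialized: false,
  cookie: { httpOnly: true, secure: true, sameSite: 'strict' },
}));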

23. Prevent unsafe redirects

TL;DR: Redirects that do not validate user input can enable attackers to launch phishing scams, steal user credentials, and perform other malicious actions.

Otherwise: If an attacker discovers that you are not validating external, user-supplied input, they may exploit this vulnerability by posting specially-crafted links on forums, social media, and other public places to get users to click it.

Read more: Prevent unsafe redirects
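
A minimal sketch that only redirects to known, whitelisted targets (the paths are hypothetical):

const express = require('express');
const app = express();

const allowedTargets = new Set(['/home', '/dashboard']);

app.get('/redirect', (req, res) => {
  const target = req.query.to;
  // anything not on the whitelist falls back to a safe default
  res.redirect(allowedTargets.has(target) ? target : '/home');
});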

24. Avoid publishing secrets to the npm registry

TL;DR: Precautions should be taken to avoid the risk of accidentally publishing secrets to public npm registries. An .npmignore file can be used to blacklist specific files or folders, or the files array in package.json can act as a whitelist.

Otherwise: Your project’s API keys, passwords or other secrets are open to be abused by anyone who comes across them, which may result in financial loss, impersonation, and other risks.

Read More: Avoid publishing secrets

25. A list of 40 generic security advice (not specifically Node.js-related)

The following bullets are well-known and important security measures which should be applied in every application. As they are not necessarily related to Node.js and implemented similarly regardless of the application framework — we include them here as an appendix. The items are grouped by their OWASP classification. A sample includes the following points:

  • Require MFA/2FA for root account
  • Rotate passwords and access keys frequently, including SSH keys
  • Apply strong password policies, both for ops and in-application user management, [see OWASP password recommendation](https://www.owasp.org/index.php/Authentication_Cheat_Sheet#Implement_Proper_Password_Strength_Controls)
  • Do not ship or deploy with any default credentials, particularly for admin users
  • Use only standard authentication methods like OAuth, OpenID, etc. — avoid basic authentication
  • Auth rate limiting: Disallow more than X login attempts (including password recovery, etc.) in a period of Y minutes
  • On login failure, don’t let the user know whether the username or password verification failed, just return a common auth error
  • Consider using a centralized user management system to avoid managing multiple accounts per employee (e.g. GitHub, AWS, Jenkins, etc.) and to benefit from a battle-tested user management system

The complete list of 40 generic security advice can be found in the official Node.js best practices repository!

Read More: 40 Generic security advice

Top 10 npm Security Best Practices

Concerned about npm vulnerabilities? It is important for both frontend and backend developers to take npm security into account. Open source security auditing is a crucial part of shifting security to the left, and npm package security should be a top concern, as we see that even the official npm command line tool has been found to be vulnerable.

In this cheat sheet edition, we’re going to focus on npm security and productivity tips for both open source maintainers and developers. So let’s get started with our list of 10 npm security best practices, starting with a classic mistake: people adding their passwords to the npm packages they publish!


1. Avoid publishing secrets to the npm registry

Whether you’re making use of API keys, passwords or other secrets, they can very easily end up leaking into source control or even a published package on the public npm registry. You may have secrets in your working directory in designated files such as a .env file, which should be added to a .gitignore to avoid committing it to an SCM. But what happens when you publish an npm package from the project’s directory?

The npm CLI packs up a project into a tar archive (tarball) in order to push it to the registry. The following criteria determine which files and directories are added to the tarball:

  • If there is either a .gitignore or a .npmignore file, the contents of the file are used as an ignore pattern when preparing the package for publication.
  • If both ignore files exist, everything not located in .npmignore is published to the registry. This condition is a common source of confusion and is a problem that can lead to leaking secrets. Developers may end up updating the .gitignore file, but forget to update .npmignore as well, which can lead to a potentially sensitive file not being pushed to source control, but still being included in the npm package.

Another good practice to adopt is making use of the files property in package.json, which works as a whitelist and specifies the array of files to be included in the package that is to be created and installed (while the ignore file functions as a blacklist). The files property and an ignore file can both be used together to determine which files should explicitly be included, as well as excluded, from the package. When using both, the files property in package.json takes precedence over the ignore file.

When a package is published, the npm CLI will verbosely display the archive being created. To be extra careful, add a --dry-run argument to your publish command in order to first review how the tarball is created without actually publishing it to the registry.
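
As a sketch, a whitelist in package.json might look like this (the file names are hypothetical), and the dry run lets you inspect the tarball contents first:

"files": [
  "dist",
  "README.md"
]

$ npm publish --dry-run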

In January 2019, npm shared on their blog that they added a mechanism that automatically revokes a token if they detect that one has been published with a package.


2. Enforce the lockfile

We embraced the birth of package lockfiles with open arms: they introduced deterministic installations across different environments, and enforced dependency expectations across team collaboration. Life is good! Or so I thought… what would have happened had I slipped a change into the project’s package.json file but had forgotten to commit the lockfile alongside it?

Both Yarn and npm act the same during dependency installation. When they detect an inconsistency between the project’s package.json and the lockfile, they compensate for the change based on the package.json manifest by installing different versions than those that were recorded in the lockfile.

This kind of situation can be hazardous for build and production environments as they could pull in unintended package versions and render the entire benefit of a lockfile futile.

Luckily, there is a way to tell both Yarn and npm to adhere to a specified set of dependencies and their versions by referencing them from the lockfile. Any inconsistency will abort the installation. The command line should read as follows:

  • If you’re using Yarn, run yarn install --frozen-lockfile.
  • If you’re using npm, run npm ci.

3. Minimize attack surfaces by ignoring run-scripts

The npm CLI works with package run-scripts. If you’ve ever run npm start or npm test then you’ve used package run-scripts too. The npm CLI builds on scripts that a package can declare, and allows packages to define scripts to run at specific entry points during the package’s installation in a project. For example, some of these script hook entries may be postinstall scripts that a package that is being installed will execute in order to perform housekeeping chores.

With this capability, bad actors may create or alter packages to perform malicious acts by running any arbitrary command when their package is installed. A couple of cases where we’ve already seen this happening are the popular eslint-scope incident that harvested npm tokens, and the crossenv incident, along with 36 other packages that abused a typosquatting attack on the npm registry.

Apply these best practices in order to minimize the malicious module attack surface:

  • Always vet and perform due diligence on third-party modules that you install in order to confirm their health and credibility.
  • Hold off on upgrading blindly to new versions; allow new package versions some time to circulate before trying them out.
  • Before upgrading, make sure to review changelogs and release notes for the upgraded version.
  • When installing packages, make sure to add the --ignore-scripts flag to disable the execution of any scripts by third-party packages.
  • Consider adding ignore-scripts to your .npmrc project file, or to your global npm configuration, as sketched below.
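
A sketch of both forms, as a per-install flag and as a project-wide setting:

$ npm install my-package --ignore-scripts

And in the project’s .npmrc file:

ignore-scripts=true
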
4. Assess npm project health

Outdated dependencies

Rushing to constantly upgrade dependencies to their latest releases is not necessarily a good practice if it is done without reviewing release notes, the code changes, and generally testing new upgrades in a comprehensive manner. With that said, staying out of date and not upgrading at all, or after a long time, is a source for trouble as well.

The npm CLI can provide information about the freshness of the dependencies you use with regard to their semantic versioning offset. By running npm outdated, you can see which packages are out of date:

In the command’s output, dependencies in yellow match the semantic versioning range specified in the package.json manifest, while dependencies colored in red mean that there’s an update available. The output also shows the latest available version for each dependency.


Call the doctor

Between the variety of Node.js package managers, and different versions of Node.js you may have installed in your path, how do you verify a healthy npm installation and working environment? Whether you’re working with the npm CLI in a development environment or within a CI, it is important to assess that everything is working as expected.

Call the doctor! The npm CLI incorporates a health assessment tool to diagnose your environment for a well-working npm interaction. Run npm doctor to review your npm setup:

  • Check the official npm registry is reachable, and display the currently configured registry.
  • Check that Git is available.
  • Review installed npm and Node.js versions.
  • Run permission checks on the various folders such as the local and global node_modules, and on the folder used for package cache.
  • Check the local npm module cache for checksum correctness.

5. Audit for vulnerabilities in open source dependencies

The npm ecosystem is the single largest repository of application libraries amongst all the other language ecosystems. The registry and the libraries in it are at the core for JavaScript developers as they are able to leverage work that others have already built and incorporate it into their code-base. With that said, the increasing adoption of open source libraries in applications brings with it an increased risk of introducing security vulnerabilities.

Many popular npm packages have been found to be vulnerable and may carry a significant risk without proper security auditing of your project’s dependencies. Some examples are npm request, superagent, mongoose, and even security-related packages like jsonwebtoken, and npm validator.

Security doesn’t end by just scanning for security vulnerabilities when installing a package but should also be streamlined with developer workflows to be effectively adopted throughout the entire lifecycle of software development, and monitored continuously when code is deployed.


Scan for vulnerabilities

To scan for security vulnerabilities with Snyk, use:

$ npm install -g snyk
$ snyk test

When you run a Snyk test, Snyk reports the vulnerabilities it found and displays the vulnerable paths so you can track the dependency tree and understand which module introduced a vulnerability. Most importantly, Snyk provides you with actionable remediation advice, so you can upgrade to a fixed version through an automated pull request that Snyk opens in your repository, or apply a patch that Snyk provides to mitigate the vulnerability if no fix is available. Snyk provides a smart upgrade by recommending the minimal semver upgrade possible for the vulnerable package.


Monitor for vulnerabilities discovered in open source libraries

The security work doesn’t end there.

What about security vulnerabilities found in an application’s dependency after the application has been deployed? That’s where the importance of security monitoring and tight integration with the project’s development lifecycle comes in.

We recommend integrating Snyk with your source code management (SCM) system such as GitHub or GitLab so that Snyk actively monitors your projects and:

  • Automatically open PRs to upgrade or patch vulnerable dependencies for you
  • Scan and detect vulnerabilities in open source libraries that a pull request may have introduced

If you can’t integrate Snyk with an SCM, it is possible to monitor snapshots of your projects as sent from the Snyk CLI tool as well, by simply running:

$ snyk monitor

How is Snyk different from npm audit?

  1. We invite you to review a blog post published by NearForm which compares the differences between npm’s audit and Snyk.
  2. Snyk’s vulnerability database offers comprehensive data about vulnerabilities through its threat intelligence system, providing better coverage and the ability to surface and report vulnerabilities that have not yet received a CVE. For example, 72% of the vulnerabilities in npm’s advisories were added first to the Snyk database. See more on this here: https://snyk.io/features/vulnerabilitiy-database/

6. Use a local npm proxy

The npm registry is the biggest collection of packages available for all JavaScript developers, and it is also the home of most of the open source projects for web developers. But sometimes you might have different needs in terms of security, deployments or performance. When that’s the case, npm allows you to switch to a different registry:

When you run npm install, it automatically starts communicating with the main registry to resolve all your dependencies; if you wish to use a different registry, that too is pretty straightforward:

  1. Run npm set registry to set up a default registry.
  2. Use the --registry argument for a single command.
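
As a sketch (the registry URL is hypothetical):

$ npm config set registry https://registry.example.com
$ npm install my-package --registry https://registry.example.com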

Verdaccio is a simple lightweight zero-config-required private registry and installing it is as simple as follows:

$ npm install --global verdaccio

Hosting your own registry was never so easy! Let’s check the most important features of this tool:

  • It supports the npm registry format including private package features, scope support, package access control and authenticated users in the web interface.
  • It provides the ability to hook remote registries, route each dependency to different registries, and cache tarballs. To reduce duplicate downloads and save bandwidth on your local development and CI servers, you should proxy all dependencies.
  • As an authentication provider, by default it uses htpasswd-based security, but it also supports GitLab, Bitbucket, and LDAP. You can also plug in your own.
  • It’s easy to scale using a different storage provider.
  • If your project is based on Docker, using the official image is the best choice.
  • It enables really fast bootstrap for testing environments, and is handy for testing big mono-repos projects.

It is fairly simple to run:

$ verdaccio --config /path/config --listen 5000

If you use Verdaccio for a local private registry, consider having a configuration for your packages that enforces publishing to the local registry, to avoid accidental publishing by developers to a public registry. To achieve this, add the following to package.json:

"publishConfig": {
  "registry": "https://localhost:5000"
}

Your registry is running—yay! Now, to publish a package, just run npm publish and it’s ready for you to share with the world.


7. Responsibly disclose security vulnerabilities

When security vulnerabilities are found, they pose a potentially serious threat if publicly disclosed without prior warning or appropriate mitigation available for users to protect themselves.

It is recommended that security researchers follow a responsible disclosure program, which is a set of processes and guidelines that aims to connect researchers with the vendor or maintainer of the vulnerable asset in order to convey the vulnerability, its impact, and its applicability. Once the vulnerability is correctly triaged, the vendor and researcher coordinate a fix and a publication date for the vulnerability in an effort to provide an upgrade path or remediation for affected users before the security issue is made public.

Security is too important to be an afterthought or handled unethically. At Snyk, we deeply value the security community and believe that a responsible disclosure of security vulnerabilities in open source packages helps us ensure the security and privacy of the users.

Snyk’s security research team regularly collaborates with the community for bug bounties, such as the case with f2e-server that resulted in hundreds of community disclosures, as well as Snyk’s very close partnership with academic researchers such as Virginia Tech to provide security expertise and the ability to coordinate with vendors and community maintainers.

We invite you to collaborate with us, and we offer our help with the disclosure process.

8. Enable 2FA

In October 2017, npm officially announced support for two-factor authentication (2FA) for developers using the npm registry to host their closed and open source packages.

Even though 2FA has been supported on the npm registry for a while now, it seems to be adopted slowly, one example being the eslint-scope incident in mid-2018, when a stolen developer account on the ESLint team led to a malicious version of eslint-scope being published by bad actors.

** URGENT SECURITY WARNING ** Please share.

Today, version 3.7.2 of eslint-scope (https://t.co/Gkc9XhDRN6) was found to contain malicious code that steals your NPM credentials. Take action now if you are using version 3.7.2.

Snyk DB entry: https://t.co/dAhhA3cZQP
— Snyk (@snyksec) July 12, 2018

The registry supports two modes for enabling 2FA in a user’s account:

  • Authorization-only—when a user logs in to npm via the website or the CLI, or performs other sets of actions such as changing profile information.
  • Authorization and write-mode—profile and log-in actions, as well as write actions such as managing tokens and packages, and minor support for team and package visibility information.

Equip yourself with an authenticator application, such as Google Authenticator, which you can install on a mobile device, and you’re ready to get started. One easy way to get started with the 2FA extended protection for your account is through npm’s user interface, which allows enabling it very easily. If you’re a command line person, it’s also easy to enable 2FA when using a supported npm client version (>=5.5.1):

$ npm profile enable-2fa auth-and-writes

Follow the command line instructions to enable 2FA and to save emergency authentication codes. If you wish to enable 2FA mode for login and profile changes only, replace auth-and-writes with auth-only in the command above.


9. Use npm author tokens

Every time you log in with the npm CLI, a token is generated for your user and authenticates you to the npm registry. Tokens make it easy to perform npm registry-related actions during CI and automated procedures, such as accessing private modules on the registry or publishing new versions from a build step.

Tokens can be managed through the npm registry website, as well as using the npm command line client. An example of using the CLI to create a read-only token that is restricted to a specific IPv4 address range is as follows:

$ npm token create --read-only --cidr=192.0.2.0/24

To verify which tokens are created for your user or to revoke tokens in cases of emergency, you can use npm token list or npm token revoke respectively.


10. Understand module naming conventions and typosquatting attacks

Naming a module is the first thing you might do when creating a package, but before you settle on a final name, note that npm enforces some rules that a package name must follow:

  • It is limited to 214 characters
  • It cannot start with a dot or an underscore
  • No uppercase letters in the name
  • No trailing spaces
  • Some URL-unsafe special characters, such as ~'!()*, are not allowed
  • node_modules and favicon.ico are banned names

Even if you follow these rules, be aware that npm uses a spam detection mechanism when you publish new packages, based on a score and on whether a package name violates its terms of service. If the conditions are violated, the registry might deny the request.

Typosquatting is an attack that relies on mistakes made by users, such as typos. With typosquatting, bad actors could publish malicious modules to the npm registry with names that look much like existing popular modules.

We have been tracking tens of malicious packages in the npm ecosystem; they have been seen on the PyPi Python registry as well. Perhaps some of the most popular incidents have been for cross-env, event-stream, and eslint-scope.

One of the main targets for typosquatting attacks are the user credentials, since any package has access to environment variables via the global variable process.env. Other examples we’ve seen in the past include the case with event-stream, where the attack targeted developers in the hopes of injecting malicious code into an application’s source code.

To reduce the risk of such attacks you might do the following:

  • Be extra-careful when copy-pasting package installation instructions into the terminal. Make sure to verify in the source code repository as well as on the npm registry that this is indeed the package you are intending to install. You might verify the metadata of the package with npm info to fetch more information about contributors and latest versions.
  • Default to being logged out of npm in your daily work routines, so your credentials won’t be the weak spot that would lead to easily compromising your account.
  • When installing packages, append the --ignore-scripts flag to reduce the risk of arbitrary command execution. For example: npm install my-malicious-package --ignore-scripts

Be sure to print out the cheat sheet and pin it up somewhere to remind you of some of the npm security best practices you should follow if you’re a JavaScript developer, or if you just enjoy using npm for fun.



Originally published by Liran Tal, Juan Picado at https://snyk.io