Remove Unwanted Files & Directories From Your Node_modules Folder

ModClean

Remove unwanted files and directories from your node_modules folder 

This documentation is for ModClean 2.x, which requires Node v6.9+. If you need to support older versions, use ModClean 1.3.0 instead.

ModClean is a utility that finds and removes unnecessary files and folders from your node_modules directory based on predefined and custom glob patterns. It ships with both a CLI and a programmatic API so you can customize it for your environment. ModClean is used and tested daily in an enterprise environment.

Why?

There are a few different reasons why you would want to use ModClean:

  • Committing modules. In some environments (especially enterprise), you're required to commit the node_modules directory with your application into version control. This is due to compatibility, vetting, and vulnerability scanning rules for open source software. It can lead to issues with project size, checking out/pulling changes, and the infamous 255-character path limit if you're unlucky enough to be on Windows or SVN.
  • Wasted space on your server. Why waste space on your server with files not needed by you or the modules?
  • Packaged applications. If you're required to package your application, you can reduce the size of the package quickly by removing unneeded files.
  • Compiled applications. Tools like NW.js and Electron make it easy to create cross-platform desktop apps, but depending on the modules, your app can become huge. Reduce the size of the compiled application before shipping and make it faster for users to download.
  • Save space on your machine. Depending on the number of global modules you have installed, you can reclaim disk space by removing those gremlin files.
  • and much more!

The cake is a lie, but the Benchmarks are not.

How?

New! In ModClean 2.0.0, patterns are now provided by plugins instead of a static patterns.json file bundled with the module. By default, ModClean comes with modclean-patterns-default installed, providing the same patterns as before. You now have the ability to create your own pattern plugins and to use multiple plugins when cleaning your modules. This allows flexibility with both the programmatic API and CLI.

ModClean scans the node_modules directory of your choosing, finds all files and folders that match the defined patterns, and deletes them. Both the CLI and the programmatic API provide all the options needed to customize this process to your requirements. Depending on the number of modules your app requires, this can remove hundreds to thousands of files and considerably reduce disk usage.

(File and disk space reductions also vary with the version of NPM and the operating system.)
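
For example, the programmatic API can be used along these lines. This is a rough sketch only: the option and result field names shown here are assumptions from memory, so check the API documentation for the exact shape.

const modclean = require('modclean');

// Clean ./node_modules using the default safe patterns.
// `patterns` is assumed to take the same plugin:rule syntax as the CLI's -n flag.
modclean({ patterns: ['default:safe'] })
    .clean()
    .then(results => {
        // `results.deleted` is assumed to list the removed files/folders.
        console.log(`ModClean removed ${results.deleted.length} items`);
    })
    .catch(err => console.error(err));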

IMPORTANT: This module has been heavily tested in an enterprise environment on large enterprise applications. The patterns provided in modclean-patterns-default have worked very well for cleaning up useless files in many popular modules. There are hundreds of thousands of modules on NPM, and I simply cannot cover them all. If you are using ModClean on your application for the first time, create a copy of the application first so you can verify it still runs properly after running ModClean. The patterns are designed so that no crucial module files are removed, but there could be one-off cases where a module is affected, which is why I stress that testing and backups are important. If you find any files that should be removed, please create a pull request to modclean-patterns-default or create your own patterns plugin to share with the community.

Removal Benchmark

So how well does this module work? If we npm install sails and run ModClean on it, here are the results:

All tests ran on macOS 10.12.3 with Node v6.9.1 and NPM v4.0.5

Using Default Safe Patterns

modclean -n default:safe or modclean

                 Total Files   Total Folders   Total Size
Before ModClean  16,179        1,941           71.24 MB
After ModClean   12,192        1,503           59.35 MB
Reduced          3,987         438             11.88 MB

Using Safe and Caution Patterns

modclean -n default:safe,default:caution

                 Total Files   Total Folders   Total Size
Before ModClean  16,179        1,941           71.24 MB
After ModClean   11,941        1,473           55.28 MB
Reduced          4,238         468             15.95 MB

Using Safe, Caution and Danger Patterns

modclean --patterns="default:*"

                 Total Files   Total Folders   Total Size
Before ModClean  16,179        1,941           71.24 MB
After ModClean   11,684        1,444           51.76 MB
Reduced          4,495         497             19.47 MB

That makes a huge difference in the number of files and the amount of disk space.

View additional benchmarks on the Wiki: Benchmarks. If you would like to run some of your own benchmarks, you can use modclean-benchmark.

Install

Install locally

npm install modclean --save

Install globally (CLI)

npm install modclean -g

Read the CLI Documentation

Read the API Documentation

Read the Custom Patterns Plugin Documentation


Issues

If you find any bugs in either ModClean or the CLI utility, please feel free to open an issue. Feature requests may also be posted in the issues.

Download Details:

Author: ModClean
Source Code: https://github.com/ModClean/modclean 
License: MIT license

#javascript #node #cli #clean 


FemtoCleaner.jl: The Code Behind Femtocleaner

FemtoCleaner

FemtoCleaner cleans your julia projects by upgrading deprecated syntax, removing version compatibility workarounds and anything else that has a unique upgrade path. FemtoCleaner is designed to be as style-preserving as possible. It does not perform code formatting. The logic behind recognizing and rewriting deprecated constructs can be found in the Deprecations.jl package, which makes use of CSTParser.jl under the hood.

serious femtocleaning

User Manual

To set up FemtoCleaner on your repository, go to https://github.com/integration/femtocleaner and click "Configure" to select the repositories you wish to add.

Invoking FemtoCleaner

There are currently three triggers that cause FemtoCleaner to run over your repository:

  1. FemtoCleaner is installed on your repository for the first time
  2. You change your repository's REQUIRE file to drop support for old versions of Julia
  3. Manually, by opening an issue with the title "Run femtocleaner" on the desired repository.

In all cases, femtocleaner will clone your repository, upgrade any deprecations it can, and then open a pull request with the changes (in case 3, it will convert the existing issue into a PR instead).

Interacting with the PR

FemtoCleaner can automatically perform certain common commands in response to user request in a PR review. These commands are invoked by creating a "Changes Requested" review. FemtoCleaner will attempt to interpret each comment in such a review as a request to perform an automated function. The following commands are currently supported.

  • delete this entirely - FemtoCleaner addresses the review by deleting the entire expression starting on the referenced line.
  • align arguments - Assuming the preceding line contains a multi-line function signature, reformat the argument list, aligning each line to the opening parenthesis.
  • bad bot - To be used when you deem the action taken by the bot to be incorrect. At present this will automatically open an issue on this repository.

If there are other such actions you would find useful, feel free to file an issue or (even better) submit a PR.

Privacy and Security

FemtoCleaner receives the content of many GitHub hooks. These contain certain publicly available details about the repository and the user who initiated the event. FemtoCleaner will also make several subsequent queries via the public GitHub API to the repository in question. The contents of these may be retained in server logs.

In order to perform its function, FemtoCleaner requires read/write access to your repository and its issues and pull requests. While FemtoCleaner runs in a sandboxed environment and access to the underlying hardware is controlled and restricted, you should be aware that you are extending these rights. If you are intending to install FemtoCleaner on an organizational account, please ensure you are authorized to extend these permissions to FemtoCleaner.

For the foregoing reasons, you should not install FemtoCleaner on a private repository. Doing so may result in disclosure of contents of the private repository.

Please note that the license applies to both the source code and your use of the publicly hosted version thereof. In particular:

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Running FemtoCleaner locally

It is possible to run FemtoCleaner locally (to fix, for example, deprecations in a private repository).

Install FemtoCleaner (currently working on Julia v0.6.x only) using

Pkg.clone("https://github.com/Keno/AbstractTrees.jl")
Pkg.clone("https://github.com/JuliaComputing/Deprecations.jl")
Pkg.clone("https://github.com/JuliaComputing/FemtoCleaner.jl")

A repository of Julia code can be cleaned using

FemtoCleaner.cleanrepo(path::String; show_diff = true, delete_local = true)

This clones the repo located at path, which can be a file system path or a URL, to a temporary directory and fixes the deprecations. If show_diff is true, the diff from applying the deprecations is shown. If delete_local is true, the cleaned repo is deleted when the function finishes.
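
For example, to clean a local checkout, view the diff, and keep the cleaned copy around for inspection (the path here is a placeholder):

using FemtoCleaner

FemtoCleaner.cleanrepo("/path/to/MyPackage.jl"; show_diff = true, delete_local = false)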

Developer Manual

You are encouraged to contribute changes to this repository. This software is used by many people. Even minor changes in usability can make a big difference. If you want to add additional interactions to the bot itself, this repository is the right place. If you want to contribute additional deprecation rewrites, please do so at https://github.com/JuliaComputing/Deprecations.jl.

Deployment of the publicly hosted copy

The publicly hosted copy of FemtoCleaner is automatically deployed from the master branch of this repository whenever a new commit to said branch is made.

Setting up a development copy of femtocleaner

It is possible to set up a copy of femtocleaner to test changes to the codebase before attempting to deploy them on the main version. To do so, you will need a publicly routable server, with a copy of julia and this repository (and its dependencies).

You will then need to set up your own GitHub app at https://github.com/settings/apps/new. Make sure to enter your server in the "Webhook URL" portion of the form. By default, the app will listen on port 10000+app_id, where app_id is the ID GitHub assigns your app upon completion of the registration process. Once you have set up your GitHub app, you will need to download the private key and save it as privkey.pem in Pkg.dir("FemtoCleaner"). Additionally, you should create a file named app_id, containing the ID assigned to your app by GitHub (it will be visible on the confirmation page once you have set up your app with GitHub). Then, you may launch FemtoCleaner by running julia -e 'using FemtoCleaner; FemtoCleaner.run_server()'.

It is recommended that you set up a separate repository for testing your staging copy that is not covered by the publicly hosted version, to avoid conflicting updates. GitHub provides a powerful interface to see the messages delivered to your app in the "Advanced" tab of your app's settings. For interactive development, you may use the Revise package to reload the FemtoCleaner source code before every request (simply execute using Revise on a separate line in the REPL before running FemtoCleaner). By editing the files on the server and using GitHub's "Redeliver" option to replay events of interest, a quick edit-debug cycle can be achieved.

Download Details:

Author: JuliaComputing
Source Code: https://github.com/JuliaComputing/FemtoCleaner.jl 
License: View license

#julia #clean #github 


Scientist: A Ruby Library for Carefully Refactoring Critical Paths

Scientist!

A Ruby library for carefully refactoring critical paths

How do I science?

Let's pretend you're changing the way you handle permissions in a large web app. Tests can help guide your refactoring, but you really want to compare the current and refactored behaviors under load.

require "scientist"

class MyWidget
  def allows?(user)
    experiment = Scientist::Default.new "widget-permissions"
    experiment.use { model.check_user?(user).valid? } # old way
    experiment.try { user.can?(:read, model) } # new way

    experiment.run
  end
end

Wrap a use block around the code's original behavior, and wrap try around the new behavior. experiment.run will always return whatever the use block returns, but it does a bunch of stuff behind the scenes:

  • It decides whether or not to run the try block,
  • Randomizes the order in which use and try blocks are run,
  • Measures the durations of all behaviors in seconds,
  • Compares the result of try to the result of use,
  • Swallows and records exceptions raised in the try block when you override raised, and
  • Publishes all this information.

The use block is called the control. The try block is called the candidate.

Creating an experiment is wordy, but when you include the Scientist module, the science helper will instantiate an experiment and call run for you:

require "scientist"

class MyWidget
  include Scientist

  def allows?(user)
    science "widget-permissions" do |experiment|
      experiment.use { model.check_user(user).valid? } # old way
      experiment.try { user.can?(:read, model) } # new way
    end # returns the control value
  end
end

If you don't declare any try blocks, none of the Scientist machinery is invoked and the control value is always returned.

Making science useful

The examples above will run, but they're not really doing anything. The try blocks don't run yet and none of the results get published. Replace the default experiment implementation to control execution and reporting:

require "scientist/experiment"

class MyExperiment
  include Scientist::Experiment

  attr_accessor :name

  def initialize(name)
    @name = name
  end

  def enabled?
    # see "Ramping up experiments" below
    true
  end

  def raised(operation, error)
    # see "In a Scientist callback" below
    p "Operation '#{operation}' failed with error '#{error.inspect}'"
    super # will re-raise
  end

  def publish(result)
    # see "Publishing results" below
    p result
  end
end

When Scientist::Experiment is included in a class, it automatically sets it as the default implementation via Scientist::Experiment.set_default. This set_default call is skipped if you include Scientist::Experiment in a module.

Now calls to the science helper will load instances of MyExperiment.

Controlling comparison

Scientist compares control and candidate values using ==. To override this behavior, use compare to define how to compare observed values instead:

class MyWidget
  include Scientist

  def users
    science "users" do |e|
      e.use { User.all }         # returns User instances
      e.try { UserService.list } # returns UserService::User instances

      e.compare do |control, candidate|
        control.map(&:login) == candidate.map(&:login)
      end
    end
  end
end

If either the control block or candidate block raises an error, Scientist compares the two observations' classes and messages using ==. To override this behavior, use compare_error to define how to compare observed errors instead:

class MyWidget
  include Scientist

  def slug_from_login(login)
    science "slug_from_login" do |e|
      e.use { User.slug_from_login login }         # returns String instance or ArgumentError
      e.try { UserService.slug_from_login login }  # returns String instance or ArgumentError

      compare_error_message_and_class = -> (control, candidate) do
        control.class == candidate.class && 
        control.message == candidate.message
      end

      compare_argument_errors = -> (control, candidate) do
        control.class == ArgumentError &&
        candidate.class == ArgumentError &&
        control.message.start_with?("Input has invalid characters") &&
        candidate.message.start_with?("Invalid characters in input") 
      end

      e.compare_error do |control, candidate|
        compare_error_message_and_class.call(control, candidate) ||
        compare_argument_errors.call(control, candidate)
      end
    end
  end
end

Adding context

Results aren't very useful without some way to identify them. Use the context method to add to or retrieve the context for an experiment:

science "widget-permissions" do |e|
  e.context :user => user

  e.use { model.check_user(user).valid? }
  e.try { user.can?(:read, model) }
end

context takes a Symbol-keyed Hash of extra data. The data is available in Experiment#publish via the context method. If you're using the science helper a lot in a class, you can provide a default context:

class MyWidget
  include Scientist

  def allows?(user)
    science "widget-permissions" do |e|
      e.context :user => user

      e.use { model.check_user(user).valid? }
      e.try { user.can?(:read, model) }
    end
  end

  def destroy
    science "widget-destruction" do |e|
      e.use { old_scary_destroy }
      e.try { new_safe_destroy }
    end
  end

  def default_scientist_context
    { :widget => self }
  end
end

The widget-permissions and widget-destruction experiments will both have a :widget key in their contexts.

Expensive setup

If an experiment requires expensive setup that should only occur when the experiment is going to be run, define it with the before_run method:

# Code under test modifies this in-place. We want to copy it for the
# candidate code, but only when needed:
value_for_original_code = big_object
value_for_new_code      = nil

science "expensive-but-worthwhile" do |e|
  e.before_run do
    value_for_new_code = big_object.deep_copy
  end
  e.use { original_code(value_for_original_code) }
  e.try { new_code(value_for_new_code) }
end

Keeping it clean

Sometimes you don't want to store the full value for later analysis. For example, an experiment may return User instances, but when researching a mismatch, all you care about is the logins. You can define how to clean these values in an experiment:

class MyWidget
  include Scientist

  def users
    science "users" do |e|
      e.use { User.all }
      e.try { UserService.list }

      e.clean do |value|
        value.map(&:login).sort
      end
    end
  end
end

And this cleaned value is available in observations in the final published result:

class MyExperiment
  include Scientist::Experiment

  # ...

  def publish(result)
    result.control.value         # [<User alice>, <User bob>, <User carol>]
    result.control.cleaned_value # ["alice", "bob", "carol"]
  end
end

Note that the #clean method will discard the previous cleaner block if you call it again. If for some reason you need to access the currently configured cleaner block, Scientist::Experiment#cleaner will return the block without further ado. (This probably won't come up in normal usage, but comes in handy if you're writing, say, a custom experiment runner that provides default cleaners.)

Ignoring mismatches

During the early stages of an experiment, it's possible that some of your code will always generate a mismatch for reasons you know and understand but haven't yet fixed. Instead of these known cases always showing up as mismatches in your metrics or analysis, you can tell an experiment whether or not to ignore a mismatch using the ignore method. You may include more than one block if needed:

def admin?(user)
  science "widget-permissions" do |e|
    e.use { model.check_user(user).admin? }
    e.try { user.can?(:admin, model) }

    e.ignore { user.staff? } # user is staff, always an admin in the new system
    e.ignore do |control, candidate|
      # new system doesn't handle unconfirmed users yet:
      control && !candidate && !user.confirmed_email?
    end
  end
end

The ignore blocks are only called if the values don't match. Unless a compare_error comparator is defined, two cases are considered mismatches: a) one observation raising an exception and the other not, b) observations raising exceptions with different classes or messages.

Enabling/disabling experiments

Sometimes you don't want an experiment to run, say, when disabling a new codepath for anyone who isn't staff. You can disable an experiment by setting a run_if block. If this returns false, the experiment will merely return the control value. Otherwise, it defers to the experiment's configured enabled? method.

class DashboardController
  include Scientist

  def dashboard_items
    science "dashboard-items" do |e|
      # only run this experiment for staff members
      e.run_if { current_user.staff? }
      # ...
    end
  end
end

Ramping up experiments

As a scientist, you know it's always important to be able to turn your experiment off, lest it run amok and result in villagers with pitchforks on your doorstep. In order to control whether or not an experiment is enabled, you must include the enabled? method in your Scientist::Experiment implementation.

class MyExperiment
  include Scientist::Experiment

  attr_accessor :name, :percent_enabled

  def initialize(name)
    @name = name
    @percent_enabled = 100
  end

  def enabled?
    percent_enabled > 0 && rand(100) < percent_enabled
  end

  # ...

end

This code will be invoked for every method with an experiment every time, so be sensitive about its performance. For example, you can store an experiment in the database but wrap it in various levels of caching such as memcache or per-request thread-locals.
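
For instance, here is a sketch of one way to keep enabled? cheap; the ExperimentConfig lookup is hypothetical, standing in for whatever store you use:

def enabled?
  # Hypothetical: read the rollout percentage once per experiment instance
  # instead of querying the store on every call.
  @percent_enabled ||= ExperimentConfig.percent_for(name)
  @percent_enabled > 0 && rand(100) < @percent_enabled
end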

Publishing results

What good is science if you can't publish your results?

You must implement the publish(result) method, and can publish data however you like. For example, timing data can be sent to graphite, and mismatches can be placed in a capped collection in redis for debugging later.

The publish method is given a Scientist::Result instance with its associated Scientist::Observations:

class MyExperiment
  include Scientist::Experiment

  # ...

  def publish(result)

    # Store the timing for the control value,
    $statsd.timing "science.#{name}.control", result.control.duration
    # for the candidate (only the first; see "Breaking the rules" below),
    $statsd.timing "science.#{name}.candidate", result.candidates.first.duration

    # and counts for match/ignore/mismatch:
    if result.matched?
      $statsd.increment "science.#{name}.matched"
    elsif result.ignored?
      $statsd.increment "science.#{name}.ignored"
    else
      $statsd.increment "science.#{name}.mismatched"
      # Finally, store mismatches in redis so they can be retrieved and examined
      # later on, for debugging and research.
      store_mismatch_data(result)
    end
  end

  def store_mismatch_data(result)
    payload = {
      :name            => name,
      :context         => context,
      :control         => observation_payload(result.control),
      :candidate       => observation_payload(result.candidates.first),
      :execution_order => result.observations.map(&:name)
    }

    key = "science.#{name}.mismatch"
    $redis.lpush key, payload
    $redis.ltrim key, 0, 1000
  end

  def observation_payload(observation)
    if observation.raised?
      {
        :exception => observation.exception.class,
        :message   => observation.exception.message,
        :backtrace => observation.exception.backtrace
      }
    else
      {
        # see "Keeping it clean" above
        :value => observation.cleaned_value
      }
    end
  end
end

Testing

When running your test suite, it's helpful to know that the experimental results always match. To help with testing, Scientist defines a raise_on_mismatches class attribute when you include Scientist::Experiment. Only do this in your test suite!

To raise on mismatches:

class MyExperiment
  include Scientist::Experiment
  # ... implementation
end

MyExperiment.raise_on_mismatches = true

Scientist will raise a Scientist::Experiment::MismatchError exception if any observations don't match.

Custom mismatch errors

To instruct Scientist to raise a custom error instead of the default Scientist::Experiment::MismatchError:

class CustomMismatchError < Scientist::Experiment::MismatchError
  def to_s
    message = "There was a mismatch! Here's the diff:"

    diffs = result.candidates.map do |candidate|
      Diff.new(result.control, candidate)
    end.join("\n")

    "#{message}\n#{diffs}"
  end
end

science "widget-permissions" do |e|
  e.use { Report.find(id) }
  e.try { ReportService.new.fetch(id) }

  e.raise_with CustomMismatchError
end

This allows for pre-processing on mismatch error exception messages.

Handling errors

In candidate code

Scientist rescues and tracks all exceptions raised in a try or use block, including some where rescuing may cause unexpected behavior (like SystemExit or ScriptError). To rescue a more restrictive set of exceptions, modify the RESCUES list:

# default is [Exception]
Scientist::Observation::RESCUES.replace [StandardError]

Timeout ⏲️: If you're introducing a candidate that could possibly time out, use caution. ⚠️ While Scientist rescues all exceptions that occur in the candidate block, it does not protect you from timeouts, as doing so would be complicated. It would likely require running the candidate code in a background job and tracking the time of a request. We feel the cost of this complexity would outweigh the benefit, so make sure your code doesn't cause timeouts. This risk can be reduced by running the experiment on a low percentage so that users can (most likely) bypass the experiment by refreshing the page if they hit a timeout. See "Ramping up experiments" above for details on how to set the percentage for your experiment.

In a Scientist callback

If an exception is raised within any of Scientist's internal helpers, like publish, compare, or clean, the raised method is called with the symbol name of the internal operation that failed and the exception that was raised. The default behavior of Scientist::Default is to simply re-raise the exception. Since this halts the experiment entirely, it's often a better idea to handle the error and continue so the experiment as a whole isn't canceled:

class MyExperiment
  include Scientist::Experiment

  # ...

  def raised(operation, error)
    InternalErrorTracker.track! "science failure in #{name}: #{operation}", error
  end
end

The operations that may be handled here are:

  • :clean - an exception is raised in a clean block
  • :compare - an exception is raised in a compare block
  • :enabled - an exception is raised in the enabled? method
  • :ignore - an exception is raised in an ignore block
  • :publish - an exception is raised in the publish method
  • :run_if - an exception is raised in a run_if block

Designing an experiment

Because enabled? and run_if determine when a candidate runs, it's impossible to guarantee that it will run every time. For this reason, Scientist is only safe for wrapping methods that aren't changing data.

When using Scientist, we've found it most useful to modify both the existing and new systems simultaneously anywhere writes happen, and verify the results at read time with science. raise_on_mismatches has also been useful to ensure that the correct data was written during tests, and reviewing published mismatches has helped us find any situations we overlooked with our production data at runtime. When writing to and reading from two systems, it's also useful to write some data reconciliation scripts to verify and clean up production data alongside any running experiments.

Noise and error rates

Keep in mind that Scientist's try and use blocks run sequentially in random order. As such, any data upon which your code depends may change before the second block is invoked, potentially yielding a mismatch between the candidate and control return values. To calibrate your expectations with respect to false negatives arising from systemic conditions external to your proposed changes, consider starting with an experiment in which both the try and use blocks invoke the control method. Then proceed with introducing a candidate.
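
As a sketch, such a baseline experiment might look like this (check_permissions is a placeholder for your existing control method):

science "widget-permissions-baseline" do |e|
  e.use { check_permissions(user) } # control
  e.try { check_permissions(user) } # candidate runs the same code path
end

Any mismatches published by this experiment measure noise in your environment rather than a behavioral difference.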

Finishing an experiment

As your candidate behavior converges on the controls, you'll start thinking about removing an experiment and using the new behavior.

  • If there are any ignore blocks, the candidate behavior is guaranteed to be different. If this is unacceptable, you'll need to remove the ignore blocks and resolve any ongoing mismatches in behavior until the observations match perfectly every time.
  • When removing a read-behavior experiment, it's a good idea to keep any write-side duplication between an old and new system in place until well after the new behavior has been in production, in case you need to roll back.

Breaking the rules

Sometimes scientists just gotta do weird stuff. We understand.

Ignoring results entirely

Science is useful even when all you care about is the timing data or even whether or not a new code path blew up. If you have the ability to incrementally control how often an experiment runs via your enabled? method, you can use it to silently and carefully test new code paths and ignore the results altogether. You can do this by setting ignore { true }, or for greater efficiency, compare { true }.

This will still log mismatches if any exceptions are raised, but will disregard the values entirely.
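
For example, a minimal sketch (old_code and new_code are placeholders):

science "timing-only" do |e|
  e.use { old_code }
  e.try { new_code }
  e.compare { |control, candidate| true } # treat every pair of values as a match
end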

Trying more than one thing

It's not usually a good idea to try more than one alternative simultaneously. Behavior isn't guaranteed to be isolated and reporting + visualization get quite a bit harder. Still, it's sometimes useful.

To try more than one alternative at once, add names to some try blocks:

require "scientist"

class MyWidget
  include Scientist

  def allows?(user)
    science "widget-permissions" do |e|
      e.use { model.check_user(user).valid? } # old way

      e.try("api") { user.can?(:read, model) } # new service API
      e.try("raw-sql") { user.can_sql?(:read, model) } # raw query
    end
  end
end

When the experiment runs, all candidate behaviors are tested and each candidate observation is compared with the control in turn.

No control, just candidates

Define the candidates with named try blocks, omit a use, and pass a candidate name to run:

experiment = MyExperiment.new("various-ways") do |e|
  e.try("first-way")  { ... }
  e.try("second-way") { ... }
end

experiment.run("second-way")

The science helper also knows this trick:

science "various-ways", run: "first-way" do |e|
  e.try("first-way")  { ... }
  e.try("second-way") { ... }
end

Providing fake timing data

If you're writing tests that depend on specific timing values, you can provide canned durations using the fabricate_durations_for_testing_purposes method, and Scientist will report these in Scientist::Observation#duration instead of the actual execution times.

science "absolutely-nothing-suspicious-happening-here" do |e|
  e.use { ... } # "control"
  e.try { ... } # "candidate"
  e.fabricate_durations_for_testing_purposes( "control" => 1.0, "candidate" => 0.5 )
end

fabricate_durations_for_testing_purposes takes a Hash of duration values, keyed by behavior names. (By default, Scientist uses "control" and "candidate", but if you override these as shown in Trying more than one thing or No control, just candidates, use matching names here.) If a name is not provided, the actual execution time will be reported instead.

Like Scientist::Experiment#cleaner, this probably won't come up in normal usage. It's here to make it easier to test code that extends Scientist.

Without including Scientist

If you need to use Scientist in a place where you aren't able to include the Scientist module, you can call Scientist.run:

Scientist.run "widget-permissions" do |e|
  e.use { model.check_user(user).valid? }
  e.try { user.can?(:read, model) }
end

Hacking

Be on a Unixy box. Make sure a modern Bundler is available. script/test runs the unit tests. All development dependencies are installed automatically. Scientist requires Ruby 2.3 or newer.

Wrappers

  • RealGeeks/lab_tech is a Rails engine for using this library by controlling, storing, and analyzing experiment results with ActiveRecord.

Maintainers

@jbarnette, @jesseplusplus, @rick, and @zerowidth


Author: github
Source Code: https://github.com/github/scientist
License: MIT license

#ruby  #ruby-on-rails 


Clean Go Code

Clean Go Code

Preface: Why Write Clean Code?

This document is a reference for the Go community that aims to help developers write cleaner code. Whether you're working on a personal project or as part of a larger team, writing clean code is an important skill to have. Establishing good paradigms and consistent, accessible standards for writing clean code can help prevent developers from wasting many meaningless hours on trying to understand their own (or others') work.

We don’t read code, we decode it – Peter Seibel

As developers, we're sometimes tempted to write code in a way that's convenient for the time being without regard for best practices; this makes code reviews and testing more difficult. In a sense, we're encoding—and, in doing so, making it more difficult for others to decode our work. But we want our code to be usable, readable, and maintainable. And that requires coding the right way, not the easy way.

This document begins with a simple and short introduction to the fundamentals of writing clean code. Later, we'll discuss concrete refactoring examples specific to Go.

A short word on gofmt

I'd like to take a few sentences to clarify my stance on gofmt because there are plenty of things I disagree with when it comes to this tool. I prefer snake case over camel case, and I quite like my constant variables to be uppercase. And, naturally, I also have many opinions on bracket placement. That being said, gofmt does allow us to have a common standard for writing Go code, and that's a great thing. As a developer myself, I can certainly appreciate that Go programmers may feel somewhat restricted by gofmt, especially if they disagree with some of its rules. But in my opinion, homogeneous code is more important than having complete expressive freedom.

Introduction to Clean Code

Clean code is the pragmatic concept of promoting readable and maintainable software. Clean code establishes trust in the codebase and helps minimize the chances of careless bugs being introduced. It also helps developers maintain their agility, which typically plummets as the codebase expands due to the increased risk of introducing bugs.

Test-Driven Development

Test-driven development is the practice of testing your code frequently throughout short development cycles or sprints. It ultimately contributes to code cleanliness by inviting developers to question the functionality and purpose of their code. To make testing easier, developers are encouraged to write short functions that only do one thing. For example, it's arguably much easier to test (and understand) a function that's only 4 lines long than one that's 40.

Test-driven development consists of the following cycle:

  1. Write (or execute) a test
  2. If the test fails, make it pass
  3. Refactor your code accordingly
  4. Repeat

Testing and refactoring are intertwined in this process. As you refactor your code to make it more understandable or maintainable, you need to test your changes thoroughly to ensure that you haven't altered the behavior of your functions. This can be incredibly useful as the codebase grows.
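
As a small illustration (a hypothetical function and test, using only the standard testing package), the kind of short function this cycle encourages is trivial to cover:

// add.go
package calc

// Add returns the sum of a and b.
func Add(a, b int) int {
    return a + b
}

// add_test.go
package calc

import "testing"

func TestAdd(t *testing.T) {
    if got := Add(2, 3); got != 5 {
        t.Errorf("Add(2, 3) = %d, want 5", got)
    }
}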

Naming Conventions

Comments

I'd like to first address the topic of commenting code, which is an essential practice but tends to be misapplied. Unnecessary comments can indicate problems with the underlying code, such as the use of poor naming conventions. However, whether or not a particular comment is "necessary" is somewhat subjective and depends on how legibly the code was written. For example, the logic of well-written code may still be so complex that it requires a comment to clarify what is going on. In that case, one might argue that the comment is helpful and therefore necessary.

In Go, according to golint, all public variables and functions should be annotated. I think this is absolutely fine, as it gives us consistent rules for documenting our code. However, I always want to distinguish between comments that enable auto-generated documentation and all other comments. Annotation comments, for documentation, should be written like documentation—they should be at a high level of abstraction and concern the logical implementation of the code as little as possible.

I say this because there are other ways to explain code and ensure that it's being written comprehensibly and expressively. If the code is neither of those, some people find it acceptable to introduce a comment explaining the convoluted logic. Unfortunately, that doesn't really help. For one, most people simply won't read comments, as they tend to be very intrusive to the experience of reviewing code. Additionally, as you can imagine, a developer won't be too happy if they're forced to review unclear code that's been slathered with comments. The less that people have to read to understand what your code is doing, the better off they'll be.

Let's take a step back and look at some concrete examples. Here's how you shouldn't comment your code:

// iterate over the range 0 to 9 
// and invoke the doSomething function
// for each iteration
for i := 0; i < 10; i++ {
  doSomething(i)
}

This is what I like to call a tutorial comment; it's fairly common in tutorials, which often explain the low-level functionality of a language (or programming in general). While these comments may be helpful for beginners, they're absolutely useless in production code. Hopefully, we aren't collaborating with programmers who don't understand something as simple as a looping construct by the time they've begun working on a development team. As programmers, we shouldn't have to read the comment to understand what's going on—we know that we're iterating over the range 0 to 9 because we can simply read the code. Hence the proverb:

Document why, not how. – Venkat Subramaniam

Following this logic, we can now change our comment to explain why we are iterating over the range 0 to 9:

// instantiate 10 threads to handle the upcoming workload
for i := 0; i < 10; i++ {
  doSomething(i)
}

Now we understand why we have a loop and can tell what we're doing by simply reading the code... Sort of.

This still isn't what I'd consider clean code. The comment is worrying because it probably should not be necessary to express such an explanation in prose, assuming the code is well written (which it isn't). Technically, we're still saying what we're doing, not why we're doing it. We can easily express this "what" directly in our code by using more meaningful names:

for workerID := 0; workerID < 10; workerID++ {
  instantiateThread(workerID)
}

With just a few changes to our variable and function names, we've managed to explain what we're doing directly in our code. This is much clearer for the reader because they won't have to read the comment and then map the prose to the code. Instead, they can simply read the code to understand what it's doing.

Of course, this was a relatively trivial example. Writing clear and expressive code is unfortunately not always so easy; it can become increasingly difficult as the codebase itself grows in complexity. The more you practice writing comments with this mindset, avoiding explanations of what you're doing, the cleaner your code will become.

Function Naming

Let's now move on to function naming conventions. The general rule here is really simple: the more specific the function, the more general its name. In other words, we want to start with a very broad and short function name, such as Run or Parse, that describes the general functionality. Let's imagine that we are creating a configuration parser. Following this naming convention, our top level of abstraction might look something like the following:

func main() {
    configpath := flag.String("config-path", "", "configuration file path")
    flag.Parse()

    config, err := configuration.Parse(*configpath)
    
    ...
}

We'll focus on the naming of the Parse function. Despite this function's very short and general name, it's actually quite clear what it attempts to achieve.

When we go one layer deeper, our function naming will become slightly more specific:

func Parse(filepath string) (Config, error) {
    switch fileExtension(filepath) {
    case "json":
        return parseJSON(filepath)
    case "yaml":
        return parseYAML(filepath)
    case "toml":
        return parseTOML(filepath)
    default:
        return Config{}, ErrUnknownFileExtension
    }
}

Here, we've clearly distinguished the nested function calls from their parent without being overly specific. This allows each nested function call to make sense on its own as well as within the context of the parent. On the other hand, if we had named the parseJSON function json instead, it couldn't possibly stand on its own. The functionality would become lost in the name, and we would no longer be able to tell whether this function is parsing, creating, or marshalling JSON.

Notice that fileExtension is actually a little more specific. However, this is because its functionality is in fact quite specific in nature:

func fileExtension(filepath string) string {
    segments := strings.Split(filepath, ".")
    return segments[len(segments)-1]
}

This kind of logical progression in our function names—from a high level of abstraction to a lower, more specific one—makes the code easier to follow and read. Consider the alternative: If our highest level of abstraction is too specific, then we'll end up with a name that attempts to cover all bases, like DetermineFileExtensionAndParseConfigurationFile. This is horrendously difficult to read; we are trying to be too specific too soon and end up confusing the reader, despite trying to be clear!

Variable Naming

Rather interestingly, the opposite is true for variables. Unlike functions, our variables should be named from more to less specific the deeper we go into nested scopes.

You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets 'dog' or 'cat'. – Dave Cheney

Why should our variable names become less specific as we travel deeper into a function's scope? Simply put, as a variable's scope becomes smaller, it becomes increasingly clear for the reader what that variable represents, thereby eliminating the need for specific naming. In the example of the previous function fileExtension, we could even shorten the name of the variable segments to s if we wanted to. The context of the variable is so clear that it's unnecessary to explain it any further with longer variable names. Another good example of this is in nested for loops:

func PrintBrandsInList(brands []BeerBrand) {
    for _, b := range brands { 
        fmt.Println(b)
    }
}

In the above example, the scope of the variable b is so small that we don't need to spend any additional brain power on remembering what exactly it represents. However, because the scope of brands is slightly larger, it helps for it to be more specific. When expanding the variable scope in the function below, this distinction becomes even more apparent:

func BeerBrandListToBeerList(beerBrands []BeerBrand) []Beer {
    var beerList []Beer
    for _, brand := range beerBrands {
        for _, beer := range brand {
            beerList = append(beerList, beer)
        }
    }
    return beerList
}

Great! This function is easy to read. Now, let's apply the opposite (i.e., wrong) logic when naming our variables:

func BeerBrandListToBeerList(b []BeerBrand) []Beer {
    var bl []Beer
    for _, beerBrand := range b {
        for _, beerBrandBeerName := range beerBrand {
            bl = append(bl, beerBrandBeerName)
        }
    }
    return bl
}

Even though it's possible to figure out what this function is doing, the excessive brevity of the variable names makes it difficult to follow the logic as we travel deeper. This could very well spiral into full-blown confusion because we're mixing short and long variable names inconsistently.

Cleaning Functions

Now that we know some best practices for naming our variables and functions, as well as clarifying our code with comments, let's dive into some specifics of how we can refactor functions to make them cleaner.

Function Length

How small should a function be? Smaller than that! – Robert C. Martin

When writing clean code, our primary goal is to make our code easily digestible. The most effective way to do this is to make our functions as short as possible. It's important to understand that we don't necessarily do this to avoid code duplication. The more important reason is to improve code comprehension.

It can help to look at a function's description at a very high level to understand this better:

fn GetItem:
    - parse json input for order id
    - get user from context
    - check user has appropriate role
    - get order from database

By writing short functions (which are typically 5–8 lines in Go), we can create code that reads almost as naturally as our description above:

var (
    NullItem = Item{}
    ErrInsufficientPrivileges = errors.New("user does not have sufficient privileges")
)

func GetItem(ctx context.Context, json []byte) (Item, error) {
    order, err := NewItemFromJSON(json)
    if err != nil {
        return NullItem, err
    }
    if !GetUserFromContext(ctx).IsAdmin() {
        return NullItem, ErrInsufficientPrivileges
    }
    return db.GetItem(order.ItemID)
}

Using smaller functions also eliminates another horrible habit of writing code: indentation hell. Indentation hell typically occurs when a chain of if statements are carelessly nested in a function. This makes it very difficult for human beings to parse the code and should be eliminated whenever spotted. Indentation hell is particularly common when working with interface{} and using type casting:

func GetItem(extension string) (Item, error) {
    if refIface, ok := db.ReferenceCache.Get(extension); ok {
        if ref, ok := refIface.(string); ok {
            if itemIface, ok := db.ItemCache.Get(ref); ok {
                if item, ok := itemIface.(Item); ok {
                    if item.Active {
                        return item, nil
                    } else {
                      return EmptyItem, errors.New("no active item found in cache")
                    }
                } else {
                  return EmptyItem, errors.New("could not cast cache interface to Item")
                }
            } else {
              return EmptyItem, errors.New("extension was not found in cache reference")
            }
        } else {
          return EmptyItem, errors.New("could not cast cache reference interface to Item")
        }
    }
    return EmptyItem, errors.New("reference not found in cache")
}

First, indentation hell makes it difficult for other developers to understand the flow of your code. Second, if the logic in our if statements expands, it'll become exponentially more difficult to figure out which statement returns what (and to ensure that all paths return some value). Yet another problem is that this deep nesting of conditional statements forces the reader to frequently scroll and keep track of many logical states in their head. It also makes it more difficult to test the code and catch bugs because there are so many different nested possibilities that you have to account for.

Indentation hell can result in reader fatigue if a developer has to constantly parse unwieldy code like the sample above. Naturally, this is something we want to avoid at all costs.

So, how do we clean this function? Fortunately, it's actually quite simple. On our first iteration, we will try to ensure that we are returning an error as soon as possible. Instead of nesting the if and else statements, we want to "push our code to the left," so to speak. Take a look:

func GetItem(extension string) (Item, error) {
    refIface, ok := db.ReferenceCache.Get(extension)
    if !ok {
        return EmptyItem, errors.New("reference not found in cache")
    }

    ref, ok := refIface.(string)
    if !ok {
        // return cast error on reference 
    }

    itemIface, ok := db.ItemCache.Get(ref)
    if !ok {
        // return no item found in cache by reference
    }

    item, ok := itemIface.(Item)
    if !ok {
        // return cast error on item interface
    }

    if !item.Active {
        // return no item active
    }

    return item, nil
}

Once we're done with our first attempt at refactoring the function, we can proceed to split up the function into smaller functions. Here's a good rule of thumb: If the value, err := pattern is repeated more than once in a function, this is an indication that we can split the logic of our code into smaller pieces:

func GetItem(extension string) (Item, error) {
    ref, ok := getReference(extension)
    if !ok {
        return EmptyItem, ErrReferenceNotFound
    }
    return getItemByReference(ref)
}

func getReference(extension string) (string, bool) {
    refIface, ok := db.ReferenceCache.Get(extension)
    if !ok {
        return "", false
    }
    ref, ok := refIface.(string)
    return ref, ok
}

func getItemByReference(reference string) (Item, error) {
    item, ok := getItemFromCache(reference)
    if !ok || !item.Active {
        return EmptyItem, ErrItemNotFound
    }
    return item, nil
}

func getItemFromCache(reference string) (Item, bool) {
    itemIface, ok := db.ItemCache.Get(reference)
    if !ok {
        return EmptyItem, false
    }
    item, ok := itemIface.(Item)
    return item, ok
}

As mentioned previously, indentation hell can make it difficult to test our code. When we split up our GetItem function into several helpers, we make it easier to track down bugs when testing our code. Unlike the original version, which consisted of several if statements in the same scope, the refactored version of GetItem has just two branching paths that we must consider. The helper functions are also short and digestible, making them easier to read.

Note: For production code, one should elaborate on the code even further by returning errors instead of bool values. This makes it much easier to understand where the error is originating from. However, as these are just example functions, returning bool values will suffice for now. Examples of returning errors more explicitly will be explained in more detail later.
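
As a sketch of that direction, getReference could return an error instead of a bool, reusing the ErrReferenceNotFound value from the refactor above:

func getReference(extension string) (string, error) {
    refIface, ok := db.ReferenceCache.Get(extension)
    if !ok {
        return "", ErrReferenceNotFound
    }
    ref, ok := refIface.(string)
    if !ok {
        return "", errors.New("could not cast cache reference to string")
    }
    return ref, nil
}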

Notice that cleaning the GetItem function resulted in more lines of code overall. However, the code itself is now much easier to read. It's layered in an onion-style fashion, where we can ignore "layers" that we aren't interested in and simply peel back the ones that we do want to examine. This makes it easier to understand low-level functionality because we only have to read maybe 3–5 lines at a time.

This example illustrates that we cannot measure the cleanliness of our code by the number of lines it uses. The first version of the code was certainly much shorter. However, it was artificially short and very difficult to read. In most cases, cleaning code will initially expand the existing codebase in terms of the number of lines. But this is highly preferable to the alternative of having messy, convoluted logic. If you're ever in doubt about this, just consider how you feel about the following function, which does exactly the same thing as our code but only uses two lines:

func GetItemIfActive(extension string) (Item, error) {
    if refIface,ok := db.ReferenceCache.Get(extension); ok {if ref,ok := refIface.(string); ok { if itemIface,ok := db.ItemCache.Get(ref); ok { if item,ok := itemIface.(Item); ok { if item.Active { return item,nil }}}}} return EmptyItem, errors.New("reference not found in cache")
}

Function Signatures

Creating a good function naming structure makes it easier to read and understand the intent of the code. As we saw above, making our functions shorter helps us understand the function's logic. The last part of cleaning our functions involves understanding the context of the function input. With this comes another easy-to-follow rule: Function signatures should only contain one or two input parameters. In certain exceptional cases, three can be acceptable, but this is where we should start considering a refactor. Much like the rule that our functions should only be 5–8 lines long, this can seem quite extreme at first. However, I feel that this rule is much easier to justify.

Take the following function from RabbitMQ's introduction tutorial to its Go library:

q, err := ch.QueueDeclare(
  "hello", // name
  false,   // durable
  false,   // delete when unused
  false,   // exclusive
  false,   // no-wait
  nil,     // arguments
)

The function QueueDeclare takes six input parameters, which is quite a lot. With some effort, it's possible to understand what this code does thanks to the comments. However, the comments are actually part of the problem—as mentioned earlier, they should be substituted with descriptive code whenever possible. After all, there's nothing preventing us from invoking the QueueDeclare function without comments:

q, err := ch.QueueDeclare("hello", false, false, false, false, nil)

Now, without looking at the commented version, try to remember what the fourth and fifth false arguments represent. It's impossible, right? You will inevitably forget at some point. This can lead to costly mistakes and bugs that are difficult to correct. The mistakes might even occur through incorrect comments—imagine labeling the wrong input parameter. Such a mistake will be unbearably difficult to correct, especially when familiarity with the code has deteriorated over time or was low to begin with. Therefore, it is recommended to replace these input parameters with an 'Options' struct instead:

type QueueOptions struct {
    Name string
    Durable bool
    DeleteOnExit bool
    Exclusive bool
    NoWait bool
    Arguments []interface{} 
}

q, err := ch.QueueDeclare(QueueOptions{
    Name: "hello",
    Durable: false,
    DeleteOnExit: false,
    Exclusive: false,
    NoWait: false,
    Arguments: nil,
})

This solves two problems: misusing comments, and accidentally labeling the variables incorrectly. Of course, we can still confuse properties with the wrong value, but in these cases, it will be much easier to determine where our mistake lies within the code. The ordering of the properties also doesn't matter anymore, so incorrectly ordering the input values is no longer a concern. The last added bonus of this technique is that we can use our QueueOptions struct to infer the default values of our function's input parameters. When structures in Go are declared, all properties are initialised to their zero values. This means that our QueueDeclare option can actually be invoked in the following way:

q, err := ch.QueueDeclare(QueueOptions{
    Name: "hello",
})

The rest of the values are initialised to their zero value of false (except for Arguments, which as an interface has a zero value of nil). Not only are we much safer with this approach, but we are also much clearer with our intentions. In this case, we could actually write less code. This is an all-around win for everyone on the project.

One final note on this: It's not always possible to change a function's signature. In this case, for example, we don't actually have control over our QueueDeclare function signature because it's from the RabbitMQ library. It's not our code, so we can't change it. However, we can wrap these functions to suit our purposes:

type RMQChannel struct {
    channel *amqp.Channel
}

func (rmqch *RMQChannel) QueueDeclare(opts QueueOptions) (Queue, error) {
    return rmqch.channel.QueueDeclare(
        opts.Name,
        opts.Durable,
        opts.DeleteOnExit,
        opts.Exclusive,
        opts.NoWait,
        opts.Arguments, 
    )
} 

Basically, we create a new structure named RMQChannel that contains the amqp.Channel type, which has the QueueDeclare method. We then create our own version of this method, which essentially just calls the old version of the RabbitMQ library function. Our new method has all the advantages described before, and we achieved this without actually having to change any of the code in the RabbitMQ library.

We'll use this idea of wrapping functions to introduce more clean and safe code later when discussing interface{}.

Variable Scope

Now, let's take a step back and revisit the idea of writing smaller functions. This has another nice side effect that we didn't cover in the previous chapter: Writing smaller functions can typically eliminate reliance on mutable variables that leak into the global scope.

Global variables are problematic and don't belong in clean code; they make it very difficult for programmers to understand the current state of a variable. If a variable is global and mutable, then by definition, its value can be changed by any part of the codebase. At no point can you guarantee that this variable is going to be a specific value... And that's a headache for everyone. This is yet another example of a trivial problem that's exacerbated when the codebase expands.
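To make this concrete, here's a minimal, hypothetical sketch (the names are invented purely for illustration):

var connected bool // package-level and mutable: any code in the package can flip it

func handleRequest() {
    // By the time we check this flag, some other function (or goroutine)
    // may already have changed it. We can't reason about it locally.
    if connected {
        // ...
    }
}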

Let's look at a short example of how non-global variables with a large scope can cause problems. These variables also introduce the issue of variable shadowing, as demonstrated in the code taken from an article titled Golang scope issue:

func doComplex() (string, error) {
    return "Success", nil
}

func main() {
    var val string
    num := 32

    switch num {
    case 16:
        // do nothing
    case 32:
        val, err := doComplex()
        if err != nil {
            panic(err)
        }
        if val == "" {
            // do something else
        }
    case 64:
        // do nothing
    }

    fmt.Println(val)
}

What's the problem with this code? From a quick skim, it seems the var val string value should be printed out as Success by the end of the main function. Unfortunately, this is not the case. The reason for this lies in the following line:

val, err := doComplex()

This declares a new variable val in the switch's case 32 scope and has nothing to do with the variable declared in the first line of main. Of course, it can be argued that Go syntax is a little tricky, which I don't necessarily disagree with, but there is a much worse issue at hand. The declaration of var val string as a mutable, largely scoped variable is completely unnecessary. If we do a very simple refactor, we will no longer have this issue:

func getStringResult(num int) (string, error) {
    switch num {
    case 16:
        // do nothing
    case 32:
        return doComplex()
    case 64:
        // do nothing
    }
    return "", nil
}

func main() {
    val, err := getStringResult(32)
    if err != nil {
        panic(err)
    }
    if val == "" {
        // do something else
    }
    fmt.Println(val)
}

After our refactor, val is no longer modified, and the scope has been reduced. Again, keep in mind that these functions are very simple. Once this kind of code style becomes a part of larger, more complex systems, it can be impossible to figure out why errors are occurring. We don't want this to happen—not only because we generally dislike software errors but also because it's disrespectful to our colleagues, and ourselves; we are potentially wasting each other's time having to debug this type of code. Developers need to take responsibility for their own code rather than blaming these issues on the variable declaration syntax of a particular language like Go.

On a side note, if the // do something else part is another attempt to mutate the val variable, we should extract that logic out as its own self-contained function, as well as the previous part of it. This way, instead of expanding the mutable scope of our variables, we can just return a new value:

func getVal(num int) (string, error) {
    val, err := getStringResult(num)
    if err != nil {
        return "", err
    }
    if val == "" {
        return NewValue() // pretend function
    }
    return val, nil
}

func main() {
    val, err := getVal(32)
    if err != nil {
        panic(err)
    }
    fmt.Println(val)
}

Variable Declaration

Other than avoiding issues with variable scope and mutability, we can also improve readability by declaring variables as close to their usage as possible. In C programming, it's common to see the following approach to declaring variables:

func main() {
  var err error
  var items []Item
  var sender, receiver chan Item
  
  items = store.GetItems()
  sender = make(chan Item)
  receiver = make(chan Item)
  
  for _, item := range items {
    ...
  }
}

This suffers from the same symptom as described in our discussion of variable scope. Even though these variables might not actually be reassigned at any point, this kind of coding style keeps the readers on their toes, in all the wrong ways. Much like computer memory, our brain's short-term memory has a limited capacity. Having to keep track of which variables are mutable and whether or not a particular fragment of code will mutate them makes it more difficult to understand what the code is doing. Figuring out the eventual return value can be a nightmare. Therefore, to make this easier for our readers (and our future selves), it's recommended that you declare variables as close to their usage as possible:

func main() {
    var sender chan Item
    sender = make(chan Item)

    go func() {
        for {
            select {
            case item := <-sender:
                // do something
            }
        }
    }()
}

However, we can do even better by invoking the function directly after its declaration. This makes it much clearer that the function logic is associated with the declared variable:

func main() {
  sender := func() chan Item {
    channel := make(chan Item)
    go func() {
      for {
        select { ... }
      }
    }()
    return channel
  }()
}

And coming full circle, we can move the anonymous function to make it a named function instead:

func main() {
  sender := NewSenderChannel()
}

func NewSenderChannel() chan Item {
  channel := make(chan Item)
  go func() {
    for {
      select { ... }
    }
  }()
  return channel
}

It is still clear that we are declaring a variable, and the logic associated with the returned channel is simple, unlike in the first example. This makes it easier to traverse the code and understand the role of each variable.

Of course, this doesn't actually prevent us from mutating our sender variable. There is nothing that we can do about this, as there is no way of declaring a const struct or static variables in Go. This means that we'll have to restrain ourselves from modifying this variable at a later point in the code.

NOTE: The keyword const does exist but is limited in use to primitive types only.
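As a tiny illustration of that limitation (the Sender type here is the one from our example):

const maxRetries = 3 // fine: constants work for numbers, strings, and booleans

// const defaultSender = Sender{} // won't compile: invalid constant type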

There is a way of getting around this that can at least limit the mutability of a variable to the package level. The trick involves creating a structure with the variable as a private property. This private property is thenceforth only accessible through other methods provided by this wrapping structure. Expanding on our channel example, this would look something like the following:

type Sender struct {
  sender chan Item
}

func NewSender() *Sender {
  return &Sender{
    sender: NewSenderChannel(),
  }
}

func (s *Sender) Send(item Item) {
  s.sender <- item
}

We have now ensured that the sender property of our Sender struct is never mutated—at least not from outside of the package. As of writing this document, this is the only way of creating publicly immutable non-primitive variables. It's a little verbose, but it's truly worth the effort to ensure that we don't end up with strange bugs resulting from accidental variable modification.

func main() {
  sender := NewSender()
  sender.Send(Item{})
}

Looking at the example above, it's clear how this also simplifies the usage of our package. This way of hiding the implementation is beneficial not only for the maintainers of the package but also for the users. Now, when initialising and using the Sender structure, there is no concern over its implementation. This makes for a much looser architecture. Because our users aren't concerned with the implementation, we are free to change it at any point, since we have reduced the point of contact that users have with the package. If we no longer wish to use a channel implementation in our package, we can easily change this without breaking the usage of the Send method (as long as we adhere to its current function signature).

NOTE: There is a fantastic explanation of how to handle the abstraction in client libraries, taken from the talk AWS re:Invent 2017: Embracing Change without Breaking the World (DEV319).

Clean Go

This section focuses less on the generic aspects of writing clean Go code and more on the specifics, with an emphasis on the underlying clean code principles.

Return Values

Returning Defined Errors

We'll start things off nice and easy by describing a cleaner way to return errors. As we discussed earlier, our main goal with writing clean code is to ensure readability, testability, and maintainability of the codebase. The technique for returning errors that we'll discuss here will achieve all three of those goals with very little effort.

Let's consider the normal way to return a custom error. This is a hypothetical example taken from a thread-safe map implementation that we've named Store:

package smelly

func (store *Store) GetItem(id string) (Item, error) {
    store.mtx.Lock()
    defer store.mtx.Unlock()

    item, ok := store.items[id]
    if !ok {
        return Item{}, errors.New("item could not be found in the store") 
    }
    return item, nil
}

There is nothing inherently smelly about this function when we consider it in isolation. We look into the items map of our Store struct to see if we already have an item with the given id. If we do, we return it; otherwise, we return an error. Pretty standard. So, what is the issue with creating custom errors from ad hoc string values like this? Well, let's look at what happens when we use this function inside another package:

func GetItemHandler(w http.ResponseWriter, r *http.Request) {
    item, err := smelly.GetItem("123")
    if err != nil {
        if err.Error() == "item could not be found in the store" {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    json.NewEncoder(w).Encode(item)
}

This is actually not too bad. However, there is one glaring problem: An error in Go is simply an interface that implements a function (Error()) returning a string; thus, we are now hardcoding the expected error string into our codebase, which isn't ideal. This hardcoded string is known as a magic string. And its main problem is flexibility: If at some point we decide to change the string value used to represent an error, our code will break (softly) unless we update it in possibly many different places. Our code is tightly coupled—it relies on that specific magic string and the assumption that it will never change as the codebase grows.

An even worse situation would arise if a client were to use our package in their own code. Imagine that we decided to update our package and changed the string that represents an error—the client's software would now suddenly break. This is quite obviously something that we want to avoid. Fortunately, the fix is very simple:

package clean

var (
    NullItem = Item{}

    ErrItemNotFound = errors.New("item could not be found in the store") 
)

func (store *Store) GetItem(id string) (Item, error) {
    store.mtx.Lock()
    defer store.mtx.Unlock()

    item, ok := store.items[id]
    if !ok {
        return NullItem, ErrItemNotFound
    }
    return item, nil
}

By simply representing the error as a variable (ErrItemNotFound), we've ensured that anyone using this package can check against the variable rather than the actual string that it returns:

func GetItemHandler(w http.ResponseWriter, r *http.Request) {
    item, err := clean.GetItem("123")
    if err != nil {
        if errors.Is(err, clean.ErrItemNotFound) {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    json.NewEncoder(w).Encode(item)
}

This feels much nicer and is also much safer. Some would even say that it's easier to read as well. In the case of a more verbose error message, it certainly would be preferable for a developer to simply read ErrItemNotFound rather than a novel on why a certain error has been returned.

This approach is not limited to errors and can be used for other returned values. As an example, we are also returning a NullItem instead of Item{} as we did before. There are many different scenarios in which it might be preferable to return a defined object, rather than initialising it on return.

Returning default NullItem values like we did in the previous examples can also be safer in certain cases. As an example, a user of our package could forget to check for errors and end up initialising a variable that points to an empty struct containing a default value of nil as one or more property values. When attempting to access this nil value later in the code, the client software would panic. However, when we return our custom default value instead, we can ensure that all values that would otherwise default to nil are initialised. Thus, we'd ensure that we do not cause panics in our users' software.

This also benefits us. Consider this: If we wanted to achieve the same safety without returning a default value, we would have to change our code everywhere we return this type of empty value. However, with our default value approach, we now only have to change our code in a single place:

var NullItem = Item{
    itemMap: map[string]Item{},
}

NOTE: In many scenarios, invoking a panic will actually be preferable to indicate that there is an error check missing.

NOTE: Every interface property in Go has a default value of nil. This means that this is useful for any struct that has an interface property. This is also true for structs that contain channels, maps, and slices, which could potentially also have a nil value.
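As a quick illustration of why this matters for maps in particular:

var tags map[string]string // nil: we never initialised it

fmt.Println(tags["missing"]) // reading from a nil map is safe; prints the zero value
tags["key"] = "value"        // writing to a nil map panics at runtime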

Returning Dynamic Errors

There are certainly some scenarios where returning an error variable might not actually be viable. In cases where the information in customised errors is dynamic, if we want to describe error events more specifically, we can no longer define and return our static errors. Here's an example:

func (store *Store) GetItem(id string) (Item, error) {
    store.mtx.Lock()
    defer store.mtx.Unlock()

    item, ok := store.items[id]
    if !ok {
        return NullItem, fmt.Errorf("could not find item with ID: %s", id)
    }
    return item, nil
}

So, what to do? There is no well-defined or standard method for handling and returning these kinds of dynamic errors. My personal preference is to return a new interface, with a bit of added functionality:

type ErrorDetails interface {
    Error() string
    Type() error
}

type errDetails struct {
    errtype error
    details interface{}
}

func NewErrorDetails(err error, details ...interface{}) ErrorDetails {
    return &errDetails{
        errtype: err,
        details: details,
    }
}

func (err *errDetails) Error() string {
    return fmt.Sprintf("%v: %v", err.errtype, err.details)
}

func (err *errDetails) Type() error {
    return err.errtype
}

This new data structure still works as our standard error. We can still compare it to nil since it's an interface implementation, and we can still call .Error() on it, so it won't break any existing implementations. However, the advantage is that we can now check our error type as we could previously, despite our error now containing the dynamic details:

func (store *Store) GetItem(id string) (Item, error) {
    store.mtx.Lock()
    defer store.mtx.Unlock()

    item, ok := store.items[id]
    if !ok {
        return NullItem, NewErrorDetails(
            ErrItemNotFound,
            fmt.Sprintf("could not find item with id: %s", id))
    }
    return item, nil
}

And our HTTP handler function can then be refactored to check for a specific error again. Note that since GetItem still returns a plain error, we first assert to our ErrorDetails interface before checking the underlying type:

func GetItemHandler(w http.ResponseWriter, r *http.Request) {
    item, err := clean.GetItem("123")
    if err != nil {
        details, ok := err.(clean.ErrorDetails)
        if ok && errors.Is(details.Type(), clean.ErrItemNotFound) {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    json.NewEncoder(w).Encode(item)
}

Nil Values

A controversial aspect of Go is the addition of nil. This value corresponds to the value NULL in C and is essentially an uninitialised pointer. We've already seen some of the problems that nil can cause, but to sum up: Things break when you try to access methods or properties of a nil value. Thus, it's recommended to avoid returning a nil value when possible. This way, the users of our code are less likely to accidentally access nil values.

There are other scenarios in which it is common to find nil values that can cause some unnecessary pain. An example of this is incorrectly initialising a struct (as in the example below), which can lead to it containing nil properties. If accessed, those nils will cause a panic.

type App struct {
    Cache *KVCache
}

type KVCache struct {
    mtx   sync.RWMutex
    store map[string]string
}

func (cache *KVCache) Add(key, value string) {
    cache.mtx.Lock()
    defer cache.mtx.Unlock()

    cache.store[key] = value
}

This code is absolutely fine. However, the danger is that our App can be initialised incorrectly, without initialising the Cache property within. Should the following code be invoked, our application will panic:

app := App{}
app.Cache.Add("panic", "now")

The Cache property has never been initialised and is therefore a nil pointer. Thus, invoking the Add method like we did here will cause a panic, with the following message:

panic: runtime error: invalid memory address or nil pointer dereference

Instead, we can turn the Cache property of our App structure into a private property and create a getter-like method to access it. This gives us more control over what we are returning; specifically, it ensures that we aren't returning a nil value:

type App struct {
    cache *KVCache
}

func (app *App) Cache() *KVCache {
    if app.cache == nil {
        app.cache = NewKVCache()
    }
    return app.cache
}

The code that previously panicked will now be refactored to the following:

app := App{}
app.Cache().Add("panic", "now")

This ensures that users of our package don't have to worry about the implementation and whether they're using our package in an unsafe manner. All they need to worry about is writing their own clean code.

NOTE: There are other methods of achieving a similarly safe outcome. However, I believe this is the most straightforward approach.

Pointers in Go

Pointers in Go are a rather extensive topic. They're a very big part of working with the language—so much so that it is essentially impossible to write Go without some knowledge of pointers and their workings in the language. Therefore, it is important to understand how to use pointers without adding unnecessary complexity (and thereby keeping your codebase clean). Note that we will not review the details of how pointers are implemented in Go. Instead, we will focus on the quirks of Go pointers and how we can handle them.

Pointers add complexity to code. If we aren't cautious, incorrectly using pointers can introduce nasty side effects or bugs that are particularly difficult to debug. By sticking to the basic principles of writing clean code that we covered in the first part of this document, we can at least reduce the chances of introducing unnecessary complexity to our code.

Pointer Mutability

We've already looked at the problem of mutability in the context of globally or largely scoped variables. However, mutability is not necessarily always a bad thing, and I am by no means an advocate for writing 100% pure functional programs. Mutability is a powerful tool, but we should really only ever use it when it's necessary. Let's have a look at a code example illustrating why:

func (store *UserStore) Insert(user *User) error {
    if store.userExists(user.ID) {
        return ErrItemAlreadyExists
    }
    store.users[user.ID] = user
    return nil
}

func (store *UserStore) userExists(id int64) bool {
    _, ok := store.users[id]
    return ok
}

At first glance, this doesn't seem too bad. In fact, it might even seem like a rather simple insert function for a common list structure. We accept a pointer as input, and if no other users with this id exist, then we insert the provided user pointer into our list. Then, we use this functionality in our public API for creating new users:

func CreateUser(w http.ResponseWriter, r *http.Request) {
    user, err := parseUserFromRequest(r)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if err := insertUser(w, user); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
}

func insertUser(w http.ResponseWriter, user *User) error {
    if err := store.Insert(user); err != nil {
        return err
    }
    user.Password = ""
    return json.NewEncoder(w).Encode(user)
}

Once again, at first glance, everything looks fine. We parse the user from the received request and insert the user struct into our store. Once we have successfully inserted our user into the store, we then set the password to be an empty string before returning the user as a JSON object to our client. This is all quite common practice, typically when returning a user object whose password has been hashed, since we don't want to return the hashed password.

However, imagine that we are using an in-memory store based on a map. This code will produce some unexpected results. If we check our user store, we'll see that the change we made to the users password in the HTTP handler function also affected the object in our store. This is because the pointer address returned by parseUserFromRequest is what we populated our store with, rather than an actual value. Therefore, when making changes to the dereferenced password value, we end up changing the value of the object we are pointing to in our store.
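To make the aliasing concrete, here's a minimal sketch reusing the types from above:

user := &User{ID: 1, Password: "hashed-secret"}
store.Insert(user) // the store now holds this exact pointer...

user.Password = "" // ...so this also wipes the password inside the store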

This is a great example of why both mutability and variable scope can cause some serious issues and bugs when used incorrectly. When passing pointers as an input parameter of a function, we are expanding the scope of the variable whose data is being pointed to. Even more worrying is the fact that we are expanding the scope to an undefined level. We are almost expanding the scope of the variable to the global level. As demonstrated by the above example, this can lead to disastrous bugs that are particularly difficult to find and eradicate.

Fortunately, the fix for this is rather simple:

func (store *UserStore) Insert(user User) error {
    if store.userExists(user.ID) {
        return ErrItemAlreadyExists
    }
    store.users[user.ID] = &user
    return nil
}

Instead of passing a pointer to a User struct, we are now passing in a copy of a User. We are still storing a pointer in our store; however, instead of storing the pointer from outside of the function, we are storing the pointer to the copied value, whose scope is inside the function. This fixes the immediate problem but might still cause issues further down the line if we aren't careful. Consider this code:

func (store *UserStore) Get(id int64) (*User, error) {
    user, ok := store.users[id]
    if !ok {
        return EmptyUser, ErrUserNotFound
    }
    return user, nil
}

Again, this is a very standard implementation of a getter function for our store. However, it's still bad code because we are once again expanding the scope of our pointer, which may end up causing unexpected side effects. When returning the actual pointer value, which we are storing in our user store, we are essentially giving other parts of our application the ability to change our store values. This is bound to cause confusion. Our store should be the only entity allowed to make changes to its values. The easiest fix for this is to return a value of User rather than returning a pointer.
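Here's a sketch of that fix, keeping the hypothetical EmptyUser and ErrUserNotFound values from before (note that EmptyUser would now be a User value rather than a pointer):

func (store *UserStore) Get(id int64) (User, error) {
    user, ok := store.users[id]
    if !ok {
        return EmptyUser, ErrUserNotFound
    }
    return *user, nil // return a copy so that callers can't mutate the stored value
}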

NOTE: Consider the case where our application uses multiple threads. In this scenario, passing pointers to the same memory location can also potentially result in a race condition. In other words, we aren't only potentially corrupting our data—we could also cause a panic from a data race.

Please keep in mind that there is intrinsically nothing wrong with returning pointers. However, the expanded scope of variables (and the number of owners pointing to those variables) is the most important consideration when working with pointers. This is what categorises our previous example as a smelly operation. It's also why a common Go pattern like the following mutator function is absolutely fine:

func AddName(user *User, name string) {
    user.Name = name
}

This is okay because the variable scope, which is defined by whoever invokes the function, remains the same after the function returns. Combined with the fact that the function invoker remains the sole owner of the variable, this means that the pointer cannot be manipulated in an unexpected manner.

Closures Are Function Pointers

Before we get into the next topic of using interfaces in Go, I would like to introduce a common alternative. It's what C programmers know as "function pointers" and what most other programming languages call closures. In this context, a closure is simply a function value that can be passed as an input parameter like any other and invoked later. In JavaScript, it's quite common to use closures as callbacks, which are just functions that are invoked after some asynchronous operation has finished. In Go, we don't really have this notion. We can, however, use closures to partially overcome a different hurdle: The lack of generics.

Consider the following function signature:

func something(closure func(float64) float64) float64 { ... }

Here, something takes another function (a closure) as input and returns a float64. The input function takes a float64 as input and also returns a float64. This pattern can be particularly useful for creating a loosely coupled architecture, making it easier to add functionality without affecting other parts of the code. Suppose we have a struct containing data that we want to manipulate in some form. Through this structure's Do() method, we can perform operations on that data. If we know the operation ahead of time, we can obviously handle that logic directly in our Do() method:

func (datastore *Datastore) Do(operation Operation, data []byte) error {
  switch operation {
  case COMPARE:
    return datastore.compare(data)
  case CONCAT:
    return datastore.add(data)
  default:
    return ErrUnknownOperation
  }
}

But as you can imagine, this function is quite rigid—it performs a predetermined operation on the data contained in the Datastore struct. If at some point we would like to introduce more operations, we'd end up bloating our Do method with quite a lot of irrelevant logic that would be hard to maintain. The function would always have to care about what operation it's performing and cycle through a number of nested options for each operation. It might also be an issue for developers wanting to use our Datastore object who don't have access to edit our package code, since there is no way of extending structure methods in Go as there is in most OOP languages.

So instead, let's try a different approach using closures:

func (datastore *Datastore) Do(operation func(a []byte, b []byte) ([]byte, error), data []byte) error {
  result, err := operation(datastore.data, data)
  if err != nil {
    return err
  }
  datastore.data = result
  return nil
}

func concat(a []byte, b []byte) ([]byte, error) {
  ...
}

func main() {
  ...
  datastore.Do(concat, data)
  ...
}

You'll notice immediately that the function signature for Do ends up being quite messy. We also have another issue: The closure isn't particularly generic. What happens if we find out that we actually want the concat to be able to take more than just two byte arrays as input? Or if we want to add some completely new functionality that may also need more or fewer input values than (a []byte, b []byte)?

One way to solve this issue is to change our concat function. In the example below, I have changed it to only take a single byte array as an input argument, but it could just as well have been the opposite case:

func concat(data []byte) func(data []byte) ([]byte, error) {
  return func(concatting []byte) ([]byte, error) {
    return append(data, concatting...), nil
  }
}

func (datastore *Datastore) Do(operation func(data []byte) ([]byte, error)) error {
  result, err := operation(datastore.data)
  if err != nil {
    return err
  }
  datastore.data = result
  return nil
}

func main() {
  ...
  datastore.Do(concat(data))
  ...
}

Notice how we've effectively moved some of the clutter out of the Do method signature and into the concat method signature. Here, the concat function returns yet another function. Within the returned function, we store the input values originally passed in to our concat function. The returned function can therefore now take a single input parameter; within our function logic, we will append it to our original input value. As a newly introduced concept, this may seem quite strange. However, it's good to get used to having this as an option; it can help loosen up logic coupling and get rid of bloated functions.

In the next section, we'll get into interfaces. Before we do so, let's take a short moment to discuss the difference between interfaces and closures. First, it's worth noting that interfaces and closures definitely solve some common problems. However, the way that interfaces are implemented in Go can sometimes make it tricky to decide whether to use interfaces or closures for a particular problem. Usually, whether an interface or a closure is used isn't really of importance; the right choice is whichever one solves the problem at hand. Typically, closures will be simpler to implement if the operation is simple by nature. However, as soon as the logic contained within a closure becomes complex, one should strongly consider using an interface instead.

Dave Cheney has an excellent write-up on this topic, as well as a talk covering it. Jon Bodner also has a related talk.

Interfaces in Go

In general, Go's approach to handling interfaces is quite different from those of other languages. Interfaces aren't explicitly implemented like they would be in Java or C#; rather, a type implements an interface implicitly whenever it fulfills the interface's contract. As an example, this means that any struct that has an Error() method implements (or "fulfills") the error interface and can be returned as an error. This manner of implementing interfaces is extremely easy and makes Go feel more fast paced and dynamic.
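As a quick, hypothetical illustration:

type RequestError struct {
    StatusCode int
}

// No 'implements' keyword needed: having this method is all it takes
// for *RequestError to satisfy the built-in error interface.
func (err *RequestError) Error() string {
    return fmt.Sprintf("request failed with status code %d", err.StatusCode)
}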

However, there are certainly disadvantages with this approach. As the interface implementation is no longer explicit, it can be difficult to see which interfaces are implemented by a struct. Therefore, it's common to define interfaces with as few methods as possible; this makes it easier to understand whether a particular struct fulfills the contract of the interface.

An alternative is to create constructors that return an interface rather than the concrete type:

type Writer interface {
    Write(p []byte) (n int, err error)
}

type NullWriter struct {}

func (writer *NullWriter) Write(data []byte) (n int, err error) {
    // do nothing
    return len(data), nil
}

func NewNullWriter() io.Writer {
    return &NullWriter{}
}

The above function ensures that the NullWriter struct implements the Writer interface. If we were to delete the Write method from NullWriter, we would get a compilation error. This is a good way of ensuring that our code behaves as expected and that we can rely on the compiler as a safety net in case we try to write invalid code.

In certain cases, it might not be desirable to write a constructor, or perhaps we would like for our constructor to return the concrete type, rather than the interface. As an example, the NullWriter struct has no properties to populate on initialisation, so writing a constructor is a little redundant. Therefore, we can use the less verbose method of checking interface compatibility:

type Writer interface {
    Write(p []byte) (n int, err error)
}

type NullWriter struct {}
var _ io.Writer = &NullWriter{}

In the above code, we are initialising a variable with the Go blank identifier, with the type assignment of io.Writer. This results in our variable being checked to fulfill the io.Writer interface contract, before being discarded. This method of checking interface fulfillment also makes it possible to check that several interface contracts are fulfilled:

type NullReaderWriter struct{}
var _ io.Writer = &NullReaderWriter{}
var _ io.Reader = &NullReaderWriter{}

From the above code, it's very easy to understand which interfaces must be fulfilled; this ensures that the compiler will help us out during compile time. Therefore, this is generally the preferred solution for checking interface contract fulfillment.

There's yet another method of trying to be more explicit about which interfaces a given struct implements. However, this third method actually achieves the opposite of what we want. It involves using embedded interfaces as a struct property.

Wait what? – Presumably most people

Let's rewind a bit before we dive deep into the forbidden forest of smelly Go. In Go, we can use embedded structs as a type of inheritance in our struct definitions. This is really nice, as we can decouple our code by defining reusable structs.

type Metadata struct {
    CreatedBy types.User
}

type Document struct {
    *Metadata
    Title string
    Body string
}

type AudioFile struct {
    *Metadata
    Title string
    Body string
}

Above, we are defining a Metadata object that will provide us with property fields that we are likely to use on many different struct types. The neat thing about using the embedded struct, rather than explicitly defining the properties directly in our struct, is that it has decoupled the Metadata fields. Should we choose to update our Metadata object, we can change it in just a single place. As we've seen several times so far, we want to ensure that a change in one place in our code doesn't break other parts. Keeping these properties centralised makes it clear that structures with an embedded Metadata have the same properties—much like how structures that fulfill interfaces have the same methods.

Now, let's look at an example of how we can use a constructor to further prevent breaking our code when making changes to our Metadata struct:

func NewMetadata(user types.User) *Metadata {
    return &Metadata{
        CreatedBy: user,
    }
}

func NewDocument(user types.User, title string, body string) Document {
    return Document{
        Metadata: NewMetadata(user),
        Title: title,
        Body: body,
    }
}

Suppose that at a later point in time, we decide that we'd also like a CreatedAt field on our Metadata object. We can now easily achieve this by simply updating our NewMetadata constructor:

func NewMetadata(user types.User) *Metadata {
    return &Metadata{
        CreatedBy: user,
        CreatedAt: time.Now(),
    }
}

Now, both our Document and AudioFile structures are updated to also populate these fields on construction. This is the core principle behind decoupling and an excellent example of ensuring maintainability of code. We can also add new methods without breaking our existing code:

type Metadata struct {
    CreatedBy types.User
    CreatedAt time.Time
    UpdatedBy types.User
    UpdatedAt time.Time
}

func (metadata *Metadata) AddUpdateInfo(user types.User) {
    metadata.UpdatedBy = user
    metadata.UpdatedAt = time.Now()
}

Again, without breaking the rest of our codebase, we've managed to introduce new functionality. This kind of programming makes implementing new features very quick and painless, which is exactly what we are trying to achieve by writing clean code.

Let's return to the topic of interface contract fulfillment using embedded interfaces. Consider the following code as an example:

type NullWriter struct {
    Writer
}

func NewNullWriter() io.Writer {
    return &NullWriter{}
}

The above code compiles. Technically, we are implementing the interface of Writer in our NullWriter, as NullWriter will inherit all the functions that are associated with this interface. Some see this as a clear way of showing that our NullWriter is implementing the Writer interface. However, we must be careful when using this technique.

func main() {
    w := NewNullWriter()

    w.Write([]byte{1, 2, 3})
}

As mentioned before, the above code will compile. The NewNullWriter returns a Writer, and everything is hunky-dory according to the compiler because NullWriter fulfills the contract of io.Writer, via the embedded interface. However, running the code above will result in the following:

panic: runtime error: invalid memory address or nil pointer dereference

What happened? An interface method in Go is essentially a function pointer. In this case, since we are pointing to the function of an interface, rather than an actual method implementation, we are trying to invoke a function that's actually a nil pointer. To prevent this from happening, we would have to provide the NullWriter with a struct that fulfills the interface contract, with actual implemented methods.

func main() {
    w := NullWriter{
        Writer: &bytes.Buffer{},
    }

    w.Write([]byte{1, 2, 3})
}

NOTE: In the above example, Writer is referring to the embedded io.Writer interface. It is also possible to invoke the Write method by accessing this property with w.Writer.Write().

We are no longer triggering a panic and can now use the NullWriter as a Writer. This initialisation process is not much different from having properties that are initialised as nil, as discussed previously. Therefore, logically, we should try to handle them in a similar way. However, this is where embedded interfaces become a little difficult to work with. In a previous section, it was explained that the best way to handle potential nil values is to make the property in question private and create a public getter method. This way, we could ensure that our property is, in fact, not nil. Unfortunately, this is simply not possible with embedded interfaces, as they are by nature always public.

Another concern raised by using embedded interfaces is the potential confusion caused by partially overwritten interface methods:

type MyReadCloser struct {
  io.ReadCloser
}

func (closer *MyReadCloser) Read(data []byte) { ... }

func main() {
  closer := MyReadCloser{}

  closer.Read([]byte{1, 2, 3})  // invokes our own Read method; works fine
  closer.Close()                // promoted from the nil embedded interface; panics
  closer.ReadCloser.Close()     // the same call made explicit; also panics
}

Even though this might look like we're overriding methods, which is common in languages such as C# and Java, we actually aren't. Go doesn't support inheritance (and thus has no notion of a superclass). We can imitate the behaviour, but it is not a built-in part of the language. By using techniques such as interface embedding without caution, we can create confusing and potentially buggy code, just to save a few more lines.

NOTE: Some argue that using embedded interfaces is a good way of creating a mock structure for testing a subset of interface methods. Essentially, by using an embedded interface, you won't have to implement all of the methods of the interface; rather, you can choose to implement only the few methods that you'd like to test. Within the context of testing/mocking, I can see this argument, but I am still not a fan of this approach.
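For completeness, here's a sketch of what that mocking approach looks like; UserStore here is an assumed interface with more methods than the test cares about:

type UserStore interface {
    Get(id int64) (User, error)
    Insert(user User) error
    Delete(id int64) error
}

type mockUserStore struct {
    UserStore // embedded: the unimplemented methods exist, but are nil and panic if called
}

// The only method our test actually needs.
func (mock *mockUserStore) Get(id int64) (User, error) {
    return User{ID: id}, nil
}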

Let's quickly get back to clean code and proper usage of interfaces. It's time to discuss using interfaces as function parameters and return values. The most common proverb for interface usage with functions in Go is the following:

Be conservative in what you do; be liberal in what you accept from others – Jon Postel

FUN FACT: This proverb actually has nothing to do with Go. It's taken from an early specification of the TCP networking protocol.

In other words, you should write functions that accept an interface and return a concrete type. This is generally good practice and is especially useful when doing tests with mocking. As an example, we can create a function that takes a writer interface as its input and invokes the Write method of that interface:

type Pipe struct {
    writer io.Writer
    buffer bytes.Buffer
}

func NewPipe(w io.Writer) *Pipe {
    return &Pipe{
        writer: w,
    }
} 

func (pipe *Pipe) Save() error {
    if _, err := pipe.writer.Write(pipe.FlushBuffer()); err != nil {
        return err
    }
    return nil
}

Let's assume that we are writing to a file when our application is running, but we don't want to write to a new file for all tests that invoke this function. We can implement a new mock type that will basically do nothing. Essentially, this is just basic dependency injection and mocking, but the point is that it is extremely easy to achieve in Go:

type NullWriter struct {}

func (w *NullWriter) Write(data []byte) (int, error) {
    return len(data), nil
}

func TestFn(t *testing.T) {
    ...
    pipe := NewPipe(&NullWriter{})
    ...
}

NOTE: There is actually already a null writer implementation built into the ioutil package named Discard.

If we construct our Pipe struct with a NullWriter (rather than a different writer), then invoking our Save function will do nothing at all. The only thing we had to do was add four lines of code. This is why it is encouraged to make interfaces as small as possible in idiomatic Go—it makes it especially easy to implement patterns like the one we just saw. However, this implementation of interfaces also comes with a huge downside.

The Empty interface{}

Unlike other languages, Go does not have an implementation for generics. There have been many proposals for one, but all have been turned down by the Go language team. Unfortunately, without generics, developers must try to find creative alternatives, which very often involves using the empty interface{}. This section describes why these often too creative implementations should be considered bad practice and unclean code. There will also be examples of appropriate usage of the empty interface{} and how to avoid some pitfalls of writing code with it.

As mentioned in a previous section, Go determines whether a concrete type implements a particular interface by checking whether the type implements the methods of that interface. So what happens if our interface declares no methods, as is the case with the empty interface?

type EmptyInterface interface {}

The above is equivalent to the built-in type interface{}. A natural consequence of this is that we can write generic functions that accept any type as arguments. This is extremely useful for certain kinds of functions, such as print helpers. Interestingly, this is actually what makes it possible to pass in any type to the Println function from the fmt package:

func Println(v ...interface{}) {
    ...
}

In this case, Println isn't just accepting a single interface{}; rather, the function accepts a slice of types that implement the empty interface{}. As there are no methods associated with the empty interface{}, all types are accepted, even making it possible to feed Println with a slice of different types. This is a very common pattern when handling string conversion (both from and to a string). Good examples of this come from the json standard library package:

func InsertItemHandler(w http.ResponseWriter, r *http.Request) {
    var item Item
    if err := json.NewDecoder(r.Body).Decode(&item); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    if err := db.InsertItem(item); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusOK)
}

All the less elegant code is contained within the Decode function. Thus, developers using this functionality won't have to worry about type reflection or type casting; we just have to worry about providing a pointer to a concrete type. This is good because we end up working with a concrete type: We pass in our Item value, which will be populated from the body of the HTTP request. This means we won't have to deal with the potential risks of handling the interface{} value ourselves.

However, even when using the empty interface{} with good programming practices, we still have some issues. If we pass in a JSON string that has nothing to do with our Item type but is still valid JSON, we won't receive an error—our item variable will just be left with the default values. So, while we don't have to worry about reflection and casting errors, we will still have to make sure that the message sent from our client is a valid Item type. Unfortunately, as of writing this document, there is no simple or good way to implement these types of generic decoders without using the empty interface{} type.
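A quick illustration of that silent failure mode (the payload here is made up for demonstration):

var item Item
payload := strings.NewReader(`{"unrelated": "json"}`)

// Valid JSON, so Decode returns no error -- but item is left
// with nothing except its zero values.
if err := json.NewDecoder(payload).Decode(&item); err != nil {
    // never reached for this payload
}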

The problem with using interface{} in this manner is that we are leaning towards using Go, a statically typed language, as a dynamically typed language. This becomes even clearer when looking at poor implementations of the interface{} type. The most common example of this comes from developers trying to implement a generic store or list of some sort.

Let's look at an example of trying to implement a generic HashMap package that can store any type using interface{}.

type HashMap struct {
    store map[string]interface{}
}

func (hashmap *HashMap) Insert(key string, value interface{}) {
    hashmap.store[key] = value
}

func (hashmap *HashMap) Get(key string) (interface{}, error) {
    value, ok := hashmap.store[key]
    if !ok {
        return nil, ErrKeyNotFoundInHashMap
    }
    return value, nil
}

NOTE: I have omitted thread safety from this example to keep it simple.

Please keep in mind that the implementation pattern shown above is actually used in quite a lot of Go packages. It is even used in the standard library sync package for the sync.Map type. So what's the problem with this implementation? Well, let's have a look at an example of using the package:

func SomeFunction(id string) (Item, error) {
    itemIface, err := hashmap.Get(id)
    if err != nil {
        return EmptyItem, err
    }
    item, ok := itemIface.(Item)
    if !ok {
        return EmptyItem, ErrCastingItem
    }
    return item, nil
}

At first glance, this looks fine. However, we'll start getting into trouble if we add different types to our store, something that's currently allowed. There is nothing preventing us from adding something other than the Item type. So what happens when someone starts adding other types into our HashMap, like a pointer *Item instead of an Item? Our function now might return an error. Worst of all, this might not even be caught by our tests. Depending on the complexity of the system, this could introduce some bugs that are particularly difficult to debug.
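To make that failure mode concrete, here's a hypothetical misuse using the same names as above:

hashmap.Insert("a", Item{})  // stored as an Item
hashmap.Insert("b", &Item{}) // compiles just as happily, but is stored as an *Item

item, err := SomeFunction("b") // the .(Item) assertion fails, so we get ErrCastingItem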

This type of code should never reach production. Remember: Go does not (yet) support generics. That's just a fact that developers must accept for the time being. If we want to use generics, then we should use a different language that does support generics rather than relying on dangerous hacks.

So, how do we prevent this code from reaching production? The simplest solution is to just write the functions with concrete types instead of using interface{} values. Of course, this is not always the best approach, as there might be some functionality within the package that is not trivial to implement ourselves. Therefore, a better approach may be to create wrappers that expose the functionality we need but still ensure type safety:

type ItemCache struct {
  kv tinykv.KV
} 

func (cache *ItemCache) Get(id string) (Item, error) {
  value, ok := cache.kv.Get(id)
  if !ok {
    return EmptyItem, ErrItemNotFound
  }
  return interfaceToItem(value)
}

func interfaceToItem(v interface{}) (Item, error) {
  item, ok := v.(Item)
  if !ok {
    return EmptyItem, ErrCouldNotCastItem
  }
  return item, nil
}

func (cache *ItemCache) Put(id string, item Item) error {
  return cache.kv.Put(id, item)
}

NOTE: Implementations of other functionalities of the tinykv.KV cache have been omitted for the sake of brevity.

The wrapper above now ensures that we are using the actual types and that we are no longer passing in interface{} types. It is therefore no longer possible to accidentally populate our store with a wrong value type, and we have restricted our casting of types as much as possible. This is a very straightforward way of solving our issue, even if somewhat manually.

Summary

First of all, thank you for making it all the way through this article! I hope it has provided some insight into clean code and how it helps ensure maintainability, readability, and stability in any codebase.

Let's briefly sum up all the topics we've covered:

Functions—A function's name should reflect its scope; the smaller the scope of a function, the more specific its name. Ensure that all functions serve a single purpose in as few lines as possible. A good rule of thumb is to limit your functions to 5–8 lines and to only accept 2–3 arguments.

Variables—Unlike functions, variables should assume more generic names as their scope becomes smaller. It's also recommended that you limit the scope of a variable as much as possible to prevent unintentional modification. On a similar note, you should keep the modification of variables to a minimum; this becomes an especially important consideration as the scope of a variable grows.

Return Values—Concrete types should be returned whenever possible. Make it as difficult as possible for users of your package to make mistakes and as easy as possible for them to understand the values returned by your functions.

Pointers—Use pointers with caution, and limit their scope and mutability to an absolute minimum. Remember: Garbage collection only assists with memory management; it does not assist with all of the other complexities associated with pointers.

Interfaces—Use interfaces as much as possible to loosen the coupling of your code. Hide any code that uses the empty interface{} from your package's users as much as possible, so that they aren't exposed to its pitfalls.

As a final note, it's worth mentioning that the notion of clean code is particularly subjective, and that likely won't ever change. However, much like my statement concerning gofmt, I think it's more important to find a common standard than something that everyone agrees with; the latter is extremely difficult to achieve.

It's also important to understand that fanaticism is never the goal with clean code. A codebase will most likely never be fully 'clean,' in the same way that your office desk probably isn't either. There's certainly room for you to step outside the rules and boundaries covered in this article. However, remember that the most important reason for writing clean code is to help yourself and other developers. We support engineers by ensuring stability in the software we produce and by making it easier to debug faulty code. We help our fellow developers by ensuring that our code is readable and easily digestible. We help everyone involved in the project by establishing a flexible codebase that allows us to quickly introduce new features without breaking our current platform. We move quickly by going slowly, and everyone is satisfied.

I hope you will join this discussion to help the Go community define (and refine) the concept of clean code. Let's establish a common ground so that we can improve software—not only for ourselves but for the sake of everyone.

Author: Pungyeon
Source Code: https://github.com/Pungyeon/clean-go-article 
License: 

#go #golang #clean 


Go-cleanhttp: Functions for Accessing "clean" Go Http.Client Values

cleanhttp

Functions for accessing "clean" Go http.Client values


The Go standard library contains a default http.Client called http.DefaultClient. It is a common idiom in Go code to start with http.DefaultClient and tweak it as necessary, and in fact, this is encouraged; from the http package documentation:

The Client's Transport typically has internal state (cached TCP connections), so Clients should be reused instead of created as needed. Clients are safe for concurrent use by multiple goroutines.

Unfortunately, this is a shared value, and it is not uncommon for libraries to assume that they are free to modify it at will. With enough dependencies, it can be very easy to encounter strange problems and race conditions due to manipulation of this shared value across libraries and goroutines (clients are safe for concurrent use, but writing values to the client struct itself is not protected).

Making things worse is the fact that a bare http.Client will use a default http.Transport called http.DefaultTransport, which is another global value that behaves the same way. So it is not simply enough to replace http.DefaultClient with &http.Client{}.
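To illustrate the problem with a minimal, hypothetical sketch using only the standard library:

// A bare client has a nil Transport, so its requests fall back to the
// shared, package-global http.DefaultTransport.
client := &http.Client{}

// Any library that mutates the default transport therefore changes the
// behaviour of our "new" client as well.
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 1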

This repository provides some simple functions to get a "clean" http.Client -- one that uses the same default values as the Go standard library but does not share any state with other clients.

Author: Hashicorp
Source Code: https://github.com/hashicorp/go-cleanhttp 
License: MPL-2.0 License

#go #golang #clean #http 


BlazorHero: Clean Architecture Template

BlazorHero - Clean Architecture Template

Open Sourced Solution Template For Blazor Web-Assembly 5.0 built with MudBlazor Components 

About The Project ⚡

BlazorHero is a Clean Architecture Solution Template for Blazor Webassembly 5.0 built with MudBlazor Components.

Complete Overview - Youtube Video 🆕 📈

So, here is an in-depth video that takes you through the BlazorHero Project! Do Like & Subscribe to my Youtube channel! It would be great if you could leave behind your valuable feedback in the comments section of the Video. This helps me reach a much wider audience with time :)

Watch it here!

Blazor Hero - Clean Architecture Solution Template for Blazor WebAssembly

Tech Stack 💪

BlazorHero v2.2

  • UI Improvements
  • Docker Support
  • Better Permissions Management
  • Code Cleanups
  • RTL Support
  • Minor Bug Fixes
  • Better Project Structure

What to Expect in BlazorHero 3.0?

  • Modular Architecture
  • Cleaner Separation Of Code
  • Dedicated Documentation Website - Here
  • Tutorials to add new entities, controllers
  • UI Updates
  • Support for PostgreSQL / MySQL - Easy DB Switching
  • Theme Manager Integration to change UI Color Palettes / Fonts on the go.
  • You can suggest your requirements as well!

Down the Roadmap

  • Migration to .NET 6
  • Multi Tenancy
  • Better Localization - JSON

Getting Started 🦸

Important: If you are already using Blazor Hero v1.x, make sure that you drop your existing database and update it again using the CLI, as there are a couple of new migrations added that might clash with your existing schema. Also, install the latest version of BlazorHero.

The easiest way to get started with Blazor Hero is to install the NuGet package and run dotnet new BlazorHero.CleanArchitecture:

  1. Install the latest .NET 5 SDK
  2. Install the latest DOTNET & EF CLI Tools by using this command dotnet tool install --global dotnet-ef
  3. Install the latest version of Visual Studio IDE 2019 (v16.8 and above) 🚀
  4. Open up Command Prompt and run dotnet new --install BlazorHero.CleanArchitecture to install the project template
  5. Create a folder for your solution and cd into it (the template will use it as project name)
  6. Run dotnet new BlazorHero.CleanArchitecture to create a new Solution with all the Awesomeness 🕶️ of BlazorHero 🦸

What to do next? Read the entire guide on my blog.

Getting Started with Docker in Windows :rocket:

  • Install Docker on Windows via https://docs.docker.com/docker-for-windows/install/
  • Open up Powershell on Windows and run the following
    • cd c:\
    • dotnet dev-certs https -ep $env:USERPROFILE\.aspnet\https\aspnetapp.pfx -p securePassword123
    • dotnet dev-certs https --trust
    • Note - Make sure that you use the same password that has been configured in the docker-compose.yml file. By default, securePassword123 is configured.
  • 5005 & 5006 are the ports set up to run BlazorHero on Docker, so make sure that these ports are free. You could also change the ports in the docker-compose.yml and Server\Dockerfile files.
  • Now navigate back to the root of the BlazorHero Project on your local machine and run the following via terminal - docker-compose -f 'docker-compose.yml' up --build
  • This will start pulling the MSSQL Server image from Docker Hub if you don't already have it. It's around 500+ MB of download.
  • Once that is done, the .NET SDKs and runtimes are downloaded, if not already present. That's almost 200+ more MB of download.
  • PS: If you find any issues while Docker installs the NuGet packages, it is most likely that your SSL certificates are not installed properly. Apart from that, I also added the --disable-parallel option in the Server\Dockerfile to ensure network issues don't pop up. You can remove this option to speed up the build process.
  • That's almost everything. Once the containers are available, migrations are applied to the MSSQL database and default data is seeded.
  • Browse to https://localhost:5005/ to use your version of BlazorHero!

Complete Documentation 🚀

Getting started with Blazor Hero – a Clean Architecture template built for Blazor WebAssembly using MudBlazor components. This project will make your Blazor learning process much easier than you anticipate. Blazor Hero is meant to be an enterprise-level boilerplate that comes free of cost and completely open-sourced.

The provided documentation / guide will get you started with BlazorHero in no time. It provides a complete walkthrough for the project with to-the-point guides and notes.

Read the Quick Start Guide

Features

All the completed and the upcoming features are mentioned in the Features.MD File

Contributing

Contributions are what make the open-source community such an amazing place to be, learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Here are a few contributions that I would highly appreciate ;)

  •  Need someone to add in the API Documentation for Swagger.
  •  Need someone to implement localization throughout every Razor component of the solution under the WASM (Client) project. You can take Pages/Authentication/Login.razor as the point of reference. It is as simple as adding @inject Microsoft.Extensions.Localization.IStringLocalizer<Login> localizer to every page, changing the texts to @localizer["Text Here"], and finally adding resx files to the Resources folder as per the folder structure.
  •  Need a few contributors to add in various language translations as per the implemented localization. I have only had time to add a few French translations so far.
  •  Need a UI contributor to look at the UX/UI of the entire project.
  •  Need someone to build a cool Material logo for BlazorHero (BH) :D Do contact me on LinkedIn (https://www.linkedin.com/in/iammukeshm/).
  •  And finally, Stars from everyone! :D

Contact

Mukesh Murugan

Support ⭐

Has this project helped you learn something new or helped you at work? Do consider supporting. Here are a few ways by which you can support.

  • Leave a star! ⭐
  • Recommend this awesome project to your colleagues. 🥇
  • Leave your feedback / comments regarding this project in the comments section on my blog Blazor Hero Blog
  • Do consider endorsing me on LinkedIn for ASP.NET Core - Connect via LinkedIn 🦸
  • Or, if you want to support this project in the long run, consider buying me a coffee! ☕

Read the Documentation » 

Author: Blazorhero
Source Code: https://github.com/blazorhero/CleanArchitecture 
License: MIT License

#blazor #clean #webassembly 

BlazorHero: Clean Architecture Template
Desmond Gerber

Clean-css: Fast and Efficient CSS Optimizer for Node.js and The Web

clean-css is a fast and efficient CSS optimizer for the Node.js platform and any modern browser.

According to tests, it is one of the best available.

Node.js version support

clean-css requires Node.js 10.0+ (tested on Linux, OS X, and Windows)

Install

npm install --save-dev clean-css

Use

var CleanCSS = require('clean-css');
var input = 'a{font-weight:bold;}';
var options = { /* options */ };
var output = new CleanCSS(options).minify(input);

What's new in version 5.0

clean-css 5.0 introduces some breaking changes:

  • Node.js 6.x and 8.x are officially no longer supported;
  • transform callback in level-1 optimizations is removed in favor of new plugins interface;
  • changes default Internet Explorer compatibility from 10+ to >11, to revert the old default use { compatibility: 'ie10' } flag;
  • changes default rebase option from true to false so URLs are not rebased by default. Please note that if you set rebaseTo option it still counts as setting rebase: true to preserve some of the backward compatibility.

And on the new features side of things:

  • format options now accepts numerical values for all breaks, which will allow you to have more control over output formatting, e.g. format: {breaks: {afterComment: 2}} means clean-css will add two line breaks after each comment
  • a new batch option (defaults to false) is added, when set to true it will process all inputs, given either as an array or a hash, without concatenating them.

What's new in version 4.2

clean-css 4.2 introduces the following changes / features:

  • Adds process method for compatibility with optimize-css-assets-webpack-plugin;
  • new transition property optimizer;
  • preserves any CSS content between /* clean-css ignore:start */ and /* clean-css ignore:end */ comments;
  • allows filtering based on selector in transform callback, see example;
  • adds configurable line breaks via format: { breakWith: 'lf' } option.

What's new in version 4.1

clean-css 4.1 introduces the following changes / features:

  • inline: false as an alias to inline: ['none'];
  • multiplePseudoMerging compatibility flag controlling merging of rules with multiple pseudo classes / elements;
  • removeEmpty flag in level 1 optimizations controlling removal of rules and nested blocks;
  • removeEmpty flag in level 2 optimizations controlling removal of rules and nested blocks;
  • compatibility: { selectors: { mergeLimit: <number> } } flag in compatibility settings controlling maximum number of selectors in a single rule;
  • minify method improved signature accepting a list of hashes for a predictable traversal;
  • selectorsSortingMethod level 1 optimization allows false or 'none' for disabling selector sorting;
  • fetch option controlling a function for handling remote requests;
  • new font shorthand and font-* longhand optimizers;
  • removal of optimizeFont flag in level 1 optimizations due to new font shorthand optimizer;
  • skipProperties flag in level 2 optimizations controlling which properties won't be optimized;
  • new animation shorthand and animation-* longhand optimizers;
  • removeUnusedAtRules level 2 optimization controlling removal of unused @counter-style, @font-face, @keyframes, and @namespace at rules;
  • the web interface gets an improved settings panel with "reset to defaults", instant option changes, and settings being persisted across sessions.

Important: 4.0 breaking changes

clean-css 4.0 introduces some breaking changes:

  • API and CLI interfaces are split, so API stays in this repository while CLI moves to clean-css-cli;
  • root, relativeTo, and target options are replaced by a single rebaseTo option - this means that rebasing URLs and import inlining is much simpler but may not be (YMMV) as powerful as in 3.x;
  • debug option is gone as stats are always provided in output object under stats property;
  • roundingPrecision is disabled by default;
  • roundingPrecision applies to all units now, not only px as in 3.x;
  • processImport and processImportFrom are merged into inline option which defaults to local. Remote @import rules are NOT inlined by default anymore;
  • splits inliner: { request: ..., timeout: ... } option into inlineRequest and inlineTimeout options;
  • remote resources without a protocol, e.g. //fonts.googleapis.com/css?family=Domine:700, are not inlined anymore;
  • changes default Internet Explorer compatibility from 9+ to 10+, to revert the old default use { compatibility: 'ie9' } flag;
  • renames keepSpecialComments to specialComments;
  • moves roundingPrecision and specialComments to level 1 optimizations options, see examples;
  • moves mediaMerging, restructuring, semanticMerging, and shorthandCompacting to level 2 optimizations options, see examples below;
  • renames shorthandCompacting option to mergeIntoShorthands;
  • level 1 optimizations are the new default, up to 3.x it was level 2;
  • keepBreaks option is replaced with { format: 'keep-breaks' } to ease transition;
  • sourceMap option has to be a boolean from now on - to specify an input source map pass it a 2nd argument to minify method or via a hash instead;
  • aggressiveMerging option is removed as aggressive merging is replaced by smarter override merging.

Constructor options

The clean-css constructor accepts a hash as a parameter with the following options available (a combined example follows the list):

  • compatibility - controls compatibility mode used; defaults to ie10+; see compatibility modes for examples;
  • fetch - controls a function for handling remote requests; see fetch option for examples (since 4.1.0);
  • format - controls output CSS formatting; defaults to false; see formatting options for examples;
  • inline - controls @import inlining rules; defaults to 'local'; see inlining options for examples;
  • inlineRequest - controls extra options for inlining remote @import rules, can be any of HTTP(S) request options;
  • inlineTimeout - controls number of milliseconds after which inlining a remote @import fails; defaults to 5000;
  • level - controls optimization level used; defaults to 1; see optimization levels for examples;
  • rebase - controls URL rebasing; defaults to false;
  • rebaseTo - controls a directory to which all URLs are rebased, most likely the directory under which the output file will live; defaults to the current directory;
  • returnPromise - controls whether minify method returns a Promise object or not; defaults to false; see promise interface for examples;
  • sourceMap - controls whether an output source map is built; defaults to false;
  • sourceMapInlineSources - controls embedding sources inside a source map's sourcesContent field; defaults to false.
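
For instance, several of these options can be combined in one hash; the values below are purely illustrative:

new CleanCSS({
  compatibility: 'ie9',  // see compatibility modes below
  format: 'keep-breaks', // see formatting options below
  inline: ['local'],     // see inlining options below
  level: 2,              // see optimization levels below
  returnPromise: true    // makes `minify` return a Promise; see promise interface below
})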

Compatibility modes

There are a number of compatibility mode shortcuts, namely:

  • new CleanCSS({ compatibility: '*' }) (default) - Internet Explorer 10+ compatibility mode
  • new CleanCSS({ compatibility: 'ie9' }) - Internet Explorer 9+ compatibility mode
  • new CleanCSS({ compatibility: 'ie8' }) - Internet Explorer 8+ compatibility mode
  • new CleanCSS({ compatibility: 'ie7' }) - Internet Explorer 7+ compatibility mode

Each of these modes is an alias to a fine grained configuration, with the following options available:

new CleanCSS({
  compatibility: {
    colors: {
      hexAlpha: false, // controls 4- and 8-character hex color support
      opacity: true // controls `rgba()` / `hsla()` color support
    },
    properties: {
      backgroundClipMerging: true, // controls background-clip merging into shorthand
      backgroundOriginMerging: true, // controls background-origin merging into shorthand
      backgroundSizeMerging: true, // controls background-size merging into shorthand
      colors: true, // controls color optimizations
      ieBangHack: false, // controls keeping IE bang hack
      ieFilters: false, // controls keeping IE `filter` / `-ms-filter`
      iePrefixHack: false, // controls keeping IE prefix hack
      ieSuffixHack: false, // controls keeping IE suffix hack
      merging: true, // controls property merging based on understandability
      shorterLengthUnits: false, // controls shortening pixel units into `pc`, `pt`, or `in` units
      spaceAfterClosingBrace: true, // controls keeping space after closing brace - `url() no-repeat` into `url()no-repeat`
      urlQuotes: true, // controls keeping quoting inside `url()`
      zeroUnits: true // controls removal of units `0` value
    },
    selectors: {
      adjacentSpace: false, // controls extra space before `nav` element
      ie7Hack: true, // controls removal of IE7 selector hacks, e.g. `*+html...`
      mergeablePseudoClasses: [':active', ...], // controls a whitelist of mergeable pseudo classes
      mergeablePseudoElements: ['::after', ...], // controls a whitelist of mergeable pseudo elements
      mergeLimit: 8191, // controls maximum number of selectors in a single rule (since 4.1.0)
      multiplePseudoMerging: true // controls merging of rules with multiple pseudo classes / elements (since 4.1.0)
    },
    units: {
      ch: true, // controls treating `ch` as a supported unit
      in: true, // controls treating `in` as a supported unit
      pc: true, // controls treating `pc` as a supported unit
      pt: true, // controls treating `pt` as a supported unit
      rem: true, // controls treating `rem` as a supported unit
      vh: true, // controls treating `vh` as a supported unit
      vm: true, // controls treating `vm` as a supported unit
      vmax: true, // controls treating `vmax` as a supported unit
      vmin: true // controls treating `vmin` as a supported unit
    }
  }
})

You can also use a string when setting a compatibility mode, e.g.

new CleanCSS({
  compatibility: 'ie9,-properties.merging' // sets compatibility to IE9 mode with disabled property merging
})

Fetch option

The fetch option accepts a function which handles remote resource fetching, e.g.

var request = require('request');
var source = '@import url(http://example.com/path/to/stylesheet.css);';
new CleanCSS({
  fetch: function (uri, inlineRequest, inlineTimeout, callback) {
    request(uri, function (error, response, body) {
      if (error) {
        callback(error, null);
      } else if (response && response.statusCode != 200) {
        callback(response.statusCode, null);
      } else {
        callback(null, body);
      }
    });
  }
}).minify(source);

This option provides a convenient way of overriding the default fetching logic if it doesn't support a particular feature, say CONNECT proxies.

Unless given, the default loadRemoteResource logic is used.

Formatting options

By default, output CSS is formatted without any whitespace unless a format option is given. First of all, there are two shorthands:

new CleanCSS({
  format: 'beautify' // formats output in a really nice way
})

and

new CleanCSS({
  format: 'keep-breaks' // formats output the default way but adds line breaks for improved readability
})

However, the format option also accepts a fine-grained set of options:

new CleanCSS({
  format: {
    breaks: { // controls where to insert breaks
      afterAtRule: false, // controls if a line break comes after an at-rule; e.g. `@charset`; defaults to `false`
      afterBlockBegins: false, // controls if a line break comes after a block begins; e.g. `@media`; defaults to `false`
      afterBlockEnds: false, // controls if a line break comes after a block ends, defaults to `false`
      afterComment: false, // controls if a line break comes after a comment; defaults to `false`
      afterProperty: false, // controls if a line break comes after a property; defaults to `false`
      afterRuleBegins: false, // controls if a line break comes after a rule begins; defaults to `false`
      afterRuleEnds: false, // controls if a line break comes after a rule ends; defaults to `false`
      beforeBlockEnds: false, // controls if a line break comes before a block ends; defaults to `false`
      betweenSelectors: false // controls if a line break comes between selectors; defaults to `false`
    },
    breakWith: '\n', // controls the new line character, can be `'\r\n'` or `'\n'` (aliased as `'windows'` and `'unix'` or `'crlf'` and `'lf'`); defaults to system one, so former on Windows and latter on Unix
    indentBy: 0, // controls number of characters to indent with; defaults to `0`
    indentWith: 'space', // controls a character to indent with, can be `'space'` or `'tab'`; defaults to `'space'`
    spaces: { // controls where to insert spaces
      aroundSelectorRelation: false, // controls if spaces come around selector relations; e.g. `div > a`; defaults to `false`
      beforeBlockBegins: false, // controls if a space comes before a block begins; e.g. `.block {`; defaults to `false`
      beforeValue: false // controls if a space comes before a value; e.g. `width: 1rem`; defaults to `false`
    },
    wrapAt: false, // controls maximum line length; defaults to `false`
    semicolonAfterLastProperty: false // controls removing trailing semicolons in rule; defaults to `false` - means remove
  }
})

Also, since clean-css 5.0 you can use numerical values for all line breaks, which will repeat a line break that many times, e.g.:

  new CleanCSS({
    format: {
      breaks: {
        afterAtRule: 2,
        afterBlockBegins: 1, // 1 is synonymous with `true`
        afterBlockEnds: 2,
        afterComment: 1,
        afterProperty: 1,
        afterRuleBegins: 1,
        afterRuleEnds: 1,
        beforeBlockEnds: 1,
        betweenSelectors: 0 // 0 is synonymous with `false`
      }
    }
  })

which will add nicer spacing between at-rules and blocks.

Inlining options

The inline option whitelists which @import rules will be processed, e.g.

new CleanCSS({
  inline: ['local'] // default; enables local inlining only
})
new CleanCSS({
  inline: ['none'] // disables all inlining
})
new CleanCSS({
  inline: false // disables all inlining (alias to `['none']`, introduced in clean-css 4.1.0)
})
new CleanCSS({
  inline: ['all'] // enables all inlining, same as ['local', 'remote']
})
new CleanCSS({
  inline: ['local', 'mydomain.example.com'] // enables local inlining plus given remote source
})
new CleanCSS({
  inline: ['local', 'remote', '!fonts.googleapis.com'] // enables all inlining but from given remote source
})

Optimization levels

The level option can be either 0, 1 (default), or 2, e.g.

new CleanCSS({
  level: 2
})

or a fine-grained configuration given via a hash.

Please note that level 1 optimization options are generally safe while level 2 optimizations should be safe for most users.

Level 0 optimizations

Level 0 optimizations simply means "no optimizations". Use it when you'd like to inline imports and / or rebase URLs but skip everything else.
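
A minimal sketch of that use case (assuming `source` holds your CSS and `pathToOutputDirectory` is a variable you supply):

// Inline local imports and rebase URLs, but apply no optimizations.
var output = new CleanCSS({
  level: 0,
  inline: ['local'],
  rebaseTo: pathToOutputDirectory
}).minify(source);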

Level 1 optimizations

Level 1 optimizations (default) operate on single properties only, e.g. they can remove units when not required, turn rgb colors into a shorter hex representation, remove comments, etc.
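
As a quick illustration of what that means in practice (the output shown is what the documented defaults suggest, so treat it as a sketch):

var CleanCSS = require('clean-css');

// Level 1 only touches individual properties, e.g. colors and zero units:
var output = new CleanCSS({ level: 1 }).minify('a { color: rgb(0, 0, 255); margin: 0px }');
console.log(output.styles); // expected: 'a{color:#00f;margin:0}'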

Here is a full list of available options:

new CleanCSS({
  level: {
    1: {
      cleanupCharsets: true, // controls `@charset` moving to the front of a stylesheet; defaults to `true`
      normalizeUrls: true, // controls URL normalization; defaults to `true`
      optimizeBackground: true, // controls `background` property optimizations; defaults to `true`
      optimizeBorderRadius: true, // controls `border-radius` property optimizations; defaults to `true`
      optimizeFilter: true, // controls `filter` property optimizations; defaults to `true`
      optimizeFont: true, // controls `font` property optimizations; defaults to `true`
      optimizeFontWeight: true, // controls `font-weight` property optimizations; defaults to `true`
      optimizeOutline: true, // controls `outline` property optimizations; defaults to `true`
      removeEmpty: true, // controls removing empty rules and nested blocks; defaults to `true`
      removeNegativePaddings: true, // controls removing negative paddings; defaults to `true`
      removeQuotes: true, // controls removing quotes when unnecessary; defaults to `true`
      removeWhitespace: true, // controls removing unused whitespace; defaults to `true`
      replaceMultipleZeros: true, // controls removing redundant zeros; defaults to `true`
      replaceTimeUnits: true, // controls replacing time units with shorter values; defaults to `true`
      replaceZeroUnits: true, // controls replacing zero values with units; defaults to `true`
      roundingPrecision: false, // rounds pixel values to `N` decimal places; `false` disables rounding; defaults to `false`
      selectorsSortingMethod: 'standard', // denotes selector sorting method; can be `'natural'` or `'standard'`, `'none'`, or false (the last two since 4.1.0); defaults to `'standard'`
      specialComments: 'all', // denotes a number of /*! ... */ comments preserved; defaults to `all`
      tidyAtRules: true, // controls at-rules (e.g. `@charset`, `@import`) optimizing; defaults to `true`
      tidyBlockScopes: true, // controls block scopes (e.g. `@media`) optimizing; defaults to `true`
      tidySelectors: true, // controls selectors optimizing; defaults to `true`
    }
  }
});

There is an all shortcut for toggling all options at the same time, e.g.

new CleanCSS({
  level: {
    1: {
      all: false, // set all values to `false`
      tidySelectors: true // turns on optimizing selectors
    }
  }
});

Level 2 optimizations

Level 2 optimizations operate on whole rules or on multiple properties at a time, e.g. they can remove duplicate rules, remove properties redefined further down a stylesheet, or restructure rules by moving them around.

Please note that if level 2 optimizations are turned on then, unless explicitly disabled, level 1 optimizations are applied as well.
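
For a quick sense of the difference, level 2 can collapse rules that level 1 leaves alone; a small sketch (the output shown is what the documented defaults suggest):

// `mergeAdjacentRules` (on by default at level 2) merges these two rules:
var output = new CleanCSS({ level: 2 }).minify('a{color:red}a{margin:0}');
console.log(output.styles); // expected: 'a{color:red;margin:0}'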

Here is a full list of available options:

new CleanCSS({
  level: {
    2: {
      mergeAdjacentRules: true, // controls adjacent rules merging; defaults to true
      mergeIntoShorthands: true, // controls merging properties into shorthands; defaults to true
      mergeMedia: true, // controls `@media` merging; defaults to true
      mergeNonAdjacentRules: true, // controls non-adjacent rule merging; defaults to true
      mergeSemantically: false, // controls semantic merging; defaults to false
      overrideProperties: true, // controls property overriding based on understandability; defaults to true
      removeEmpty: true, // controls removing empty rules and nested blocks; defaults to `true`
      reduceNonAdjacentRules: true, // controls non-adjacent rule reducing; defaults to true
      removeDuplicateFontRules: true, // controls duplicate `@font-face` removing; defaults to true
      removeDuplicateMediaBlocks: true, // controls duplicate `@media` removing; defaults to true
      removeDuplicateRules: true, // controls duplicate rules removing; defaults to true
      removeUnusedAtRules: false, // controls unused at rule removing; defaults to false (available since 4.1.0)
      restructureRules: false, // controls rule restructuring; defaults to false
      skipProperties: [] // controls which properties won't be optimized, defaults to `[]` which means all will be optimized (since 4.1.0)
    }
  }
});

There is an all shortcut for toggling all options at the same time, e.g.

new CleanCSS({
  level: {
    2: {
      all: false, // sets all values to `false`
      removeDuplicateRules: true // turns on removing duplicate rules
    }
  }
});

Plugins

In clean-css version 5 and above you can define plugins which run alongside level 1 and level 2 optimizations, e.g.

var myPlugin = {
  level1: {
    property: function removeRepeatedBackgroundRepeat(_rule, property, _options) {
      // So `background-repeat:no-repeat no-repeat` becomes `background-repeat:no-repeat`
      if (property.name == 'background-repeat' && property.value.length == 2 && property.value[0][1] == property.value[1][1]) {
        property.value.pop();
        property.dirty = true;
      }
    }
  }
}

new CleanCSS({plugins: [myPlugin]})

Search test/module-test.js for plugins or check out lib/optimizer/level-1/property-optimizers and lib/optimizer/level-1/value-optimizers for more examples.

Important: To rewrite your old transform as a plugin, check out this commit.

Minify method

Once configured clean-css provides a minify method to optimize a given CSS, e.g.

var output = new CleanCSS(options).minify(source);

The output of the minify method is a hash with following fields:

console.log(output.styles); // optimized output CSS as a string
console.log(output.sourceMap); // output source map if requested with `sourceMap` option
console.log(output.errors); // a list of errors raised
console.log(output.warnings); // a list of warnings raised
console.log(output.stats.originalSize); // original content size after import inlining
console.log(output.stats.minifiedSize); // optimized content size
console.log(output.stats.timeSpent); // time spent on optimizations in milliseconds
console.log(output.stats.efficiency); // `(originalSize - minifiedSize) / originalSize`, e.g. 0.25 if size is reduced from 100 bytes to 75 bytes

Example: Minifying a CSS string:

const CleanCSS = require("clean-css");

const output = new CleanCSS().minify(`

  a {
    color: blue;
  }
  div {
    margin: 5px
  }

`);

console.log(output);

// Log:
{
  styles: 'a{color:#00f}div{margin:5px}',
  stats: {
    efficiency: 0.6704545454545454,
    minifiedSize: 29,
    originalSize: 88,
    timeSpent: 6
  },
  errors: [],
  inlinedStylesheets: [],
  warnings: []
}

The minify method also accepts an input source map, e.g.

var output = new CleanCSS(options).minify(source, inputSourceMap);

or a callback invoked when optimizations are finished, e.g.

new CleanCSS(options).minify(source, function (error, output) {
  // `output` is the same as in the synchronous call above
});

Promise interface

If you prefer clean-css to return a Promise object then you need to explicitly ask for it, e.g.

new CleanCSS({ returnPromise: true })
  .minify(source)
  .then(function (output) { console.log(output.styles); })
  .catch(function (error) { /* deal with errors */ });

CLI utility

Clean-css has an associated command line utility that can be installed separately using npm install clean-css-cli. For more detailed information, please visit https://github.com/clean-css/clean-css-cli.

FAQ

How to optimize multiple files?

It can be done either by passing an array of paths, or, when sources are already available, a hash or an array of hashes:

new CleanCSS().minify(['path/to/file/one', 'path/to/file/two']);
new CleanCSS().minify({
  'path/to/file/one': {
    styles: 'contents of file one'
  },
  'path/to/file/two': {
    styles: 'contents of file two'
  }
});
new CleanCSS().minify([
  {'path/to/file/one': {styles: 'contents of file one'}},
  {'path/to/file/two': {styles: 'contents of file two'}}
]);

Passing an array of hashes allows you to explicitly specify the order in which the input files are concatenated, whereas with a single hash the order is determined by the traversal order of object properties (available since 4.1.0).

Important note - any @import rules already present in the hash will be resolved in memory.

How to process multiple files without concatenating them into one output file?

Since clean-css 5.0 you can, when passing an array of paths, hash, or array of hashes (see above), ask clean-css not to join styles into one output, but instead return stylesheets optimized one by one, e.g.

var output = new CleanCSS({ batch: true }).minify(['path/to/file/one', 'path/to/file/two']);
var outputOfFile1 = output['path/to/file/one'].styles // all other fields, like errors, warnings, or stats are there too
var outputOfFile2 = output['path/to/file/two'].styles

How to process remote @imports correctly?

In order to inline remote @import statements you need to provide a callback to the minify method, as fetching remote assets is an asynchronous operation, e.g.:

var source = '@import url(http://example.com/path/to/remote/styles);';
new CleanCSS({ inline: ['remote'] }).minify(source, function (error, output) {
  // output.styles
});

If you don't provide a callback, then remote @imports will be left as is.
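
In other words, a synchronous call like the following sketch (reusing `source` from above) leaves the rule untouched:

var output = new CleanCSS({ inline: ['remote'] }).minify(source);
// output.styles still contains the remote `@import` rule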

How to apply arbitrary transformations to CSS properties?

Please see plugins.

How to specify a custom rounding precision?

The level 1 roundingPrecision optimization option accepts a string with per-unit rounding precision settings, e.g.

new CleanCSS({
  level: {
    1: {
      roundingPrecision: 'all=3,px=5'
    }
  }
}).minify(source)

which sets the rounding precision for all units to 3 digits, except the px unit, which gets a precision of 5 digits.

How to optimize a stylesheet with custom rpx units?

Since rpx is a non-standard unit (see #1074), it will be dropped by default as an invalid value.

However you can treat rpx units as regular ones:

new CleanCSS({
  compatibility: {
    customUnits: {
      rpx: true
    }
  }
}).minify(source)

How to keep a CSS fragment intact?

Note: available since 4.2.0.

Wrap the CSS fragment in special comments which instruct clean-css to preserve it, e.g.

.block-1 {
  color: red
}
/* clean-css ignore:start */
.block-special {
  color: transparent
}
/* clean-css ignore:end */
.block-2 {
  margin: 0
}

Optimizing this CSS will result in the following output:

.block-1{color:red}
.block-special {
  color: transparent
}
.block-2{margin:0}

How to preserve a comment block?

Use the /*! notation instead of the standard one /*:

/*!
  Important comments included in optimized output.
*/
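
To also control how many such comments survive, use the specialComments level 1 option listed earlier; a short sketch (assuming `source` holds your CSS):

// Keep only the first `/*! ... */` comment; `0` would strip them all, `'all'` keeps every one.
new CleanCSS({ level: { 1: { specialComments: 1 } } }).minify(source);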

How to rebase relative image URLs?

clean-css will handle it automatically for you in the following cases (see the sketch after this list):

  • when full paths to input files are passed in as options;
  • when correct paths are passed in via a hash;
  • when rebaseTo is used with any of the above two.
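
For example, a minimal sketch of the rebaseTo case (both paths here are hypothetical):

// Rebase `url(...)` values so they are correct relative to the output location.
var output = new CleanCSS({
  rebaseTo: 'build/css' // note: setting `rebaseTo` implies `rebase: true`
}).minify(['styles/main.css']); // full path passed in as input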

How to work with source maps?

To generate a source map, use the sourceMap: true option, e.g.:

new CleanCSS({ sourceMap: true, rebaseTo: pathToOutputDirectory })
  .minify(source, function (error, output) {
    // access output.sourceMap for SourceMapGenerator object
    // see https://github.com/mozilla/source-map/#sourcemapgenerator for more details
});

You can also pass an input source map directly as a 2nd argument to the minify method:

new CleanCSS({ sourceMap: true, rebaseTo: pathToOutputDirectory })
  .minify(source, inputSourceMap, function (error, output) {
    // access output.sourceMap to access SourceMapGenerator object
    // see https://github.com/mozilla/source-map/#sourcemapgenerator for more details
});

or even multiple input source maps at once:

new CleanCSS({ sourceMap: true, rebaseTo: pathToOutputDirectory }).minify({
  'path/to/source/1': {
    styles: '...styles...',
    sourceMap: '...source-map...'
  },
  'path/to/source/2': {
    styles: '...styles...',
    sourceMap: '...source-map...'
  }
}, function (error, output) {
  // access output.sourceMap as above
});

How to apply level 1 & 2 optimizations at the same time?

Using the hash configuration specifying both optimization levels, e.g.

new CleanCSS({
  level: {
    1: {
      all: true,
      normalizeUrls: false
    },
    2: {
      restructureRules: true
    }
  }
})

will apply level 1 optimizations, except url normalization, and default level 2 optimizations with rule restructuring.

What do level 2 optimizations do?

All level 2 optimizations are dispatched here, and this is what they do:

  • recursivelyOptimizeBlocks - does all the following operations on a nested block, like @media or @keyframes;
  • recursivelyOptimizeProperties - optimizes properties in rulesets and flat at-rules, like @font-face, by splitting them into components (e.g. margin into margin-(bottom|left|right|top)), optimizing, and restoring them back. You may want to use mergeIntoShorthands option to control whether you want to turn multiple components into shorthands;
  • removeDuplicates - gets rid of duplicate rulesets with exactly the same set of properties, e.g. when including a Sass / Less partial twice for no good reason;
  • mergeAdjacent - merges adjacent rulesets with the same selector or rules;
  • reduceNonAdjacent - identifies which properties are overridden in same-selector non-adjacent rulesets, and removes them;
  • mergeNonAdjacentBySelector - identifies same-selector non-adjacent rulesets which can be moved (!) to be merged, requires all intermediate rulesets to not redefine the moved properties, or if redefined to have the same value;
  • mergeNonAdjacentByBody - same as the one above but for same-body non-adjacent rulesets;
  • restructure - tries to reorganize different-selector different-rules rulesets so they take less space, e.g. .one{padding:0}.two{margin:0}.one{margin-bottom:3px} into .two{margin:0}.one{padding:0;margin-bottom:3px};
  • removeDuplicateFontAtRules - removes duplicated @font-face rules;
  • removeDuplicateMediaQueries - removes duplicated @media nested blocks;
  • mergeMediaQueries - merges non-adjacent @media at-rules by the same rules as mergeNonAdjacentBy* above.

What are errors and warnings?

If clean-css encounters invalid CSS, it will try to remove the invalid part and continue optimizing the rest of the code. It will make you aware of the problem by generating an error or warning. Although clean-css can work with invalid CSS, it is always recommended that you fix warnings and errors in your CSS.

Example: Minify invalid CSS, resulting in two warnings:

const CleanCSS = require("clean-css");

const output = new CleanCSS().minify(`

  a {
    -notarealproperty-: 5px;
    color:
  }
  div {
    margin: 5px
  }

`);

console.log(output);

// Log:
{
  styles: 'div{margin:5px}',
  stats: {
    efficiency: 0.8695652173913043,
    minifiedSize: 15,
    originalSize: 115,
    timeSpent: 1
  },
  errors: [],
  inlinedStylesheets: [],
  warnings: [
    "Invalid property name '-notarealproperty-' at 4:8. Ignoring.",
    "Empty property 'color' at 5:8. Ignoring."
  ]
}

Example: Minify invalid CSS, resulting in one error:

const CleanCSS = require("clean-css");

const output = new CleanCSS().minify(`

  @import "idontexist.css";
  a {
    color: blue;
  }
  div {
    margin: 5px
  }

`);

console.log(output);

// Log:
{
  styles: 'a{color:#00f}div{margin:5px}',
  stats: {
    efficiency: 0.7627118644067796,
    minifiedSize: 28,
    originalSize: 118,
    timeSpent: 2
  },
  errors: [
    'Ignoring local @import of "idontexist.css" as resource is missing.'
  ],
  inlinedStylesheets: [],
  warnings: []
}

Clean-css for Gulp

An example of how you can include clean-css in Gulp:

const { src, dest, series } = require('gulp');
const CleanCSS = require('clean-css');
const concat = require('gulp-concat');

function css() {
    const options = {
        compatibility: '*', // (default) - Internet Explorer 10+ compatibility mode
        inline: ['all'], // enables all inlining, same as ['local', 'remote']
        level: 2 // optimization level: can be 0, 1 (default), or 2
        // Level 1 optimizations are generally safe, while level 2 should be safe for most users.
    };

    return src('app/**/*.css')
        .pipe(concat('style.min.css'))
        .on('data', function(file) {
            const bufferFile = new CleanCSS(options).minify(file.contents)
            return file.contents = Buffer.from(bufferFile.styles)
        })
        .pipe(dest('build'))
}
exports.css = series(css)

How to use clean-css with build tools?

There are a number of 3rd-party plugins for popular build tools:

How to use clean-css from a web browser?

Contributing

See CONTRIBUTING.md.

How to get started?

First clone the sources:

git clone git@github.com:clean-css/clean-css.git

then install dependencies:

cd clean-css
npm install

then use any of the following commands to verify your copy:

npm run bench # for clean-css benchmarks (see test/bench.js for details)
npm run browserify # to create the browser-ready clean-css version
npm run check # to lint JS sources with JSHint
npm test # to run all tests

Acknowledgments

Sorted alphabetically by GitHub handle:

  • @abarre (Anthony Barre) for improvements to @import processing;
  • @alexlamsl (Alex Lam S.L.) for testing early clean-css 4 versions, reporting bugs, and suggesting numerous improvements.
  • @altschuler (Simon Altschuler) for fixing @import processing inside comments;
  • @ben-eb (Ben Briggs) for sharing ideas about CSS optimizations;
  • @davisjam (Jamie Davis) for disclosing ReDOS vulnerabilities;
  • @facelessuser (Isaac) for pointing out a flaw in clean-css' stateless mode;
  • @grandrath (Martin Grandrath) for improving minify method source traversal in ES6;
  • @jmalonzo (Jan Michael Alonzo) for a patch removing node.js' old sys package;
  • @lukeapage (Luke Page) for suggestions and testing the source maps feature; Plus everyone else involved in #125 for pushing it forward;
  • @madwizard-thomas for sharing ideas about @import inlining and URL rebasing.
  • @ngyikp (Ng Yik Phang) for testing early clean-css 4 versions, reporting bugs, and suggesting numerous improvements.
  • @venemo (Timur Kristóf) for an outstanding contribution of advanced property optimizer for 2.2 release;
  • @vvo (Vincent Voyer) for a patch with better empty element regex and for inspiring us to do many performance improvements in 0.4 release;
  • @wagenet (Peter Wagenet) for suggesting improvements to @import inlining behavior;
  • @xhmikosr for suggesting new features, like option to remove special comments and strip out URLs quotation, and pointing out numerous improvements like JSHint, media queries, etc.

Author: Clean-css
Source Code: https://github.com/clean-css/clean-css 
License: MIT License

#node #css #clean 

Clean-css: Fast and Efficient CSS Optimizer for Node.js and The Web
Gordon Taylor

Clean Code JavaScript: Clean Code Concepts Adapted for JavaScript

clean-code-javascript

Table of Contents

  1. Introduction
  2. Variables
  3. Functions
  4. Objects and Data Structures
  5. Classes
  6. SOLID
  7. Testing
  8. Concurrency
  9. Error Handling
  10. Formatting
  11. Comments
  12. Translation

Introduction

(Image: a humorous estimation of software quality as a count of how many expletives you shout when reading code)

Software engineering principles, from Robert C. Martin's book Clean Code, adapted for JavaScript. This is not a style guide. It's a guide to producing readable, reusable, and refactorable software in JavaScript.

Not every principle herein has to be strictly followed, and even fewer will be universally agreed upon. These are guidelines and nothing more, but they are ones codified over many years of collective experience by the authors of Clean Code.

Our craft of software engineering is just a bit over 50 years old, and we are still learning a lot. When software architecture is as old as architecture itself, maybe then we will have harder rules to follow. For now, let these guidelines serve as a touchstone by which to assess the quality of the JavaScript code that you and your team produce.

One more thing: knowing these won't immediately make you a better software developer, and working with them for many years doesn't mean you won't make mistakes. Every piece of code starts as a first draft, like wet clay getting shaped into its final form. Finally, we chisel away the imperfections when we review it with our peers. Don't beat yourself up for first drafts that need improvement. Beat up the code instead!

Variables

Use meaningful and pronounceable variable names

Bad:

const yyyymmdstr = moment().format("YYYY/MM/DD");

Good:

const currentDate = moment().format("YYYY/MM/DD");

⬆ back to top

Use the same vocabulary for the same type of variable

Bad:

getUserInfo();
getClientData();
getCustomerRecord();

Good:

getUser();

⬆ back to top

Use searchable names

We will read more code than we will ever write. It's important that the code we do write is readable and searchable. By not naming variables that end up being meaningful for understanding our program, we hurt our readers. Make your names searchable. Tools like buddy.js and ESLint can help identify unnamed constants.

Bad:

// What the heck is 86400000 for?
setTimeout(blastOff, 86400000);

Good:

// Declare them as capitalized named constants.
const MILLISECONDS_PER_DAY = 60 * 60 * 24 * 1000; //86400000;

setTimeout(blastOff, MILLISECONDS_PER_DAY);

⬆ back to top

Use explanatory variables

Bad:

const address = "One Infinite Loop, Cupertino 95014";
const cityZipCodeRegex = /^[^,\\]+[,\\\s]+(.+?)\s*(\d{5})?$/;
saveCityZipCode(
  address.match(cityZipCodeRegex)[1],
  address.match(cityZipCodeRegex)[2]
);

Good:

const address = "One Infinite Loop, Cupertino 95014";
const cityZipCodeRegex = /^[^,\\]+[,\\\s]+(.+?)\s*(\d{5})?$/;
const [_, city, zipCode] = address.match(cityZipCodeRegex) || [];
saveCityZipCode(city, zipCode);

⬆ back to top

Avoid Mental Mapping

Explicit is better than implicit.

Bad:

const locations = ["Austin", "New York", "San Francisco"];
locations.forEach(l => {
  doStuff();
  doSomeOtherStuff();
  // ...
  // ...
  // ...
  // Wait, what is `l` for again?
  dispatch(l);
});

Good:

const locations = ["Austin", "New York", "San Francisco"];
locations.forEach(location => {
  doStuff();
  doSomeOtherStuff();
  // ...
  // ...
  // ...
  dispatch(location);
});

⬆ back to top

Don't add unneeded context

If your class/object name tells you something, don't repeat that in your variable name.

Bad:

const Car = {
  carMake: "Honda",
  carModel: "Accord",
  carColor: "Blue"
};

function paintCar(car, color) {
  car.carColor = color;
}

Good:

const Car = {
  make: "Honda",
  model: "Accord",
  color: "Blue"
};

function paintCar(car, color) {
  car.color = color;
}

⬆ back to top

Use default arguments instead of short circuiting or conditionals

Default arguments are often cleaner than short circuiting. Be aware that if you use them, your function will only provide default values for undefined arguments. Other "falsy" values such as '', "", false, null, 0, and NaN will not be replaced by a default value.

Bad:

function createMicrobrewery(name) {
  const breweryName = name || "Hipster Brew Co.";
  // ...
}

Good:

function createMicrobrewery(name = "Hipster Brew Co.") {
  // ...
}
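
To see the caveat about falsy values in action, here is a minimal runnable sketch (the greet function is made up for illustration):

function greet(name = "friend") {
  return `Hello, ${name}`;
}

console.log(greet());     // 'Hello, friend' (undefined, so the default applies)
console.log(greet(null)); // 'Hello, null' (falsy but defined, so the default does NOT apply)
console.log(greet(""));   // 'Hello, ' (same story for an empty string)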

⬆ back to top

Functions

Function arguments (2 or fewer ideally)

Limiting the number of function parameters is incredibly important because it makes testing your function easier. Having more than three leads to a combinatorial explosion where you have to test tons of different cases with each separate argument.

One or two arguments is the ideal case, and three should be avoided if possible. Anything more than that should be consolidated. Usually, if you have more than two arguments then your function is trying to do too much. In cases where it's not, most of the time a higher-level object will suffice as an argument.

Since JavaScript allows you to make objects on the fly, without a lot of class boilerplate, you can use an object if you are finding yourself needing a lot of arguments.

To make it obvious what properties the function expects, you can use the ES2015/ES6 destructuring syntax. This has a few advantages:

  1. When someone looks at the function signature, it's immediately clear what properties are being used.
  2. It can be used to simulate named parameters.
  3. Destructuring also clones the specified primitive values of the argument object passed into the function. This can help prevent side effects. Note: objects and arrays that are destructured from the argument object are NOT cloned (see the sketch after this list).
  4. Linters can warn you about unused properties, which would be impossible without destructuring.

Bad:

function createMenu(title, body, buttonText, cancellable) {
  // ...
}

createMenu("Foo", "Bar", "Baz", true);

Good:

function createMenu({ title, body, buttonText, cancellable }) {
  // ...
}

createMenu({
  title: "Foo",
  body: "Bar",
  buttonText: "Baz",
  cancellable: true
});

⬆ back to top

Functions should do one thing

This is by far the most important rule in software engineering. When functions do more than one thing, they are harder to compose, test, and reason about. When you can isolate a function to just one action, it can be refactored easily and your code will read much cleaner. If you take nothing else away from this guide other than this, you'll be ahead of many developers.

Bad:

function emailClients(clients) {
  clients.forEach(client => {
    const clientRecord = database.lookup(client);
    if (clientRecord.isActive()) {
      email(client);
    }
  });
}

Good:

function emailActiveClients(clients) {
  clients.filter(isActiveClient).forEach(email);
}

function isActiveClient(client) {
  const clientRecord = database.lookup(client);
  return clientRecord.isActive();
}

⬆ back to top

Function names should say what they do

Bad:

function addToDate(date, month) {
  // ...
}

const date = new Date();

// It's hard to tell from the function name what is added
addToDate(date, 1);

Good:

function addMonthToDate(month, date) {
  // ...
}

const date = new Date();
addMonthToDate(1, date);

⬆ back to top

Functions should only be one level of abstraction

When you have more than one level of abstraction your function is usually doing too much. Splitting up functions leads to reusability and easier testing.

Bad:

function parseBetterJSAlternative(code) {
  const REGEXES = [
    // ...
  ];

  const statements = code.split(" ");
  const tokens = [];
  REGEXES.forEach(REGEX => {
    statements.forEach(statement => {
      // ...
    });
  });

  const ast = [];
  tokens.forEach(token => {
    // lex...
  });

  ast.forEach(node => {
    // parse...
  });
}

Good:

function parseBetterJSAlternative(code) {
  const tokens = tokenize(code);
  const syntaxTree = parse(tokens);
  syntaxTree.forEach(node => {
    // parse...
  });
}

function tokenize(code) {
  const REGEXES = [
    // ...
  ];

  const statements = code.split(" ");
  const tokens = [];
  REGEXES.forEach(REGEX => {
    statements.forEach(statement => {
      tokens.push(/* ... */);
    });
  });

  return tokens;
}

function parse(tokens) {
  const syntaxTree = [];
  tokens.forEach(token => {
    syntaxTree.push(/* ... */);
  });

  return syntaxTree;
}

⬆ back to top

Remove duplicate code

Do your absolute best to avoid duplicate code. Duplicate code is bad because it means that there's more than one place to alter something if you need to change some logic.

Imagine if you run a restaurant and you keep track of your inventory: all your tomatoes, onions, garlic, spices, etc. If you have multiple lists that you keep this on, then all have to be updated when you serve a dish with tomatoes in them. If you only have one list, there's only one place to update!

Oftentimes you have duplicate code because you have two or more slightly different things, that share a lot in common, but their differences force you to have two or more separate functions that do much of the same things. Removing duplicate code means creating an abstraction that can handle this set of different things with just one function/module/class.

Getting the abstraction right is critical, that's why you should follow the SOLID principles laid out in the Classes section. Bad abstractions can be worse than duplicate code, so be careful! Having said this, if you can make a good abstraction, do it! Don't repeat yourself, otherwise you'll find yourself updating multiple places anytime you want to change one thing.

Bad:

function showDeveloperList(developers) {
  developers.forEach(developer => {
    const expectedSalary = developer.calculateExpectedSalary();
    const experience = developer.getExperience();
    const githubLink = developer.getGithubLink();
    const data = {
      expectedSalary,
      experience,
      githubLink
    };

    render(data);
  });
}

function showManagerList(managers) {
  managers.forEach(manager => {
    const expectedSalary = manager.calculateExpectedSalary();
    const experience = manager.getExperience();
    const portfolio = manager.getMBAProjects();
    const data = {
      expectedSalary,
      experience,
      portfolio
    };

    render(data);
  });
}

Good:

function showEmployeeList(employees) {
  employees.forEach(employee => {
    const expectedSalary = employee.calculateExpectedSalary();
    const experience = employee.getExperience();

    const data = {
      expectedSalary,
      experience
    };

    switch (employee.type) {
      case "manager":
        data.portfolio = employee.getMBAProjects();
        break;
      case "developer":
        data.githubLink = employee.getGithubLink();
        break;
    }

    render(data);
  });
}

⬆ back to top

Set default objects with Object.assign

Bad:

const menuConfig = {
  title: null,
  body: "Bar",
  buttonText: null,
  cancellable: true
};

function createMenu(config) {
  config.title = config.title || "Foo";
  config.body = config.body || "Bar";
  config.buttonText = config.buttonText || "Baz";
  config.cancellable =
    config.cancellable !== undefined ? config.cancellable : true;
}

createMenu(menuConfig);

Good:

const menuConfig = {
  title: "Order",
  // User did not include 'body' key
  buttonText: "Send",
  cancellable: true
};

function createMenu(config) {
  let finalConfig = Object.assign(
    {
      title: "Foo",
      body: "Bar",
      buttonText: "Baz",
      cancellable: true
    },
    config
  );
  return finalConfig;
  // finalConfig now equals: {title: "Order", body: "Bar", buttonText: "Send", cancellable: true}
  // ...
}

createMenu(menuConfig);

⬆ back to top

Don't use flags as function parameters

Flags tell your user that this function does more than one thing. Functions should do one thing. Split out your functions if they are following different code paths based on a boolean.

Bad:

function createFile(name, temp) {
  if (temp) {
    fs.create(`./temp/${name}`);
  } else {
    fs.create(name);
  }
}

Good:

function createFile(name) {
  fs.create(name);
}

function createTempFile(name) {
  createFile(`./temp/${name}`);
}

⬆ back to top

Avoid Side Effects (part 1)

A function produces a side effect if it does anything other than take a value in and return another value or values. A side effect could be writing to a file, modifying some global variable, or accidentally wiring all your money to a stranger.

Now, you do need to have side effects in a program on occasion. Like the previous example, you might need to write to a file. What you want to do is to centralize where you are doing this. Don't have several functions and classes that write to a particular file. Have one service that does it. One and only one.

The main point is to avoid common pitfalls like sharing state between objects without any structure, using mutable data types that can be written to by anything, and not centralizing where your side effects occur. If you can do this, you will be happier than the vast majority of other programmers.

Bad:

// Global variable referenced by following function.
// If we had another function that used this name, now it'd be an array and it could break it.
let name = "Ryan McDermott";

function splitIntoFirstAndLastName() {
  name = name.split(" ");
}

splitIntoFirstAndLastName();

console.log(name); // ['Ryan', 'McDermott'];

Good:

function splitIntoFirstAndLastName(name) {
  return name.split(" ");
}

const name = "Ryan McDermott";
const newName = splitIntoFirstAndLastName(name);

console.log(name); // 'Ryan McDermott';
console.log(newName); // ['Ryan', 'McDermott'];

⬆ back to top

Avoid Side Effects (part 2)

In JavaScript, some values are unchangeable (immutable) and some are changeable (mutable). Objects and arrays are two kinds of mutable values so it's important to handle them carefully when they're passed as parameters to a function. A JavaScript function can change an object's properties or alter the contents of an array which could easily cause bugs elsewhere.

Suppose there's a function that accepts an array parameter representing a shopping cart. If the function makes a change in that shopping cart array - by adding an item to purchase, for example - then any other function that uses that same cart array will be affected by this addition. That may be great, however it could also be bad. Let's imagine a bad situation:

The user clicks the "Purchase" button which calls a purchase function that spawns a network request and sends the cart array to the server. Because of a bad network connection, the purchase function has to keep retrying the request. Now, what if in the meantime the user accidentally clicks an "Add to Cart" button on an item they don't actually want before the network request begins? If that happens and the network request begins, then that purchase function will send the accidentally added item because the cart array was modified.

A great solution would be for the addItemToCart function to always clone the cart, edit it, and return the clone. This would ensure that functions that are still using the old shopping cart wouldn't be affected by the changes.

Two caveats to mention to this approach:

There might be cases where you actually want to modify the input object, but when you adopt this programming practice you will find that those cases are pretty rare. Most things can be refactored to have no side effects!

Cloning big objects can be very expensive in terms of performance. Luckily, this isn't a big issue in practice because there are great libraries that allow this kind of programming approach to be fast and not as memory intensive as it would be for you to manually clone objects and arrays.

Bad:

const addItemToCart = (cart, item) => {
  cart.push({ item, date: Date.now() });
};

Good:

const addItemToCart = (cart, item) => {
  return [...cart, { item, date: Date.now() }];
};

⬆ back to top

Don't write to global functions

Polluting globals is a bad practice in JavaScript because you could clash with another library and the user of your API would be none-the-wiser until they get an exception in production. Let's think about an example: what if you wanted to extend JavaScript's native Array method to have a diff method that could show the difference between two arrays? You could write your new function to the Array.prototype, but it could clash with another library that tried to do the same thing. What if that other library was just using diff to find the difference between the first and last elements of an array? This is why it would be much better to just use ES2015/ES6 classes and simply extend the Array global.

Bad:

Array.prototype.diff = function diff(comparisonArray) {
  const hash = new Set(comparisonArray);
  return this.filter(elem => !hash.has(elem));
};

Good:

class SuperArray extends Array {
  diff(comparisonArray) {
    const hash = new Set(comparisonArray);
    return this.filter(elem => !hash.has(elem));
  }
}

⬆ back to top

Favor functional programming over imperative programming

JavaScript isn't a functional language in the way that Haskell is, but it has a functional flavor to it. Functional languages can be cleaner and easier to test. Favor this style of programming when you can.

Bad:

const programmerOutput = [
  {
    name: "Uncle Bobby",
    linesOfCode: 500
  },
  {
    name: "Suzie Q",
    linesOfCode: 1500
  },
  {
    name: "Jimmy Gosling",
    linesOfCode: 150
  },
  {
    name: "Gracie Hopper",
    linesOfCode: 1000
  }
];

let totalOutput = 0;

for (let i = 0; i < programmerOutput.length; i++) {
  totalOutput += programmerOutput[i].linesOfCode;
}

Good:

const programmerOutput = [
  {
    name: "Uncle Bobby",
    linesOfCode: 500
  },
  {
    name: "Suzie Q",
    linesOfCode: 1500
  },
  {
    name: "Jimmy Gosling",
    linesOfCode: 150
  },
  {
    name: "Gracie Hopper",
    linesOfCode: 1000
  }
];

const totalOutput = programmerOutput.reduce(
  (totalLines, output) => totalLines + output.linesOfCode,
  0
);

⬆ back to top

Encapsulate conditionals

Bad:

if (fsm.state === "fetching" && isEmpty(listNode)) {
  // ...
}

Good:

function shouldShowSpinner(fsm, listNode) {
  return fsm.state === "fetching" && isEmpty(listNode);
}

if (shouldShowSpinner(fsmInstance, listNodeInstance)) {
  // ...
}

⬆ back to top

Avoid negative conditionals

Bad:

function isDOMNodeNotPresent(node) {
  // ...
}

if (!isDOMNodeNotPresent(node)) {
  // ...
}

Good:

function isDOMNodePresent(node) {
  // ...
}

if (isDOMNodePresent(node)) {
  // ...
}

⬆ back to top

Avoid conditionals

This seems like an impossible task. Upon first hearing this, most people say, "how am I supposed to do anything without an if statement?" The answer is that you can use polymorphism to achieve the same task in many cases. The second question is usually, "well that's great but why would I want to do that?" The answer is a previous clean code concept we learned: a function should only do one thing. When you have classes and functions that have if statements, you are telling your user that your function does more than one thing. Remember, just do one thing.

Bad:

class Airplane {
  // ...
  getCruisingAltitude() {
    switch (this.type) {
      case "777":
        return this.getMaxAltitude() - this.getPassengerCount();
      case "Air Force One":
        return this.getMaxAltitude();
      case "Cessna":
        return this.getMaxAltitude() - this.getFuelExpenditure();
    }
  }
}

Good:

class Airplane {
  // ...
}

class Boeing777 extends Airplane {
  // ...
  getCruisingAltitude() {
    return this.getMaxAltitude() - this.getPassengerCount();
  }
}

class AirForceOne extends Airplane {
  // ...
  getCruisingAltitude() {
    return this.getMaxAltitude();
  }
}

class Cessna extends Airplane {
  // ...
  getCruisingAltitude() {
    return this.getMaxAltitude() - this.getFuelExpenditure();
  }
}

⬆ back to top

Avoid type-checking (part 1)

JavaScript is untyped, which means your functions can take any type of argument. Sometimes you are bitten by this freedom and it becomes tempting to do type-checking in your functions. There are many ways to avoid having to do this. The first thing to consider is consistent APIs.

Bad:

function travelToTexas(vehicle) {
  if (vehicle instanceof Bicycle) {
    vehicle.pedal(this.currentLocation, new Location("texas"));
  } else if (vehicle instanceof Car) {
    vehicle.drive(this.currentLocation, new Location("texas"));
  }
}

Good:

function travelToTexas(vehicle) {
  vehicle.move(this.currentLocation, new Location("texas"));
}

⬆ back to top

Avoid type-checking (part 2)

If you are working with basic primitive values like strings and integers, and you can't use polymorphism but you still feel the need to type-check, you should consider using TypeScript. It is an excellent alternative to normal JavaScript, as it provides you with static typing on top of standard JavaScript syntax. The problem with manually type-checking normal JavaScript is that doing it well requires so much extra verbiage that the faux "type-safety" you get doesn't make up for the lost readability. Keep your JavaScript clean, write good tests, and have good code reviews. Otherwise, do all of that but with TypeScript (which, like I said, is a great alternative!).

Bad:

function combine(val1, val2) {
  if (
    (typeof val1 === "number" && typeof val2 === "number") ||
    (typeof val1 === "string" && typeof val2 === "string")
  ) {
    return val1 + val2;
  }

  throw new Error("Must be of type String or Number");
}

Good:

function combine(val1, val2) {
  return val1 + val2;
}
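
If you do adopt TypeScript, the same function with static types might look like this minimal sketch (an addition beyond the original example):

function combine(val1: number, val2: number): number {
  return val1 + val2;
}

combine(1, 2); // 3
// combine(1, "2"); // rejected at compile time, replacing the manual runtime check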

⬆ back to top

Don't over-optimize

Modern browsers do a lot of optimization under-the-hood at runtime. A lot of the time, if you are optimizing then you are just wasting your time. There are good resources for seeing where optimization is actually lacking; target those in the meantime, until they are fixed (if they can be).

Bad:

// On old browsers, each iteration with uncached `list.length` would be costly
// because of `list.length` recomputation. In modern browsers, this is optimized.
for (let i = 0, len = list.length; i < len; i++) {
  // ...
}

Good:

for (let i = 0; i < list.length; i++) {
  // ...
}

⬆ back to top

Remove dead code

Dead code is just as bad as duplicate code. There's no reason to keep it in your codebase. If it's not being called, get rid of it! It will still be safe in your version history if you still need it.

Bad:

function oldRequestModule(url) {
  // ...
}

function newRequestModule(url) {
  // ...
}

const req = newRequestModule;
inventoryTracker("apples", req, "www.inventory-awesome.io");

Good:

function newRequestModule(url) {
  // ...
}

const req = newRequestModule;
inventoryTracker("apples", req, "www.inventory-awesome.io");

⬆ back to top

Objects and Data Structures

Use getters and setters

Using getters and setters to access data on objects could be better than simply looking for a property on an object. "Why?" you might ask. Well, here's an unorganized list of reasons why:

  • When you want to do more beyond getting an object property, you don't have to look up and change every accessor in your codebase.
  • Makes adding validation simple when doing a set.
  • Encapsulates the internal representation.
  • Easy to add logging and error handling when getting and setting.
  • You can lazy load your object's properties, for example fetching them from a server.

Bad:

function makeBankAccount() {
  // ...

  return {
    balance: 0
    // ...
  };
}

const account = makeBankAccount();
account.balance = 100;

Good:

function makeBankAccount() {
  // this one is private
  let balance = 0;

  // a "getter", made public via the returned object below
  function getBalance() {
    return balance;
  }

  // a "setter", made public via the returned object below
  function setBalance(amount) {
    // ... validate before updating the balance
    balance = amount;
  }

  return {
    // ...
    getBalance,
    setBalance
  };
}

const account = makeBankAccount();
account.setBalance(100);

⬆ back to top

Make objects have private members

This can be accomplished through closures (for ES5 and below).

Bad:

const Employee = function(name) {
  this.name = name;
};

Employee.prototype.getName = function getName() {
  return this.name;
};

const employee = new Employee("John Doe");
console.log(`Employee name: ${employee.getName()}`); // Employee name: John Doe
delete employee.name;
console.log(`Employee name: ${employee.getName()}`); // Employee name: undefined

Good:

function makeEmployee(name) {
  return {
    getName() {
      return name;
    }
  };
}

const employee = makeEmployee("John Doe");
console.log(`Employee name: ${employee.getName()}`); // Employee name: John Doe
delete employee.name;
console.log(`Employee name: ${employee.getName()}`); // Employee name: John Doe
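
As a modern alternative (an addition beyond the original text, assuming an ES2022-capable environment), native private class fields give the same guarantee without a closure:

class Employee {
  // The # prefix makes the field inaccessible outside the class
  #name;

  constructor(name) {
    this.#name = name;
  }

  getName() {
    return this.#name;
  }
}

const employee = new Employee("John Doe");
console.log(`Employee name: ${employee.getName()}`); // Employee name: John Doe
// There is no public `name` property to delete, so the getter keeps working.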

⬆ back to top

Classes

Prefer ES2015/ES6 classes over ES5 plain functions

It's very difficult to get readable class inheritance, construction, and method definitions for classical ES5 classes. If you need inheritance (and be aware that you might not), then prefer ES2015/ES6 classes. However, prefer small functions over classes until you find yourself needing larger and more complex objects.

Bad:

const Animal = function(age) {
  if (!(this instanceof Animal)) {
    throw new Error("Instantiate Animal with `new`");
  }

  this.age = age;
};

Animal.prototype.move = function move() {};

const Mammal = function(age, furColor) {
  if (!(this instanceof Mammal)) {
    throw new Error("Instantiate Mammal with `new`");
  }

  Animal.call(this, age);
  this.furColor = furColor;
};

Mammal.prototype = Object.create(Animal.prototype);
Mammal.prototype.constructor = Mammal;
Mammal.prototype.liveBirth = function liveBirth() {};

const Human = function(age, furColor, languageSpoken) {
  if (!(this instanceof Human)) {
    throw new Error("Instantiate Human with `new`");
  }

  Mammal.call(this, age, furColor);
  this.languageSpoken = languageSpoken;
};

Human.prototype = Object.create(Mammal.prototype);
Human.prototype.constructor = Human;
Human.prototype.speak = function speak() {};

Good:

class Animal {
  constructor(age) {
    this.age = age;
  }

  move() {
    /* ... */
  }
}

class Mammal extends Animal {
  constructor(age, furColor) {
    super(age);
    this.furColor = furColor;
  }

  liveBirth() {
    /* ... */
  }
}

class Human extends Mammal {
  constructor(age, furColor, languageSpoken) {
    super(age, furColor);
    this.languageSpoken = languageSpoken;
  }

  speak() {
    /* ... */
  }
}

⬆ back to top

Use method chaining

This pattern is very useful in JavaScript and you see it in many libraries such as jQuery and Lodash. It allows your code to be expressive and less verbose. For that reason, I say, use method chaining and take a look at how clean your code will be. In your class methods, simply return this at the end of every one, and you can chain further class methods onto it.

Bad:

class Car {
  constructor(make, model, color) {
    this.make = make;
    this.model = model;
    this.color = color;
  }

  setMake(make) {
    this.make = make;
  }

  setModel(model) {
    this.model = model;
  }

  setColor(color) {
    this.color = color;
  }

  save() {
    console.log(this.make, this.model, this.color);
  }
}

const car = new Car("Ford", "F-150", "red");
car.setColor("pink");
car.save();

Good:

class Car {
  constructor(make, model, color) {
    this.make = make;
    this.model = model;
    this.color = color;
  }

  setMake(make) {
    this.make = make;
    // NOTE: Returning this for chaining
    return this;
  }

  setModel(model) {
    this.model = model;
    // NOTE: Returning this for chaining
    return this;
  }

  setColor(color) {
    this.color = color;
    // NOTE: Returning this for chaining
    return this;
  }

  save() {
    console.log(this.make, this.model, this.color);
    // NOTE: Returning this for chaining
    return this;
  }
}

const car = new Car("Ford", "F-150", "red").setColor("pink").save();

⬆ back to top

Prefer composition over inheritance

As stated famously in Design Patterns by the Gang of Four, you should prefer composition over inheritance where you can. There are lots of good reasons to use inheritance and lots of good reasons to use composition. The main point for this maxim is that if your mind instinctively goes for inheritance, try to think if composition could model your problem better. In some cases it can.

You might be wondering then, "when should I use inheritance?" It depends on your problem at hand, but this is a decent list of when inheritance makes more sense than composition:

  1. Your inheritance represents an "is-a" relationship and not a "has-a" relationship (Human->Animal vs. User->UserDetails).
  2. You can reuse code from the base classes (Humans can move like all animals).
  3. You want to make global changes to derived classes by changing a base class. (Change the caloric expenditure of all animals when they move).

Bad:

class Employee {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }

  // ...
}

// Bad because Employees "have" tax data. EmployeeTaxData is not a type of Employee
class EmployeeTaxData extends Employee {
  constructor(ssn, salary) {
    super();
    this.ssn = ssn;
    this.salary = salary;
  }

  // ...
}

Good:

class EmployeeTaxData {
  constructor(ssn, salary) {
    this.ssn = ssn;
    this.salary = salary;
  }

  // ...
}

class Employee {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }

  setTaxData(ssn, salary) {
    this.taxData = new EmployeeTaxData(ssn, salary);
  }
  // ...
}

⬆ back to top

SOLID

Single Responsibility Principle (SRP)

As stated in Clean Code, "There should never be more than one reason for a class to change". It's tempting to jam-pack a class with a lot of functionality, like when you can only take one suitcase on your flight. The issue with this is that your class won't be conceptually cohesive and it will have many reasons to change. Minimizing the number of times you need to change a class is important. It's important because if too much functionality is in one class and you modify a piece of it, it can be difficult to understand how that will affect other dependent modules in your codebase.

Bad:

class UserSettings {
  constructor(user) {
    this.user = user;
  }

  changeSettings(settings) {
    if (this.verifyCredentials()) {
      // ...
    }
  }

  verifyCredentials() {
    // ...
  }
}

Good:

class UserAuth {
  constructor(user) {
    this.user = user;
  }

  verifyCredentials() {
    // ...
  }
}

class UserSettings {
  constructor(user) {
    this.user = user;
    this.auth = new UserAuth(user);
  }

  changeSettings(settings) {
    if (this.auth.verifyCredentials()) {
      // ...
    }
  }
}

⬆ back to top

Open/Closed Principle (OCP)

As stated by Bertrand Meyer, "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification." What does that mean though? This principle basically states that you should allow users to add new functionalities without changing existing code.

Bad:

class AjaxAdapter extends Adapter {
  constructor() {
    super();
    this.name = "ajaxAdapter";
  }
}

class NodeAdapter extends Adapter {
  constructor() {
    super();
    this.name = "nodeAdapter";
  }
}

class HttpRequester {
  constructor(adapter) {
    this.adapter = adapter;
  }

  fetch(url) {
    if (this.adapter.name === "ajaxAdapter") {
      return makeAjaxCall(url).then(response => {
        // transform response and return
      });
    } else if (this.adapter.name === "nodeAdapter") {
      return makeHttpCall(url).then(response => {
        // transform response and return
      });
    }
  }
}

function makeAjaxCall(url) {
  // request and return promise
}

function makeHttpCall(url) {
  // request and return promise
}

Good:

class AjaxAdapter extends Adapter {
  constructor() {
    super();
    this.name = "ajaxAdapter";
  }

  request(url) {
    // request and return promise
  }
}

class NodeAdapter extends Adapter {
  constructor() {
    super();
    this.name = "nodeAdapter";
  }

  request(url) {
    // request and return promise
  }
}

class HttpRequester {
  constructor(adapter) {
    this.adapter = adapter;
  }

  fetch(url) {
    return this.adapter.request(url).then(response => {
      // transform response and return
    });
  }
}

⬆ back to top

Liskov Substitution Principle (LSP)

This is a scary term for a very simple concept. It's formally defined as "If S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may substitute objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.)." That's an even scarier definition.

The best explanation for this is if you have a parent class and a child class, then the base class and child class can be used interchangeably without getting incorrect results. This might still be confusing, so let's take a look at the classic Square-Rectangle example. Mathematically, a square is a rectangle, but if you model it using the "is-a" relationship via inheritance, you quickly get into trouble.

Bad:

class Rectangle {
  constructor() {
    this.width = 0;
    this.height = 0;
  }

  setColor(color) {
    // ...
  }

  render(area) {
    // ...
  }

  setWidth(width) {
    this.width = width;
  }

  setHeight(height) {
    this.height = height;
  }

  getArea() {
    return this.width * this.height;
  }
}

class Square extends Rectangle {
  setWidth(width) {
    this.width = width;
    this.height = width;
  }

  setHeight(height) {
    this.width = height;
    this.height = height;
  }
}

function renderLargeRectangles(rectangles) {
  rectangles.forEach(rectangle => {
    rectangle.setWidth(4);
    rectangle.setHeight(5);
    const area = rectangle.getArea(); // BAD: Returns 25 for Square. Should be 20.
    rectangle.render(area);
  });
}

const rectangles = [new Rectangle(), new Rectangle(), new Square()];
renderLargeRectangles(rectangles);

Good:

class Shape {
  setColor(color) {
    // ...
  }

  render(area) {
    // ...
  }
}

class Rectangle extends Shape {
  constructor(width, height) {
    super();
    this.width = width;
    this.height = height;
  }

  getArea() {
    return this.width * this.height;
  }
}

class Square extends Shape {
  constructor(length) {
    super();
    this.length = length;
  }

  getArea() {
    return this.length * this.length;
  }
}

function renderLargeShapes(shapes) {
  shapes.forEach(shape => {
    const area = shape.getArea();
    shape.render(area);
  });
}

const shapes = [new Rectangle(4, 5), new Rectangle(4, 5), new Square(5)];
renderLargeShapes(shapes);

⬆ back to top

Interface Segregation Principle (ISP)

JavaScript doesn't have interfaces so this principle doesn't apply as strictly as others. However, it's important and relevant even with JavaScript's lack of a type system.

ISP states that "Clients should not be forced to depend upon interfaces that they do not use." Interfaces are implicit contracts in JavaScript because of duck typing.

A good example that demonstrates this principle in JavaScript is classes that require large settings objects. Not requiring clients to set up huge numbers of options is beneficial, because most of the time they won't need all of the settings. Making them optional helps prevent a "fat interface".

Bad:

class DOMTraverser {
  constructor(settings) {
    this.settings = settings;
    this.setup();
  }

  setup() {
    this.rootNode = this.settings.rootNode;
    this.settings.animationModule.setup();
  }

  traverse() {
    // ...
  }
}

const $ = new DOMTraverser({
  rootNode: document.getElementsByTagName("body"),
  animationModule() {} // Most of the time, we won't need to animate when traversing.
  // ...
});

Good:

class DOMTraverser {
  constructor(settings) {
    this.settings = settings;
    this.options = settings.options;
    this.setup();
  }

  setup() {
    this.rootNode = this.settings.rootNode;
    this.setupOptions();
  }

  setupOptions() {
    if (this.options.animationModule) {
      // ...
    }
  }

  traverse() {
    // ...
  }
}

const $ = new DOMTraverser({
  rootNode: document.getElementsByTagName("body"),
  options: {
    animationModule() {}
  }
});

⬆ back to top

Dependency Inversion Principle (DIP)

This principle states two essential things:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend upon details. Details should depend on abstractions.

This can be hard to understand at first, but if you've worked with AngularJS, you've seen an implementation of this principle in the form of Dependency Injection (DI). While they are not identical concepts, DIP keeps high-level modules from knowing the details of their low-level modules and setting them up. It can accomplish this through DI. A huge benefit of this is that it reduces the coupling between modules. Coupling is a very bad development pattern because it makes your code hard to refactor.

As stated previously, JavaScript doesn't have interfaces so the abstractions that are depended upon are implicit contracts. That is to say, the methods and properties that an object/class exposes to another object/class. In the example below, the implicit contract is that any Request module for an InventoryTracker will have a requestItems method.

Bad:

class InventoryRequester {
  constructor() {
    this.REQ_METHODS = ["HTTP"];
  }

  requestItem(item) {
    // ...
  }
}

class InventoryTracker {
  constructor(items) {
    this.items = items;

    // BAD: We have created a dependency on a specific request implementation.
    // We should just have requestItems depend on a request method: `request`
    this.requester = new InventoryRequester();
  }

  requestItems() {
    this.items.forEach(item => {
      this.requester.requestItem(item);
    });
  }
}

const inventoryTracker = new InventoryTracker(["apples", "bananas"]);
inventoryTracker.requestItems();

Good:

class InventoryTracker {
  constructor(items, requester) {
    this.items = items;
    this.requester = requester;
  }

  requestItems() {
    this.items.forEach(item => {
      this.requester.requestItem(item);
    });
  }
}

class InventoryRequesterV1 {
  constructor() {
    this.REQ_METHODS = ["HTTP"];
  }

  requestItem(item) {
    // ...
  }
}

class InventoryRequesterV2 {
  constructor() {
    this.REQ_METHODS = ["WS"];
  }

  requestItem(item) {
    // ...
  }
}

// By constructing our dependencies externally and injecting them, we can easily
// substitute our request module for a fancy new one that uses WebSockets.
const inventoryTracker = new InventoryTracker(
  ["apples", "bananas"],
  new InventoryRequesterV2()
);
inventoryTracker.requestItems();

⬆ back to top

Testing

Testing is more important than shipping. If you have no tests or an inadequate amount, then every time you ship code you won't be sure that you didn't break anything. Deciding on what constitutes an adequate amount is up to your team, but having 100% coverage (all statements and branches) is how you achieve very high confidence and developer peace of mind. This means that in addition to having a great testing framework, you also need to use a good coverage tool.

There's no excuse to not write tests. There are plenty of good JS test frameworks, so find one that your team prefers. When you find one that works for your team, then aim to always write tests for every new feature/module you introduce. If your preferred method is Test Driven Development (TDD), that is great, but the main point is to just make sure you are reaching your coverage goals before launching any feature, or refactoring an existing one.
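
For example, assuming your team picks Jest (just one of many frameworks) and its built-in coverage support, a coverage goal can be enforced straight from package.json; a minimal sketch:

{
  "scripts": {
    "test": "jest --coverage"
  },
  "jest": {
    "coverageThreshold": {
      "global": {
        "statements": 100,
        "branches": 100
      }
    }
  }
}

With this in place, npm test fails whenever statement or branch coverage drops below the threshold.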

Single concept per test

Bad:

import assert from "assert";

describe("MomentJS", () => {
  it("handles date boundaries", () => {
    let date;

    date = new MomentJS("1/1/2015");
    date.addDays(30);
    assert.equal("1/31/2015", date);

    date = new MomentJS("2/1/2016");
    date.addDays(28);
    assert.equal("02/29/2016", date);

    date = new MomentJS("2/1/2015");
    date.addDays(28);
    assert.equal("03/01/2015", date);
  });
});

Good:

import assert from "assert";

describe("MomentJS", () => {
  it("handles 30-day months", () => {
    const date = new MomentJS("1/1/2015");
    date.addDays(30);
    assert.equal("1/31/2015", date);
  });

  it("handles leap year", () => {
    const date = new MomentJS("2/1/2016");
    date.addDays(28);
    assert.equal("02/29/2016", date);
  });

  it("handles non-leap year", () => {
    const date = new MomentJS("2/1/2015");
    date.addDays(28);
    assert.equal("03/01/2015", date);
  });
});

⬆ back to top

Concurrency

Use Promises, not callbacks

Callbacks aren't clean, and they cause excessive amounts of nesting. With ES2015/ES6, Promises are a built-in global type. Use them!

Bad:

import { get } from "request";
import { writeFile } from "fs";

get(
  "https://en.wikipedia.org/wiki/Robert_Cecil_Martin",
  (requestErr, response, body) => {
    if (requestErr) {
      console.error(requestErr);
    } else {
      writeFile("article.html", body, writeErr => {
        if (writeErr) {
          console.error(writeErr);
        } else {
          console.log("File written");
        }
      });
    }
  }
);

Good:

import { get } from "request-promise";
import { writeFile } from "fs-extra";

get("https://en.wikipedia.org/wiki/Robert_Cecil_Martin")
  .then(body => {
    return writeFile("article.html", body);
  })
  .then(() => {
    console.log("File written");
  })
  .catch(err => {
    console.error(err);
  });

⬆ back to top

Async/Await are even cleaner than Promises

Promises are a very clean alternative to callbacks, but ES2017/ES8 brings async and await which offer an even cleaner solution. All you need is a function that is prefixed in an async keyword, and then you can write your logic imperatively without a then chain of functions. Use this if you can take advantage of ES2017/ES8 features today!

Bad:

import { get } from "request-promise";
import { writeFile } from "fs-extra";

get("https://en.wikipedia.org/wiki/Robert_Cecil_Martin")
  .then(body => {
    return writeFile("article.html", body);
  })
  .then(() => {
    console.log("File written");
  })
  .catch(err => {
    console.error(err);
  });

Good:

import { get } from "request-promise";
import { writeFile } from "fs-extra";

async function getCleanCodeArticle() {
  try {
    const body = await get(
      "https://en.wikipedia.org/wiki/Robert_Cecil_Martin"
    );
    await writeFile("article.html", body);
    console.log("File written");
  } catch (err) {
    console.error(err);
  }
}

getCleanCodeArticle();

⬆ back to top

Error Handling

Thrown errors are a good thing! They mean the runtime has successfully identified when something in your program has gone wrong and it's letting you know by stopping function execution on the current stack, killing the process (in Node), and notifying you in the console with a stack trace.

Don't ignore caught errors

Doing nothing with a caught error doesn't give you the ability to ever fix or react to said error. Logging the error to the console (console.log) isn't much better as often times it can get lost in a sea of things printed to the console. If you wrap any bit of code in a try/catch it means you think an error may occur there and therefore you should have a plan, or create a code path, for when it occurs.

Bad:

try {
  functionThatMightThrow();
} catch (error) {
  console.log(error);
}

Good:

try {
  functionThatMightThrow();
} catch (error) {
  // One option (more noisy than console.log):
  console.error(error);
  // Another option:
  notifyUserOfError(error);
  // Another option:
  reportErrorToService(error);
  // OR do all three!
}

Don't ignore rejected promises

For the same reason you shouldn't ignore caught errors from try/catch.

Bad:

getData()
  .then(data => {
    functionThatMightThrow(data);
  })
  .catch(error => {
    console.log(error);
  });

Good:

getData()
  .then(data => {
    functionThatMightThrow(data);
  })
  .catch(error => {
    // One option (more noisy than console.log):
    console.error(error);
    // Another option:
    notifyUserOfError(error);
    // Another option:
    reportErrorToService(error);
    // OR do all three!
  });

⬆ back to top

Formatting

Formatting is subjective. Like many rules herein, there is no hard and fast rule that you must follow. The main point is DO NOT ARGUE over formatting. There are tons of tools to automate this. Use one! It's a waste of time and money for engineers to argue over formatting.
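
For instance, assuming your team settles on Prettier (one of many such tools), a tiny shared .prettierrc settles the debate once:

{
  "semi": true,
  "singleQuote": false,
  "tabWidth": 2
}

Every editor and CI job then formats code the same way, and nobody argues about it in review again.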

For things that don't fall under the purview of automatic formatting (indentation, tabs vs. spaces, double vs. single quotes, etc.) look here for some guidance.

Use consistent capitalization

JavaScript is untyped, so capitalization tells you a lot about your variables, functions, etc. These rules are subjective, so your team can choose whatever they want. The point is, no matter what you all choose, just be consistent.

Bad:

const DAYS_IN_WEEK = 7;
const daysInMonth = 30;

const songs = ["Back In Black", "Stairway to Heaven", "Hey Jude"];
const Artists = ["ACDC", "Led Zeppelin", "The Beatles"];

function eraseDatabase() {}
function restore_database() {}

class animal {}
class Alpaca {}

Good:

const DAYS_IN_WEEK = 7;
const DAYS_IN_MONTH = 30;

const SONGS = ["Back In Black", "Stairway to Heaven", "Hey Jude"];
const ARTISTS = ["ACDC", "Led Zeppelin", "The Beatles"];

function eraseDatabase() {}
function restoreDatabase() {}

class Animal {}
class Alpaca {}

⬆ back to top

Function callers and callees should be close

If a function calls another, keep those functions vertically close in the source file. Ideally, keep the caller right above the callee. We tend to read code from top-to-bottom, like a newspaper. Because of this, make your code read that way.

Bad:

class PerformanceReview {
  constructor(employee) {
    this.employee = employee;
  }

  lookupPeers() {
    return db.lookup(this.employee, "peers");
  }

  lookupManager() {
    return db.lookup(this.employee, "manager");
  }

  getPeerReviews() {
    const peers = this.lookupPeers();
    // ...
  }

  perfReview() {
    this.getPeerReviews();
    this.getManagerReview();
    this.getSelfReview();
  }

  getManagerReview() {
    const manager = this.lookupManager();
  }

  getSelfReview() {
    // ...
  }
}

const review = new PerformanceReview(employee);
review.perfReview();

Good:

class PerformanceReview {
  constructor(employee) {
    this.employee = employee;
  }

  perfReview() {
    this.getPeerReviews();
    this.getManagerReview();
    this.getSelfReview();
  }

  getPeerReviews() {
    const peers = this.lookupPeers();
    // ...
  }

  lookupPeers() {
    return db.lookup(this.employee, "peers");
  }

  getManagerReview() {
    const manager = this.lookupManager();
  }

  lookupManager() {
    return db.lookup(this.employee, "manager");
  }

  getSelfReview() {
    // ...
  }
}

const review = new PerformanceReview(employee);
review.perfReview();

⬆ back to top

Comments

Only comment things that have business logic complexity.

Comments are an apology, not a requirement. Good code mostly documents itself.

Bad:

function hashIt(data) {
  // The hash
  let hash = 0;

  // Length of string
  const length = data.length;

  // Loop through every character in data
  for (let i = 0; i < length; i++) {
    // Get character code.
    const char = data.charCodeAt(i);
    // Make the hash
    hash = (hash << 5) - hash + char;
    // Convert to 32-bit integer
    hash &= hash;
  }

  // Return the completed hash
  return hash;
}

Good:

function hashIt(data) {
  let hash = 0;
  const length = data.length;

  for (let i = 0; i < length; i++) {
    const char = data.charCodeAt(i);
    hash = (hash << 5) - hash + char;

    // Convert to 32-bit integer
    hash &= hash;
  }

  return hash;
}

⬆ back to top

Don't leave commented out code in your codebase

Version control exists for a reason. Leave old code in your history.

Bad:

doStuff();
// doOtherStuff();
// doSomeMoreStuff();
// doSoMuchStuff();

Good:

doStuff();

⬆ back to top

Don't have journal comments

Remember, use version control! There's no need for dead code, commented code, and especially journal comments. Use git log to get history!

Bad:

/**
 * 2016-12-20: Removed monads, didn't understand them (RM)
 * 2016-10-01: Improved using special monads (JP)
 * 2016-02-03: Removed type-checking (LI)
 * 2015-03-14: Added combine with type-checking (JR)
 */
function combine(a, b) {
  return a + b;
}

Good:

function combine(a, b) {
  return a + b;
}

⬆ back to top

Avoid positional markers

They usually just add noise. Let the functions and variable names along with the proper indentation and formatting give the visual structure to your code.

Bad:

////////////////////////////////////////////////////////////////////////////////
// Scope Model Instantiation
////////////////////////////////////////////////////////////////////////////////
$scope.model = {
  menu: "foo",
  nav: "bar"
};

////////////////////////////////////////////////////////////////////////////////
// Action setup
////////////////////////////////////////////////////////////////////////////////
const actions = function() {
  // ...
};

Good:

$scope.model = {
  menu: "foo",
  nav: "bar"
};

const actions = function() {
  // ...
};

⬆ back to top


Author: Ryanmcdermott
Source Code: https://github.com/ryanmcdermott/clean-code-javascript 
License: MIT License

#javascript #clean 


Clean Code Concepts Adapted for PHP

Clean Code PHP

Table of Contents

  1. Introduction
  2. Variables
  3. Comparison
  4. Functions
  5. Objects and Data Structures
  6. Classes
  7. SOLID
  8. Don’t repeat yourself (DRY)
  9. Translations

Introduction

Software engineering principles, from Robert C. Martin's book Clean Code, adapted for PHP. This is not a style guide. It's a guide to producing readable, reusable, and refactorable software in PHP.

Not every principle herein has to be strictly followed, and even fewer will be universally agreed upon. These are guidelines and nothing more, but they are ones codified over many years of collective experience by the authors of Clean Code.

Inspired from clean-code-javascript.

Although many developers still use PHP 5, most of the examples in this article only work with PHP 7.1+.

Variables

Use meaningful and pronounceable variable names

Bad:

$ymdstr = $moment->format('y-m-d');

Good:

$currentDate = $moment->format('y-m-d');

⬆ back to top

Use the same vocabulary for the same type of variable

Bad:

getUserInfo();
getUserData();
getUserRecord();
getUserProfile();

Good:

getUser();

⬆ back to top

Use searchable names (part 1)

We will read more code than we will ever write. It's important that the code we do write is readable and searchable. By not naming variables that end up being meaningful for understanding our program, we hurt our readers. Make your names searchable.

Bad:

// What the heck is 448 for?
$result = $serializer->serialize($data, 448);

Good:

$json = $serializer->serialize($data, JSON_UNESCAPED_SLASHES | JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE);

Use searchable names (part 2)

Bad:

class User
{
    // What the heck is 7 for?
    public $access = 7;
}

// What the heck is 4 for?
if ($user->access & 4) {
    // ...
}

// What's going on here?
$user->access ^= 2;

Good:

class User
{
    public const ACCESS_READ = 1;

    public const ACCESS_CREATE = 2;

    public const ACCESS_UPDATE = 4;

    public const ACCESS_DELETE = 8;

    // By default, the user can read, create and update something
    public $access = self::ACCESS_READ | self::ACCESS_CREATE | self::ACCESS_UPDATE;
}

if ($user->access & User::ACCESS_UPDATE) {
    // do edit ...
}

// Deny access rights to create something
$user->access ^= User::ACCESS_CREATE;

⬆ back to top

Use explanatory variables

Bad:

$address = 'One Infinite Loop, Cupertino 95014';
$cityZipCodeRegex = '/^[^,]+,\s*(.+?)\s*(\d{5})$/';
preg_match($cityZipCodeRegex, $address, $matches);

saveCityZipCode($matches[1], $matches[2]);

Not bad:

It's better, but we are still heavily dependent on regex.

$address = 'One Infinite Loop, Cupertino 95014';
$cityZipCodeRegex = '/^[^,]+,\s*(.+?)\s*(\d{5})$/';
preg_match($cityZipCodeRegex, $address, $matches);

[, $city, $zipCode] = $matches;
saveCityZipCode($city, $zipCode);

Good:

Decrease dependence on regex by naming subpatterns.

$address = 'One Infinite Loop, Cupertino 95014';
$cityZipCodeRegex = '/^[^,]+,\s*(?<city>.+?)\s*(?<zipCode>\d{5})$/';
preg_match($cityZipCodeRegex, $address, $matches);

saveCityZipCode($matches['city'], $matches['zipCode']);

⬆ back to top

Avoid nesting too deeply and return early (part 1)

Too many if-else statements can make your code hard to follow. Explicit is better than implicit.

Bad:

function isShopOpen($day): bool
{
    if ($day) {
        if (is_string($day)) {
            $day = strtolower($day);
            if ($day === 'friday') {
                return true;
            } elseif ($day === 'saturday') {
                return true;
            } elseif ($day === 'sunday') {
                return true;
            }
            return false;
        }
        return false;
    }
    return false;
}

Good:

function isShopOpen(string $day): bool
{
    if (empty($day)) {
        return false;
    }

    $openingDays = ['friday', 'saturday', 'sunday'];

    return in_array(strtolower($day), $openingDays, true);
}

⬆ back to top

Avoid nesting too deeply and return early (part 2)

Bad:

function fibonacci(int $n)
{
    if ($n < 50) {
        if ($n !== 0) {
            if ($n !== 1) {
                return fibonacci($n - 1) + fibonacci($n - 2);
            }
            return 1;
        }
        return 0;
    }
    return 'Not supported';
}

Good:

function fibonacci(int $n): int
{
    if ($n === 0 || $n === 1) {
        return $n;
    }

    if ($n >= 50) {
        throw new Exception('Not supported');
    }

    return fibonacci($n - 1) + fibonacci($n - 2);
}

⬆ back to top

Avoid Mental Mapping

Don’t force the reader of your code to translate what the variable means. Explicit is better than implicit.

Bad:

$l = ['Austin', 'New York', 'San Francisco'];

for ($i = 0; $i < count($l); $i++) {
    $li = $l[$i];
    doStuff();
    doSomeOtherStuff();
    // ...
    // ...
    // ...
    // Wait, what is `$li` for again?
    dispatch($li);
}

Good:

$locations = ['Austin', 'New York', 'San Francisco'];

foreach ($locations as $location) {
    doStuff();
    doSomeOtherStuff();
    // ...
    // ...
    // ...
    dispatch($location);
}

⬆ back to top

Don't add unneeded context

If your class/object name tells you something, don't repeat that in your variable name.

Bad:

class Car
{
    public $carMake;

    public $carModel;

    public $carColor;

    //...
}

Good:

class Car
{
    public $make;

    public $model;

    public $color;

    //...
}

⬆ back to top

Comparison

Use identical comparison

Not good:

A loose comparison will convert the string to an integer.

$a = '42';
$b = 42;

if ($a != $b) {
    // This branch is never reached: loose comparison treats '42' and 42 as equal
}

The comparison $a != $b returns FALSE even though the string '42' and the integer 42 are different types. Loose comparison converts the string to an integer before comparing, so the type difference is silently lost.

Good:

The identical comparison will compare type and value.

$a = '42';
$b = 42;

if ($a !== $b) {
    // This branch is reached: the operands differ in type
}

The comparison $a !== $b returns TRUE.

⬆ back to top

Null coalescing operator

Null coalescing is a new operator introduced in PHP 7. The null coalescing operator ?? has been added as syntactic sugar for the common case of needing to use a ternary in conjunction with isset(). It returns its first operand if it exists and is not null; otherwise it returns its second operand.

Bad:

if (isset($_GET['name'])) {
    $name = $_GET['name'];
} elseif (isset($_POST['name'])) {
    $name = $_POST['name'];
} else {
    $name = 'nobody';
}

Good:

$name = $_GET['name'] ?? $_POST['name'] ?? 'nobody';

⬆ back to top

Functions

Use default arguments instead of short circuiting or conditionals

Not good:

This is not good because a caller can still explicitly pass NULL for $breweryName.

function createMicrobrewery($breweryName = 'Hipster Brew Co.'): void
{
    // ...
}

Not bad:

This version is easier to understand than the previous one, and it controls the value of the variable explicitly.

function createMicrobrewery($name = null): void
{
    $breweryName = $name ?: 'Hipster Brew Co.';
    // ...
}

Good:

You can use type hinting and be sure that $breweryName will not be NULL.

function createMicrobrewery(string $breweryName = 'Hipster Brew Co.'): void
{
    // ...
}

⬆ back to top

Function arguments (2 or fewer ideally)

Limiting the number of function parameters is incredibly important because it makes testing your function easier. Having more than three leads to a combinatorial explosion where you have to test tons of different cases with each separate argument.

Zero arguments is the ideal case. One or two arguments is ok, and three should be avoided. Anything more than that should be consolidated. Usually, if you have more than two arguments then your function is trying to do too much. In cases where it's not, most of the time a higher-level object will suffice as an argument.

Bad:

class Questionnaire
{
    public function __construct(
        string $firstname,
        string $lastname,
        string $patronymic,
        string $region,
        string $district,
        string $city,
        string $phone,
        string $email
    ) {
        // ...
    }
}

Good:

class Name
{
    private $firstname;

    private $lastname;

    private $patronymic;

    public function __construct(string $firstname, string $lastname, string $patronymic)
    {
        $this->firstname = $firstname;
        $this->lastname = $lastname;
        $this->patronymic = $patronymic;
    }

    // getters ...
}

class City
{
    private $region;

    private $district;

    private $city;

    public function __construct(string $region, string $district, string $city)
    {
        $this->region = $region;
        $this->district = $district;
        $this->city = $city;
    }

    // getters ...
}

class Contact
{
    private $phone;

    private $email;

    public function __construct(string $phone, string $email)
    {
        $this->phone = $phone;
        $this->email = $email;
    }

    // getters ...
}

class Questionnaire
{
    public function __construct(Name $name, City $city, Contact $contact)
    {
        // ...
    }
}

⬆ back to top

Function names should say what they do

Bad:

class Email
{
    //...

    public function handle(): void
    {
        mail($this->to, $this->subject, $this->body);
    }
}

$message = new Email(...);
// What is this? A handle for the message? Are we writing to a file now?
$message->handle();

Good:

class Email
{
    //...

    public function send(): void
    {
        mail($this->to, $this->subject, $this->body);
    }
}

$message = new Email(...);
// Clear and obvious
$message->send();

⬆ back to top

Functions should only be one level of abstraction

When you have more than one level of abstraction your function is usually doing too much. Splitting up functions leads to reusability and easier testing.

Bad:

function parseBetterPHPAlternative(string $code): void
{
    $regexes = [
        // ...
    ];

    $statements = explode(' ', $code);
    $tokens = [];
    foreach ($regexes as $regex) {
        foreach ($statements as $statement) {
            // ...
        }
    }

    $ast = [];
    foreach ($tokens as $token) {
        // lex...
    }

    foreach ($ast as $node) {
        // parse...
    }
}

Bad too:

We have split out some of the functionality, but the parseBetterPHPAlternative() function is still very complex and hard to test.

function tokenize(string $code): array
{
    $regexes = [
        // ...
    ];

    $statements = explode(' ', $code);
    $tokens = [];
    foreach ($regexes as $regex) {
        foreach ($statements as $statement) {
            $tokens[] = /* ... */;
        }
    }

    return $tokens;
}

function lexer(array $tokens): array
{
    $ast = [];
    foreach ($tokens as $token) {
        $ast[] = /* ... */;
    }

    return $ast;
}

function parseBetterPHPAlternative(string $code): void
{
    $tokens = tokenize($code);
    $ast = lexer($tokens);
    foreach ($ast as $node) {
        // parse...
    }
}

Good:

The best solution is to move the dependencies of the parseBetterPHPAlternative() function out into their own classes.

class Tokenizer
{
    public function tokenize(string $code): array
    {
        $regexes = [
            // ...
        ];

        $statements = explode(' ', $code);
        $tokens = [];
        foreach ($regexes as $regex) {
            foreach ($statements as $statement) {
                $tokens[] = /* ... */;
            }
        }

        return $tokens;
    }
}

class Lexer
{
    public function lexify(array $tokens): array
    {
        $ast = [];
        foreach ($tokens as $token) {
            $ast[] = /* ... */;
        }

        return $ast;
    }
}

class BetterPHPAlternative
{
    private $tokenizer;
    private $lexer;

    public function __construct(Tokenizer $tokenizer, Lexer $lexer)
    {
        $this->tokenizer = $tokenizer;
        $this->lexer = $lexer;
    }

    public function parse(string $code): void
    {
        $tokens = $this->tokenizer->tokenize($code);
        $ast = $this->lexer->lexify($tokens);
        foreach ($ast as $node) {
            // parse...
        }
    }
}

⬆ back to top

Don't use flags as function parameters

Flags tell your user that this function does more than one thing. Functions should do one thing. Split out your functions if they are following different code paths based on a boolean.

Bad:

function createFile(string $name, bool $temp = false): void
{
    if ($temp) {
        touch('./temp/' . $name);
    } else {
        touch($name);
    }
}

Good:

function createFile(string $name): void
{
    touch($name);
}

function createTempFile(string $name): void
{
    touch('./temp/' . $name);
}

⬆ back to top

Avoid Side Effects

A function produces a side effect if it does anything other than take a value in and return another value or values. A side effect could be writing to a file, modifying some global variable, or accidentally wiring all your money to a stranger.

Now, you do need to have side effects in a program on occasion. Like the previous example, you might need to write to a file. What you want to do is to centralize where you are doing this. Don't have several functions and classes that write to a particular file. Have one service that does it. One and only one.

The main point is to avoid common pitfalls like sharing state between objects without any structure, using mutable data types that can be written to by anything, and not centralizing where your side effects occur. If you can do this, you will be happier than the vast majority of other programmers.

Bad:

// Global variable referenced by following function.
// If we had another function that used this name, it would now be an array and that could break it.
$name = 'Ryan McDermott';

function splitIntoFirstAndLastName(): void
{
    global $name;

    $name = explode(' ', $name);
}

splitIntoFirstAndLastName();

var_dump($name);
// ['Ryan', 'McDermott'];

Good:

function splitIntoFirstAndLastName(string $name): array
{
    return explode(' ', $name);
}

$name = 'Ryan McDermott';
$newName = splitIntoFirstAndLastName($name);

var_dump($name);
// 'Ryan McDermott';

var_dump($newName);
// ['Ryan', 'McDermott'];
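
To make the "one and only one" advice above concrete, file writes could be funneled through a single service; a hypothetical sketch (the class and file names are made up for illustration):

class AuditLog
{
    private $path;

    public function __construct(string $path)
    {
        $this->path = $path;
    }

    // The one and only place in the codebase that writes to the audit file.
    public function record(string $entry): void
    {
        file_put_contents($this->path, $entry . PHP_EOL, FILE_APPEND);
    }
}

$auditLog = new AuditLog('audit.log');
$auditLog->record('Split a name into first and last.');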

⬆ back to top

Don't write to global functions

Polluting globals is a bad practice in many languages because you could clash with another library and the user of your API would be none-the-wiser until they get an exception in production. Let's think about an example: what if you wanted to have a configuration array? You could write a global function like config(), but it could clash with another library that tried to do the same thing.

Bad:

function config(): array
{
    return [
        'foo' => 'bar',
    ];
}

Good:

class Configuration
{
    private $configuration = [];

    public function __construct(array $configuration)
    {
        $this->configuration = $configuration;
    }

    public function get(string $key): ?string
    {
        // null coalescing operator
        return $this->configuration[$key] ?? null;
    }
}

Load the configuration and create an instance of the Configuration class:

$configuration = new Configuration([
    'foo' => 'bar',
]);

And now you must use the instance of Configuration in your application.

⬆ back to top

Don't use a Singleton pattern

Singleton is an anti-pattern. Paraphrased from Brian Button:

  1. They are generally used as a global instance, why is that so bad? Because you hide the dependencies of your application in your code, instead of exposing them through the interfaces. Making something global to avoid passing it around is a code smell.
  2. They violate the single responsibility principle: by virtue of the fact that they control their own creation and lifecycle.
  3. They inherently cause code to be tightly coupled. This makes faking them out under test rather difficult in many cases.
  4. They carry state around for the lifetime of the application. Another hit to testing since you can end up with a situation where tests need to be ordered which is a big no for unit tests. Why? Because each unit test should be independent from the other.

There are also very good thoughts by Misko Hevery about the root of the problem.

Bad:

class DBConnection
{
    private static $instance;

    private function __construct(string $dsn)
    {
        // ...
    }

    public static function getInstance(string $dsn): self
    {
        if (self::$instance === null) {
            self::$instance = new self($dsn);
        }

        // Note: the DSN passed on any later call is silently ignored, another smell of this pattern.
        return self::$instance;
    }

    // ...
}

$singleton = DBConnection::getInstance($dsn);

Good:

class DBConnection
{
    public function __construct(string $dsn)
    {
        // ...
    }

    // ...
}

Create an instance of the DBConnection class and configure it with a DSN:

$connection = new DBConnection($dsn);

And now you must use the instance of DBConnection in your application.

⬆ back to top

Encapsulate conditionals

Bad:

if ($article->state === 'published') {
    // ...
}

Good:

if ($article->isPublished()) {
    // ...
}

⬆ back to top

Avoid negative conditionals

Bad:

function isDOMNodeNotPresent(DOMNode $node): bool
{
    // ...
}

if (! isDOMNodeNotPresent($node)) {
    // ...
}

Good:

function isDOMNodePresent(DOMNode $node): bool
{
    // ...
}

if (isDOMNodePresent($node)) {
    // ...
}

⬆ back to top

Avoid conditionals

This seems like an impossible task. Upon first hearing this, most people say, "how am I supposed to do anything without an if statement?" The answer is that you can use polymorphism to achieve the same task in many cases. The second question is usually, "well that's great but why would I want to do that?" The answer is a previous clean code concept we learned: a function should only do one thing. When you have classes and functions that have if statements, you are telling your user that your function does more than one thing. Remember, just do one thing.

Bad:

class Airplane
{
    // ...

    public function getCruisingAltitude(): int
    {
        switch ($this->type) {
            case '777':
                return $this->getMaxAltitude() - $this->getPassengerCount();
            case 'Air Force One':
                return $this->getMaxAltitude();
            case 'Cessna':
                return $this->getMaxAltitude() - $this->getFuelExpenditure();
        }
    }
}

Good:

interface Airplane
{
    // ...

    public function getCruisingAltitude(): int;
}

class Boeing777 implements Airplane
{
    // ...

    public function getCruisingAltitude(): int
    {
        return $this->getMaxAltitude() - $this->getPassengerCount();
    }
}

class AirForceOne implements Airplane
{
    // ...

    public function getCruisingAltitude(): int
    {
        return $this->getMaxAltitude();
    }
}

class Cessna implements Airplane
{
    // ...

    public function getCruisingAltitude(): int
    {
        return $this->getMaxAltitude() - $this->getFuelExpenditure();
    }
}

⬆ back to top

Avoid type-checking (part 1)

PHP is untyped, which means your functions can take any type of argument. Sometimes you are bitten by this freedom and it becomes tempting to do type-checking in your functions. There are many ways to avoid having to do this. The first thing to consider is consistent APIs.

Bad:

function travelToTexas($vehicle): void
{
    if ($vehicle instanceof Bicycle) {
        $vehicle->pedalTo(new Location('texas'));
    } elseif ($vehicle instanceof Car) {
        $vehicle->driveTo(new Location('texas'));
    }
}

Good:

function travelToTexas(Vehicle $vehicle): void
{
    $vehicle->travelTo(new Location('texas'));
}

⬆ back to top

Avoid type-checking (part 2)

If you are working with basic primitive values like strings, integers, and arrays, and you use PHP 7+ and you can't use polymorphism but you still feel the need to type-check, you should consider type declaration or strict mode. It provides you with static typing on top of standard PHP syntax. The problem with manually type-checking is that doing it will require so much extra verbiage that the faux "type-safety" you get doesn't make up for the lost readability. Keep your PHP clean, write good tests, and have good code reviews. Otherwise, do all of that but with PHP strict type declaration or strict mode.

Bad:

function combine($val1, $val2): int
{
    if (! is_numeric($val1) || ! is_numeric($val2)) {
        throw new Exception('Must be of type Number');
    }

    return $val1 + $val2;
}

Good:

function combine(int $val1, int $val2): int
{
    return $val1 + $val2;
}
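
If you go one step further and enable strict mode, as mentioned above, PHP rejects loosely-typed arguments at call time; a minimal sketch:

// Must be the very first statement in the file
declare(strict_types=1);

function combine(int $val1, int $val2): int
{
    return $val1 + $val2;
}

combine(1, 2); // 3
// combine('1', '2'); // TypeError: strict mode refuses to coerce the strings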

⬆ back to top

Remove dead code

Dead code is just as bad as duplicate code. There's no reason to keep it in your codebase. If it's not being called, get rid of it! It will still be safe in your version history if you still need it.

Bad:

function oldRequestModule(string $url): void
{
    // ...
}

function newRequestModule(string $url): void
{
    // ...
}

$request = newRequestModule($requestUrl);
inventoryTracker('apples', $request, 'www.inventory-awesome.io');

Good:

function requestModule(string $url): void
{
    // ...
}

$request = requestModule($requestUrl);
inventoryTracker('apples', $request, 'www.inventory-awesome.io');

⬆ back to top

Objects and Data Structures

Use object encapsulation

In PHP you can use the public, protected, and private keywords on methods and properties. These let you control how the properties of an object are modified.

  • When you want to do more beyond getting an object property, you don't have to look up and change every accessor in your codebase.
  • Makes adding validation simple when doing a set.
  • Encapsulates the internal representation.
  • Easy to add logging and error handling when getting and setting.
  • Inheriting this class, you can override default functionality.
  • You can lazy load your object's properties, let's say getting it from a server.

Additionally, this is part of the Open/Closed Principle.

Bad:

class BankAccount
{
    public $balance = 1000;
}

$bankAccount = new BankAccount();

// Buy shoes...
$bankAccount->balance -= 100;

Good:

class BankAccount
{
    private $balance;

    public function __construct(int $balance = 1000)
    {
      $this->balance = $balance;
    }

    public function withdraw(int $amount): void
    {
        if ($amount > $this->balance) {
            throw new \Exception('Amount greater than available balance.');
        }

        $this->balance -= $amount;
    }

    public function deposit(int $amount): void
    {
        $this->balance += $amount;
    }

    public function getBalance(): int
    {
        return $this->balance;
    }
}

$bankAccount = new BankAccount();

// Buy shoes...
$bankAccount->withdraw($shoesPrice);

// Get balance
$balance = $bankAccount->getBalance();

⬆ back to top

Make objects have private/protected members

  • public methods and properties are the most dangerous to change, because outside code may easily rely on them and you can't control which code does. Modifications in the class are dangerous for all users of the class.
  • The protected modifier is as dangerous as public, because it is available in the scope of any child class. This effectively means that the difference between public and protected is only in the access mechanism; the encapsulation guarantee remains the same. Modifications in the class are dangerous for all descendant classes.
  • The private modifier guarantees that code is dangerous to modify only within the boundaries of a single class (you are safe to make modifications and you won't have the Jenga effect).

Therefore, use private by default and public/protected when you need to provide access for external classes.

For more information you can read the blog post on this topic written by Fabien Potencier.

Bad:

class Employee
{
    public $name;

    public function __construct(string $name)
    {
        $this->name = $name;
    }
}

$employee = new Employee('John Doe');
// Employee name: John Doe
echo 'Employee name: ' . $employee->name;

Good:

class Employee
{
    private $name;

    public function __construct(string $name)
    {
        $this->name = $name;
    }

    public function getName(): string
    {
        return $this->name;
    }
}

$employee = new Employee('John Doe');
// Employee name: John Doe
echo 'Employee name: ' . $employee->getName();

⬆ back to top

Classes

Prefer composition over inheritance

As stated famously in Design Patterns by the Gang of Four, you should prefer composition over inheritance where you can. There are lots of good reasons to use inheritance and lots of good reasons to use composition. The main point for this maxim is that if your mind instinctively goes for inheritance, try to think if composition could model your problem better. In some cases it can.

You might be wondering then, "when should I use inheritance?" It depends on your problem at hand, but this is a decent list of when inheritance makes more sense than composition:

  1. Your inheritance represents an "is-a" relationship and not a "has-a" relationship (Human->Animal vs. User->UserDetails).
  2. You can reuse code from the base classes (Humans can move like all animals).
  3. You want to make global changes to derived classes by changing a base class. (Change the caloric expenditure of all animals when they move).

Bad:

class Employee
{
    private $name;

    private $email;

    public function __construct(string $name, string $email)
    {
        $this->name = $name;
        $this->email = $email;
    }

    // ...
}

// Bad because Employees "have" tax data.
// EmployeeTaxData is not a type of Employee

class EmployeeTaxData extends Employee
{
    private $ssn;

    private $salary;

    public function __construct(string $name, string $email, string $ssn, string $salary)
    {
        parent::__construct($name, $email);

        $this->ssn = $ssn;
        $this->salary = $salary;
    }

    // ...
}

Good:

class EmployeeTaxData
{
    private $ssn;

    private $salary;

    public function __construct(string $ssn, string $salary)
    {
        $this->ssn = $ssn;
        $this->salary = $salary;
    }

    // ...
}

class Employee
{
    private $name;

    private $email;

    private $taxData;

    public function __construct(string $name, string $email)
    {
        $this->name = $name;
        $this->email = $email;
    }

    public function setTaxData(EmployeeTaxData $taxData): void
    {
        $this->taxData = $taxData;
    }

    // ...
}

⬆ back to top

Avoid fluent interfaces

A Fluent interface is an object oriented API that aims to improve the readability of the source code by using Method chaining.

While there can be some contexts, frequently builder objects, where this pattern reduces the verbosity of the code (for example the PHPUnit Mock Builder or the Doctrine Query Builder), more often it comes at a cost:

  1. Breaks Encapsulation.
  2. Breaks Decorators.
  3. Is harder to mock in a test suite.
  4. Makes diffs of commits harder to read.

For more information you can read the full blog post on this topic written by Marco Pivetta.

Bad:

class Car
{
    private $make = 'Honda';

    private $model = 'Accord';

    private $color = 'white';

    public function setMake(string $make): self
    {
        $this->make = $make;

        // NOTE: Returning this for chaining
        return $this;
    }

    public function setModel(string $model): self
    {
        $this->model = $model;

        // NOTE: Returning this for chaining
        return $this;
    }

    public function setColor(string $color): self
    {
        $this->color = $color;

        // NOTE: Returning this for chaining
        return $this;
    }

    public function dump(): void
    {
        var_dump($this->make, $this->model, $this->color);
    }
}

$car = (new Car())
    ->setColor('pink')
    ->setMake('Ford')
    ->setModel('F-150')
    ->dump();

Good:

class Car
{
    private $make = 'Honda';

    private $model = 'Accord';

    private $color = 'white';

    public function setMake(string $make): void
    {
        $this->make = $make;
    }

    public function setModel(string $model): void
    {
        $this->model = $model;
    }

    public function setColor(string $color): void
    {
        $this->color = $color;
    }

    public function dump(): void
    {
        var_dump($this->make, $this->model, $this->color);
    }
}

$car = new Car();
$car->setColor('pink');
$car->setMake('Ford');
$car->setModel('F-150');
$car->dump();

⬆ back to top

Prefer final classes

The final keyword should be used whenever possible:

  1. It prevents an uncontrolled inheritance chain.
  2. It encourages composition.
  3. It encourages the Single Responsibility Principle.
  4. It encourages developers to use your public methods instead of extending the class to get access to protected ones.
  5. It allows you to change your code without breaking applications that use your class.

The only condition is that your class should implement an interface and define no other public methods.

For more information you can read the blog post on this topic written by Marco Pivetta (Ocramius).

Bad:

final class Car
{
    private $color;

    public function __construct($color)
    {
        $this->color = $color;
    }

    /**
     * @return string The color of the vehicle
     */
    public function getColor()
    {
        return $this->color;
    }
}

Good:

interface Vehicle
{
    /**
     * @return string The color of the vehicle
     */
    public function getColor();
}

final class Car implements Vehicle
{
    private $color;

    public function __construct($color)
    {
        $this->color = $color;
    }

    public function getColor()
    {
        return $this->color;
    }
}

⬆ back to top

SOLID

SOLID is the mnemonic acronym introduced by Michael Feathers for the first five principles named by Robert Martin, which together form five basic principles of object-oriented programming and design.

Single Responsibility Principle (SRP)

As stated in Clean Code, "There should never be more than one reason for a class to change". It's tempting to jam-pack a class with a lot of functionality, like when you can only take one suitcase on your flight. The issue with this is that your class won't be conceptually cohesive, and it will have many reasons to change. Minimizing the number of times you need to change a class is important, because if too much functionality is in one class and you modify a piece of it, it can be difficult to understand how that will affect other dependent modules in your codebase.

Bad:

class UserSettings
{
    private $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function changeSettings(array $settings): void
    {
        if ($this->verifyCredentials()) {
            // ...
        }
    }

    private function verifyCredentials(): bool
    {
        // ...
    }
}

Good:

class UserAuth
{
    private $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function verifyCredentials(): bool
    {
        // ...
    }
}

class UserSettings
{
    private $user;

    private $auth;

    public function __construct(User $user)
    {
        $this->user = $user;
        $this->auth = new UserAuth($user);
    }

    public function changeSettings(array $settings): void
    {
        if ($this->auth->verifyCredentials()) {
            // ...
        }
    }
}

⬆ back to top

Open/Closed Principle (OCP)

As stated by Bertrand Meyer, "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification." What does that mean though? This principle states that you should allow users to add new functionality without changing existing code.

Bad:

abstract class Adapter
{
    protected $name;

    public function getName(): string
    {
        return $this->name;
    }
}

class AjaxAdapter extends Adapter
{
    public function __construct()
    {
        $this->name = 'ajaxAdapter';
    }
}

class NodeAdapter extends Adapter
{
    public function __construct()
    {
        $this->name = 'nodeAdapter';
    }
}

class HttpRequester
{
    private $adapter;

    public function __construct(Adapter $adapter)
    {
        $this->adapter = $adapter;
    }

    public function fetch(string $url): Promise
    {
        $adapterName = $this->adapter->getName();

        if ($adapterName === 'ajaxAdapter') {
            return $this->makeAjaxCall($url);
        } elseif ($adapterName === 'nodeAdapter') {
            return $this->makeHttpCall($url);
        }
    }

    private function makeAjaxCall(string $url): Promise
    {
        // request and return promise
    }

    private function makeHttpCall(string $url): Promise
    {
        // request and return promise
    }
}

Good:

interface Adapter
{
    public function request(string $url): Promise;
}

class AjaxAdapter implements Adapter
{
    public function request(string $url): Promise
    {
        // request and return promise
    }
}

class NodeAdapter implements Adapter
{
    public function request(string $url): Promise
    {
        // request and return promise
    }
}

class HttpRequester
{
    private $adapter;

    public function __construct(Adapter $adapter)
    {
        $this->adapter = $adapter;
    }

    public function fetch(string $url): Promise
    {
        return $this->adapter->request($url);
    }
}

⬆ back to top

Liskov Substitution Principle (LSP)

This is a scary term for a very simple concept. It's formally defined as "If S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may substitute objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.)." That's an even scarier definition.

The best explanation for this is if you have a parent class and a child class, then the base class and child class can be used interchangeably without getting incorrect results. This might still be confusing, so let's take a look at the classic Square-Rectangle example. Mathematically, a square is a rectangle, but if you model it using the "is-a" relationship via inheritance, you quickly get into trouble.

Bad:

class Rectangle
{
    protected $width = 0;

    protected $height = 0;

    public function setWidth(int $width): void
    {
        $this->width = $width;
    }

    public function setHeight(int $height): void
    {
        $this->height = $height;
    }

    public function getArea(): int
    {
        return $this->width * $this->height;
    }
}

class Square extends Rectangle
{
    public function setWidth(int $width): void
    {
        $this->width = $this->height = $width;
    }

    public function setHeight(int $height): void
    {
        $this->width = $this->height = $height;
    }
}

function printArea(Rectangle $rectangle): void
{
    $rectangle->setWidth(4);
    $rectangle->setHeight(5);

    // BAD: Will return 25 for Square. Should be 20.
    echo sprintf('%s has area %d.', get_class($rectangle), $rectangle->getArea()) . PHP_EOL;
}

$rectangles = [new Rectangle(), new Square()];

foreach ($rectangles as $rectangle) {
    printArea($rectangle);
}

Good:

The best way is to keep the two quadrangles separate and to extract a more general abstraction suitable for both shapes.

Despite the apparent similarity of the square and the rectangle, they are different. A square has much in common with a rhombus, and a rectangle with a parallelogram, but they are not subtypes. A square, a rectangle, a rhombus and a parallelogram are separate shapes with their own properties, albeit similar.

interface Shape
{
    public function getArea(): int;
}

class Rectangle implements Shape
{
    private $width = 0;
    private $height = 0;

    public function __construct(int $width, int $height)
    {
        $this->width = $width;
        $this->height = $height;
    }

    public function getArea(): int
    {
        return $this->width * $this->height;
    }
}

class Square implements Shape
{
    private $length = 0;

    public function __construct(int $length)
    {
        $this->length = $length;
    }

    public function getArea(): int
    {
        return $this->length ** 2;
    }
}

function printArea(Shape $shape): void
{
    echo sprintf('%s has area %d.', get_class($shape), $shape->getArea()).PHP_EOL;
}

$shapes = [new Rectangle(4, 5), new Square(5)];

foreach ($shapes as $shape) {
    printArea($shape);
}

⬆ back to top

Interface Segregation Principle (ISP)

ISP states that "Clients should not be forced to depend upon interfaces that they do not use."

A good example to look at that demonstrates this principle is for classes that require large settings objects. Not requiring clients to set up huge amounts of options is beneficial, because most of the time they won't need all of the settings. Making them optional helps prevent having a "fat interface".

Bad:

interface Employee
{
    public function work(): void;

    public function eat(): void;
}

class HumanEmployee implements Employee
{
    public function work(): void
    {
        // ....working
    }

    public function eat(): void
    {
        // ...... eating in lunch break
    }
}

class RobotEmployee implements Employee
{
    public function work(): void
    {
        //.... working much more
    }

    public function eat(): void
    {
        //.... robot can't eat, but it must implement this method
    }
}

Good:

Not every worker is an employee, but every employee is a worker.

interface Workable
{
    public function work(): void;
}

interface Feedable
{
    public function eat(): void;
}

interface Employee extends Feedable, Workable
{
}

class HumanEmployee implements Employee
{
    public function work(): void
    {
        // ....working
    }

    public function eat(): void
    {
        //.... eating in lunch break
    }
}

// robot can only work
class RobotEmployee implements Workable
{
    public function work(): void
    {
        // ....working
    }
}

⬆ back to top

Dependency Inversion Principle (DIP)

This principle states two essential things:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend upon details. Details should depend on abstractions.

This can be hard to understand at first, but if you've worked with PHP frameworks (like Symfony), you've seen an implementation of this principle in the form of Dependency Injection (DI). While they are not identical concepts, DIP keeps high-level modules from knowing the details of their low-level modules and from setting them up. It can accomplish this through DI. A huge benefit of this is that it reduces the coupling between modules. Coupling is a very bad development pattern because it makes your code hard to refactor.

Bad:

class Employee
{
    public function work(): void
    {
        // ....working
    }
}

class Robot extends Employee
{
    public function work(): void
    {
        //.... working much more
    }
}

class Manager
{
    private $employee;

    public function __construct(Employee $employee)
    {
        $this->employee = $employee;
    }

    public function manage(): void
    {
        $this->employee->work();
    }
}

Good:

interface Employee
{
    public function work(): void;
}

class Human implements Employee
{
    public function work(): void
    {
        // ....working
    }
}

class Robot implements Employee
{
    public function work(): void
    {
        //.... working much more
    }
}

class Manager
{
    private $employee;

    public function __construct(Employee $employee)
    {
        $this->employee = $employee;
    }

    public function manage(): void
    {
        $this->employee->work();
    }
}

⬆ back to top

Don’t repeat yourself (DRY)

Try to observe the DRY principle.

Do your absolute best to avoid duplicate code. Duplicate code is bad because it means that there's more than one place to alter something if you need to change some logic.

Imagine you run a restaurant and keep track of your inventory: all your tomatoes, onions, garlic, spices, etc. If you have multiple lists that you keep this on, then all of them have to be updated when you serve a dish containing tomatoes. If you only have one list, there's only one place to update!

Often you have duplicate code because you have two or more slightly different things that share a lot in common, but whose differences force you to have two or more separate functions doing much of the same thing. Removing duplicate code means creating an abstraction that can handle this set of different things with just one function/module/class.

Getting the abstraction right is critical; that's why you should follow the SOLID principles laid out in the Classes section. Bad abstractions can be worse than duplicate code, so be careful! Having said this, if you can make a good abstraction, do it! Don't repeat yourself, otherwise you'll find yourself updating multiple places any time you want to change one thing.

Bad:

function showDeveloperList(array $developers): void
{
    foreach ($developers as $developer) {
        $expectedSalary = $developer->calculateExpectedSalary();
        $experience = $developer->getExperience();
        $githubLink = $developer->getGithubLink();
        $data = [$expectedSalary, $experience, $githubLink];

        render($data);
    }
}

function showManagerList(array $managers): void
{
    foreach ($managers as $manager) {
        $expectedSalary = $manager->calculateExpectedSalary();
        $experience = $manager->getExperience();
        $githubLink = $manager->getGithubLink();
        $data = [$expectedSalary, $experience, $githubLink];

        render($data);
    }
}

Good:

function showList(array $employees): void
{
    foreach ($employees as $employee) {
        $expectedSalary = $employee->calculateExpectedSalary();
        $experience = $employee->getExperience();
        $githubLink = $employee->getGithubLink();
        $data = [$expectedSalary, $experience, $githubLink];

        render($data);
    }
}

Very good:

It is better to use a compact version of the code.

function showList(array $employees): void
{
    foreach ($employees as $employee) {
        render([$employee->calculateExpectedSalary(), $employee->getExperience(), $employee->getGithubLink()]);
    }
}

⬆ back to top


Author: Jupeter
Source Code: https://github.com/jupeter/clean-code-php 
License: MIT License

#php #code #clean 


Buddy.js: Magic number detection for JavaScript

Magic number detection for JavaScript. Let Buddy sniff out the unnamed numerical constants in your code.

Overview

We all know magic numbers are frowned upon as a programming practice. They may give no indication of their meaning, and when used multiple times, can result in future inconsistencies. They can expose you to the risk of typos, hinder maintenance and have an impact on readability. That's where Buddy comes in.

Buddy is a CLI tool that's eager to find the magic numbers in your code. It accepts a list of paths to parse, and renders any found instances with the selected reporter. In the case of directories, they're walked recursively, and only .js files are analyzed. Any node_modules dirs are also ignored.

Since const is not widespread in JavaScript, it defaults to searching for numbers which are not the sole literal in an object expression or variable declaration. Furthermore, specific values can be ignored, such as 0 and 1, which are ignored by default.
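
To make this concrete, here's a hypothetical sketch of what the default settings flag and skip (the variable names are illustrative):

// skipped by default: 60 is the sole literal in a variable declaration
var SECONDS_PER_MINUTE = 60;

// flagged: 3600 appears inside an expression, not as a sole literal
var secondsPerDay = 3600 * 24;

// skipped by default: 0 and 1 are on the default ignore list
var first = items[0];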

Who's a good boy?

What are magic numbers?

Magic numbers are unnamed numerical constants, though the term can sometimes be used to refer to other literals as well. Take the following contrived example:

function getTotal(subtotal) {
  var beforeTax = subtotal + 9.99;
  return beforeTax + (beforeTax * 0.13);
}

In the above function, the meaning of the two numbers might not be clear. What is this 9.99 charge? In our case, let's say it's a shipping rate. And what about the 0.13? It's the sales tax. Buddy will highlight those two instances:

$ buddy example.js

example.js:2 | var beforeTax = subtotal + 9.99;
example.js:3 | return beforeTax + (beforeTax * 0.13);

 2 magic numbers found across 1 file

If the tax rate were used in multiple locations, it would be prone to human error. And it might not be immediately clear that the 9.99 charge is a flat-rate shipping cost, which can affect maintenance. So how would this be improved?

var FLAT_SHIPPING_COST = 9.99;
var SALES_TAX = 0.13;

function getTotal(subtotal) {
  var beforeTax = subtotal + FLAT_SHIPPING_COST;
  return beforeTax + (beforeTax * SALES_TAX);
}

Or, depending on your target platforms or browsers, by using the const keyword for variable declaration instead of var. While const is available in Node, you should take note of its browser compatibility for front end JavaScript.

$ buddy example.js

 No magic numbers found across 1 file
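
If your target platforms allow it, the -e/--enforce-const flag listed under Usage below goes one step further and requires such constants to be declared with const rather than var:

const FLAT_SHIPPING_COST = 9.99;
const SALES_TAX = 0.13;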

Installation

It can be installed via npm using:

npm install -g buddy.js

Also available: grunt-buddyjs, and gulp-buddy.js

Usage

Usage: buddy [options] <paths ...>

Options:

  -h, --help                             output usage information
  -V, --version                          output the version number
  -d, --detect-objects                   detect object expressions and properties
  -e, --enforce-const                    require literals to be defined using const
  -i, --ignore <numbers>                 list numbers to ignore (default: 0,1)
  -I, --disable-ignore                   disables the ignore list
  -r, --reporter [simple|detailed|json]  specify reporter to use (default: simple)
  -C, --no-color                         disables colors

If a .buddyrc file is located in the project directory, its values will be used in place of the defaults listed above. For example:

{
  "detectObjects": false,
  "enforceConst":  false,
  "ignore":        [0, 1, 2], // Use empty array to disable ignore
  "reporter":      "detailed"
}

Integration

You can easily run Buddy on your library source as part of your build. It will exit with a status code of 0 when no magic numbers are found. Otherwise it exits with a positive code, resulting in a failing build. For example, with Travis CI, you can add the following two entries to your .travis.yml:

before_script:
  - "npm install -g buddy.js"

script:
  - "buddy ./path/to/src"

Reporters

For additional context, try using the detailed reporter. Or, for logging output and integration with your quality assurance process, the json reporter can be used.
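
For example, based on the --reporter option listed under Usage above, a reporter can be selected like so:

$ buddy --reporter detailed example.js
$ buddy --reporter json example.js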

Ignoring numbers

A magic number can be ignored in any of three ways:

  1. Its value is ignored using the --ignore flag
  2. The line includes the following comment buddy ignore:line
  3. The line is located between a buddy ignore:start and buddy ignore:end

Given the following example, two magic numbers exist that could be ignored:

var SECOND = 1000;
var MINUTE = 60 * SECOND;
var HOUR = 60 * MINUTE;

Using the command line option, you can run buddy with: buddy example.js --ignore 60. Or, if preferred, you can specify that the instances be ignored on a case-by-case basis:

var SECOND = 1000;
var MINUTE = 60 * SECOND; // buddy ignore:line
var HOUR = 60 * MINUTE; // buddy ignore:line

Or better yet, you can make use of directives to ignore all magic numbers within a range:

// buddy ignore:start
var SECOND = 1000;
var MINUTE = 60 * SECOND;
var HOUR = 60 * MINUTE;
// buddy ignore:end

Author: Danielstjules
Source Code: https://github.com/danielstjules/buddy.js 
License: MIT License

#javascript #clean 


jsinspect: Detect copy-pasted and structurally similar code

Detect copy-pasted and structurally similar JavaScript code. Requires Node.js 6.0+, and supports ES6, JSX, and Flow. Note: the project was mostly rewritten for the 0.10 release and saw several breaking changes.

Overview

We've all had to deal with code smell, and duplicate code is a common source. While some instances are easy to spot, this type of searching is the perfect use-case for a helpful CLI tool.

Solutions for this purpose do exist, but some struggle with code that has wildly varying identifiers or literals, and others have lackluster support for the JS ecosystem: ES6, JSX, Flow, ignoring module declarations and imports, etc.

And copy-pasted code is but one type of code duplication. Common boilerplate and repeated logic can be identified as well using jsinspect, since it doesn't operate directly on tokens - it uses the ASTs of the parsed code.

You have the freedom to specify a threshold determining the smallest subset of nodes to analyze. This will identify code with a similar structure, based on the AST node types, e.g. BlockStatement, VariableDeclaration, ObjectExpression, etc. By default, it searches nodes with matching identifiers and literals for copy-paste oriented detection, but this can be disabled. For context, identifiers include the names of variables, methods, properties, etc, while literals are strings, numbers, etc.
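
For instance, the two contrived functions below differ only in their identifiers and literals; run with -I and -L (and a suitably low threshold), jsinspect would consider them structurally similar, since their ASTs consist of the same node types:

function getUserTag(user) {
  return user.name + '#' + user.id;
}

function getItemLabel(item) {
  return item.title + ':' + item.sku;
}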

The tool accepts a list of paths to parse and prints any found matches. Any directories among the paths are walked recursively, and only .js and .jsx files are analyzed. You can explicitly pass file paths that include a different extension as well. Any node_modules and bower_components dirs are also ignored.


Installation

It can be installed via npm using:

npm install -g jsinspect

Usage

Usage: jsinspect [options] <paths ...>


Detect copy-pasted and structurally similar JavaScript code
Example use: jsinspect -I -L -t 20 --ignore "test" ./path/to/src


Options:

  -h, --help                         output usage information
  -V, --version                      output the version number
  -t, --threshold <number>           number of nodes (default: 30)
  -m, --min-instances <number>       min instances for a match (default: 2)
  -c, --config [config]              path to config file (default: .jsinspectrc)
  -r, --reporter [default|json|pmd]  specify the reporter to use
  -I, --no-identifiers               do not match identifiers
  -L, --no-literals                  do not match literals
  -C, --no-color                     disable colors
  --ignore <pattern>                 ignore paths matching a regex
  --truncate <number>                length to truncate lines (default: 100, off: 0)
  --debug                            print debug information

If a .jsinspectrc file is located in the project directory, its values will be used in place of the defaults listed above. For example:

{
  "threshold":     30,
  "identifiers":   true,
  "literals":      true,
  "color":         true,
  "minInstances":  2,
  "ignore":        "test|spec|mock",
  "reporter":      "json",
  "truncate":      100,
}

On first use with a project, you may want to run the tool with the following options, while running explicitly on the lib/src directories, and not the test/spec dir.

jsinspect -t 50 --ignore "test" ./path/to/src

From there, feel free to try decreasing the threshold, ignoring identifiers using the -I flag and ignoring literals with -L. A lower threshold may lead you to discover new areas of interest for refactoring or cleanup.

Integration

It's simple to run jsinspect on your library source as part of a build process. It will exit with an error code of 0 when no matches are found, resulting in a passing step, and a positive error code corresponding to its failure. For example, with Travis CI, you could add the following entries to your .travis.yml:

before_script:
  - "npm install -g jsinspect"

script:
  - "jsinspect ./path/to/src"

Note that in the above example, we're using a threshold of 30 for detecting structurally similar code. A higher threshold may be appropriate as well.

To have jsinspect run with each job, but not block or fail the build, you can use something like the following:

script:
  - "jsinspect ./path/to/src || true"

Reporters

Aside from the default reporter, both JSON and PMD CPD-style XML reporters are available. Note that in the JSON example below, indentation and formatting have been applied. Furthermore, the id property available in these reporters is useful for automated scripts to determine whether duplicate code has changed between builds.

JSON

[{
  "id":"6ceb36d5891732db3835c4954d48d1b90368a475",
  "instances":[
    {
      "path":"spec/fixtures/intersection.js",
      "lines":[1,5],
      "code":"function intersectionA(array1, array2) {\n  array1.filter(function(n) {\n    return array2.indexOf(n) != -1;\n  });\n}"
    },
    {
      "path":"spec/fixtures/intersection.js",
      "lines":[7,11],
      "code":"function intersectionB(arrayA, arrayB) {\n  arrayA.filter(function(n) {\n    return arrayB.indexOf(n) != -1;\n  });\n}"
    }
  ]
}]

PMD CPD XML

<?xml version="1.0" encoding="utf-8"?>
<pmd-cpd>
<duplication lines="10" id="6ceb36d5891732db3835c4954d48d1b90368a475">
<file path="/jsinspect/spec/fixtures/intersection.js" line="1"/>
<file path="/jsinspect/spec/fixtures/intersection.js" line="7"/>
<codefragment>
spec/fixtures/intersection.js:1,5
function intersectionA(array1, array2) {
  array1.filter(function(n) {
    return array2.indexOf(n) != -1;
  });
}

spec/fixtures/intersection.js:7,11
function intersectionB(arrayA, arrayB) {
  arrayA.filter(function(n) {
    return arrayB.indexOf(n) != -1;
  });
}
</codefragment>
</duplication>
</pmd-cpd>

Author: Danielstjules
Source Code: https://github.com/danielstjules/jsinspect 
License: MIT License

#javascript #refactoring #clean 


A Utility Package and Code Generator Designed to Make Clean Flutter

Neat

Welcome to Neat, a utility package that helps make clean Flutter code.

The Flutter framework is very verbose, and widget trees quickly become difficult to read. The Neat package is designed to make your code more expressive, shorter, and easier to understand by introducing convenient solutions for common patterns, without requiring any additional work on your part.

Currently, Neat provides 4 types of helpers and widgets:

  • Text helpers that help you create headlines/subtitles/body texts
  • Theme accessors for easy access to the theme's data
  • Space widgets: blank-space widgets, generated from your own data, that inherit from SizedBox
  • Padding helpers, for easily creating paddings thanks to helpers generated from your own data

Take this example, without Neat:

import 'dimensions.dart'; //you declare constants in this file

Text(
  "Flutter is awesome...",
  style: Theme.of(context).textTheme.headline1,
),
const SizedBox(
  height: Dimensions.spaceSmall,
),
Container(
  color: Theme.of(context).colorScheme.surface,
  padding: EdgeInsets.only(
    left: Dimensions.paddingSmall,
    right: Dimensions.paddingSmall,
    bottom: Dimensions.paddingSmall,
  ),
  child: Text(
    "...but its a little bit verbose",
    style: Theme.of(context).textTheme.bodyText1?.copyWith(
      color: Theme.of(context).colorScheme.primary,
    ),
  ),
),

With Neat you could write:

import 'package:neat/neat.dart';
import 'dimensions.dart';

context.headline1("Neat make your life easier"),
const SpaceSmall(),
Container(
  color: context.colorScheme.surface,
  padding: PaddingSmall(left | right | bottom),
  child: context.bodyText1(
    "you can override the style",
    style: TextStyle(color: context.colorScheme.primary),
  ),
),

Pretty neat, isn't it?

How to use

Installing

Install Neat by running the following command:

flutter pub add neat

If you want to use Neat's code generator, you will need the neat_generator package and a typical build_runner/code-generator setup. Run the following commands to add the neat_generator and build_runner packages to your dev dependencies:

flutter pub add neat_generator --dev
flutter pub add build_runner --dev

These commands will add the following dependencies to your pubspec.yaml file:

# pubspec.yaml
dependencies:
  neat:

dev_dependencies:
  neat_generator:
  build_runner:

Features

Neat has two distinct parts:

neat package, a collection of helpers and widgets that you can import with import 'package:neat/neat.dart';.

neat_generator package, a code generator that you should install as a dev dependency and run using build_runner package.

Text helpers

In most Flutter applications, you define TextStyles in your material ThemeData; then, to create for example a Headline1, you do the following:

Text(
    "Headline1",
    style: Theme.of(context).textTheme.headline1,
),

If you want to override some properties of the style, it gets even worse:

Text(
  "Headline1",
  style: Theme.of(context)
      .textTheme
      .headline1
      ?.copyWith(color: Colors.red),
),

Neat introduces a collection of extensions on BuildContext that simplify the creation of headlines and the other text types defined in the material textTheme:

import 'package:neat/neat.dart';

// It's that simple !
context.headline1("Headline1")

// You can override the style
context.headline1(
    "Headline1",
    style: TextStyle(color: Colors.gold),
),

Every text type is available:

context.headline1("Headline1"),
context.headline2("Headline2"),
context.headline3("Headline3"),
context.headline4("Headline4"),
context.headline5("Headline5"),
context.headline6("Headline6"),
context.subtitle("Subtitle"),
context.bodyText("BodyText"),
context.caption("caption"),
context.overline("overline"),
//textTheme.button has been renamed to avoid confusions
context.buttonText("button"),

If you specify a TextStyle, it will be merged with the corresponding base style found in your textTheme, like this:

Theme.of(context).textTheme.{textType}?.merge(style),

All text extensions have the same properties as the regular Text widget:

context.headline1(
    String text, {
    Key? key,
    TextStyle? style,
    StrutStyle? strutStyle,
    TextAlign? textAlign,
    TextDirection? textDirection,
    Locale? locale,
    bool? softWrap,
    TextOverflow? overflow,
    double? textScaleFactor,
    int? maxLines,
    String? semanticsLabel,
    TextWidthBasis? textWidthBasis,
    TextHeightBehavior? textHeightBehavior,
    FontWeight? weight,
  }),

Theme accessors

Without Neat, you access ThemeData, textTheme and colorScheme in the following way:

Theme.of(context);
Theme.of(context).textTheme;
Theme.of(context).colorScheme;

Neat introduces an alternative way to access your theme:

import 'package:neat/neat.dart';

context.theme;
context.textTheme;
context.colorScheme;

Neat Generator

When you want to keep your spacing and padding consistent across your app, you often end up with a Dimensions class holding all your values in one place, like this:

class Dimensions {
    static const double paddingSmall = 8;
    static const double paddingMedium = 13;
    static const double paddingBig = 21;

    static const double spaceSmall = 13;
    static const double spaceMedium = 21;
    static const double spaceBig = 34;
}

Then you use these values in your app:

const SizedBox(height: Dimensions.spaceSmall),
Padding(
    padding: const EdgeInsets.only(
      top: Dimensions.paddingMedium, 
      left: Dimensions.paddingMedium,
    ),
    child: ...,
),

Neat helps you push it further by generating specialized helpers and widgets based on your data class, without wasting time coding it yourself:

//Generated Space widget, inherits from the SizedBox class
const SpaceSmall(),

Padding(
  //Generated Padding helper, inherits from the EdgeInsets class
  padding: PaddingMedium(top | left),
  child: ...,
)

The generator is flexible and lets you use multiple data source classes, configure how widget names are generated, filter which fields to include in or exclude from generation, etc. See the generator options for advanced usage.

Basic usage

Create a Dimensions class and annotate it with @Neat.generate. The Neat generator will generate one Space widget for each field that starts with "space" and one padding helper for each field that starts with "padding".

You're not obliged to use only one data class; there are annotations to generate space widgets and padding helpers separately. You can also change the field prefix and the generated class base name if you need to. More details about advanced configuration here.

dimensions.dart

//import generator's annotations
import 'package:neat/generator.dart';

part 'dimensions.nt.dart';

@Neat.generate
class Dimensions {
  static const double spaceSmall = 21;
  static const double spaceMedium = 34;
  static const double spaceBig = 55;
  static const double spaceAnyNameYouWant = 42;

  static const double paddingSmall = 21;
  static const double paddingMedium = 34;
  static const double paddingBig = 55;
  static const double paddingAnyNumberYouWant = 42;
}

Note that, like most code generators, Neat will need you to both import the annotation (meta) and use the part keyword at the top of your files.

As such, a file that wants to use Neat's code generator will start with:

import 'package:neat/generator.dart';
import 'package:flutter/widgets.dart';

part 'my_file.nt.dart';

Make sure you have installed the build_runner package, then run the generator with the command flutter pub run build_runner build. I recommend using the --delete-conflicting-outputs option to avoid problems during builds.
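
Putting those two pieces together, the full command looks like this:

flutter pub run build_runner build --delete-conflicting-outputs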

Based on your class, Neat will generate the following helpers/widgets:

SpaceSmall              // 21
SpaceMedium             // 34
SpaceBig                // 55
SpaceAnyNameYouWant     // 42

PaddingSmall            // 21
PaddingMedium           // 34
PaddingBig              // 55
PaddingAnyNumberYouWant // 42

Ignore lint warnings on generated files

Depending on your lint options, Neat Generator may cause your linter to report warnings.

The solution to this problem is to tell the linter to ignore generated files, by modifying your analysis_options.yaml:

analyzer:
  exclude:
    - "**/*.nt.dart"

Space Widgets

A Space widget represents a blank space in your app. This widget inherits from SizedBox and defines constructors with pre-filled height and width values, based on the data in your value class. By default, space widgets are generated from the static const double fields of a class annotated with @Neat.generate whose names start with "space". The generator names widgets according to their corresponding fieldName. There are a few class annotations for different use cases, and you can customize the generator's behavior by adding parameters to the annotation. More details here.

Generated constructors

const SpaceX();   //SizedBox(width: X, height: X);
const SpaceX.h(); //SizedBox(width: 0, height: X);
const SpaceX.w(); //SizedBox(width: X, height: 0);

Example

dimensions.dart

import 'package:neat/generator.dart';

part 'dimensions.nt.dart';

@Neat.generate
class Dimensions {
  static const double spaceSmall = 21;
  static const double spaceMedium = 34;
  static const double spaceBig = 55;
}

main.dart

import 'dimensions.dart';

const SpaceSmall();     //SizedBox(height: 21,  width: 21);
const SpaceMedium.w();  //SizedBox(height: 0,   width: 34);
const SpaceBig.h();     //SizedBox(height: 55,  width: 0);

Padding helpers

Padding helpers inherit from the EdgeInsets class and define new constructors with pre-filled values based on the data in your value class. By default, helpers are generated from the static const double fields of a class annotated with @Neat.generate whose names start with "padding". The generator names classes according to their corresponding fieldName. There are a few class annotations for different use cases, and you can customize the generator's behavior by adding parameters to the annotation. More details here.

Generated constructors

const PaddingX(top | left | right | bottom);    //EdgeInsets.only(top: X, left: X, right: X, bottom: X); can be disabled
const PaddingX.all();                           //EdgeInsets.all(X);
const PaddingX.horizontal();                    //EdgeInsets.symmetric(horizontal: X);
const PaddingX.vertical();                      //EdgeInsets.symmetric(vertical: X);
const PaddingX.only(                            //EdgeInsets.only(
  top: true,                                    //  top: X,
  left: true,                                   //  left: X,
  right: true,                                  //  right: X,
  bottom: true,                                 //  bottom: X,
);                                              //);

Note that the binary flag constructor PaddingX(top | left | right | bottom) uses generated constants (top, left, right, bottom) to work. If it causes conflicts in your code, you can disable it by using generateBinaryFlagConstructor = false. More details about the generator configuration here.

Example

dimensions.dart

import 'package:neat/generator.dart';

part 'dimensions.nt.dart';

@Neat.generate
class Paddings {
  static const double padding1 = 21;
  static const double padding2 = 13;
  static const double padding3 = 8;
  static const double padding4 = 5;
  static const double padding5 = 3;
}

main.dart

import 'dimensions.dart';

Padding(padding: Padding1.all());                       //EdgeInsets.all(21)
Padding(padding: Padding2.horizontal());                //EdgeInsets.symmetric(horizontal: 13)
Padding(padding: Padding3.vertical());                  //EdgeInsets.symmetric(vertical: 8)
Padding(padding: Padding4(top | left));                 //EdgeInsets.only(top: 5, left: 5)
Padding(padding: Padding5.only(top: true, left: true)); //EdgeInsets.only(top: 3, left: 3)

Contributions

Want to contribute? I'm happy to discuss what feature to add next!

I've published this package recently; help in any of the following areas is appreciated:

  • Improving the README: English is not my native language, so PRs to improve the quality of the readme are welcome!
  • Test coverage: adding some tests, especially for the fieldFilter / widgetNameExtractor, is a top priority.
  • Improving the code generator configuration: make the code generators more flexible by adding more generation options. The parser is currently pretty basic and probably needs to be refactored.

If you like Neat, don't forget to leave a ⭐️ on the repo and share the package!

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add neat

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  neat: ^0.1.1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:neat/neat.dart'; 

Download Details:

Author: Pierre2tm

Source Code: https://github.com/Pierre2tm/neat

#flutter #clean #code 


libOBS (OBS Studio) Bindings for Node.js and Electron

libobs via node bindings

This library intends to provide bindings to obs-studio's internal library, accordingly named libobs, for the purpose of using it from a Node runtime. Currently, only Windows is supported.

Why CMake?

CMake offers better compatibility with existing projects than node-gyp and comparable solutions. It's also capable of generating solution files for multiple different IDEs and compilers, which makes it ideal for a native module. Personally, I don't like gyp syntax, the build system surrounding it, or the fact that it requires you to install Python.

Building

Prerequisites

You will need to have the following installed:

Windows

Building on windows requires additional software:

Example Build

We use a flexible cmake script to be as broad and generic as possible in order to prevent the need to constantly manage the cmake script for custom uses, while also providing sane defaults. It follows a pretty standard cmake layout and you may execute it however you want.

Example:

yarn install
git submodule update --init --recursive
mkdir build
cd build
cmake .. -G"Visual Studio 15 2017" -A x64
cmake --build .
cpack -G ZIP

This will download any required dependencies, build the module, and then place it in an archive compatible with npm or yarn that you may specify in a given package.json.

Custom OBS Build

By default, we download a pre-built version of libobs if none is specified. However, this pre-built version may not be what you want to use or maybe you're testing a new obs feature.

You may specify a custom archive of your own. However, some changes need to be made to obs-studio's default configuration before building:

  • ENABLE_SCRIPTING must be set to false
  • ENABLE_UI must be set to false
  • QTDIR should not be specified.

If you don't know how to build obs-studio from source, you may find instructions here.

Example (from root of obs-studio repository clone):

mkdir build
cd build
cmake .. -DENABLE_UI=false -DDepsPath="C:\Users\computerquip\Projects\libobs-deps\win64" -DENABLE_SCRIPTING=false -G"Visual Studio 15 2017" -A x64
cmake --build .
cpack -G ZIP

This will create an archive that's compatible with obs-studio-node. The destination of the archive will appear after cpack is finished executing.

Example:

CPack: Create package using ZIP

CPack: Install projects

CPack: - Install project: obs-studio

CPack: Create package

CPack: - package: C:/Users/computerquip/Projects/obs-studio/build/obs-studio-x64-22.0.3-sl-7-13-g208cb2f5.zip generated.

This archive may then be specified as a cmake variable when building obs-studio-node like so:

cmake .. -G"Visual Studio 15 2017" -A x64 -DOSN_LIBOBS_URL="C:/Users/computerquip/Projects/obs-studio/build/obs-studio-x64-22.0.3-sl-7-13-g208cb2f5.zip"
cmake --build .
cpack -G ZIP

Further Building

I don't specify every possible combination of variables. Here's a list of actively maintained variables that control how obs-studio-node is built:

  • All configurable node-cmake variables found here.
  • OSN_LIBOBS_URL - Controls where to fetch the libobs archive. May be a directory, any compressed archive that cpack supports, or a URI of various types including FTP or HTTP/S.

If you find yourself unable to configure something about our build script or have any questions, please file a github issue!

The EXTENDED_DEBUG_LOG define controls logging of IPC requests.

Static code analysis

cppcheck

Install cppcheck from http://cppcheck.sourceforge.net/ and add the cppcheck folder to PATH. To run a check from the console:

cd build 
cmake --build . --target CPPCHECK

The target can also be built from Visual Studio. The report output format is set to be compatible, and navigation to file:line is possible from the build results panel.

Some warnings are suppressed in the files obs-studio-client/cppcheck_suppressions_list.txt and obs-studio-server/cppcheck_suppressions_list.txt.

Clang Analyzer

Ninja and LLVM have to be installed on the system. Warning: the ninja shipped with depot_tools is broken.
To make the build, open cmd.exe.

mkdir build_clang
cd build_clang

"c:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat"
 
set CCC_CC=clang-cl
set CCC_CXX=clang-cl
set CC=ccc-analyzer.bat
set CXX=c++-analyzer.bat
#set CCC_ANALYZER_VERBOSE=1

#make ninja project 
cmake  -G "Ninja" -DCLANG_ANALYZE_CONFIG=1 -DCMAKE_INSTALL_PREFIX:PATH=""  -DCMAKE_LINKER=lld-link -DCMAKE_BUILD_TYPE="Debug"   -DCMAKE_SYSTEM_NAME="Generic" -DCMAKE_MAKE_PROGRAM=ninja.exe ..

#try to build and "fix" errors 
ninja.exe 

#clean build to scan 
ninja.exe clean 

scan-build --keep-empty -internal-stats -stats -v -v -v -o check ninja.exe

Step with "fixing" errors is important as code base and especially third-party code are not ready to be build with clang. And files which failed to compile will not be scanned for errors.

Tests

The tests for obs-studio-node are written in TypeScript and use Mocha as the test framework, with the electron-mocha package to make Mocha run in Electron, and Chai as the assertion framework.

You need to build obs-studio-node in order to run the tests. You can build it any way you want, just be sure to use CMAKE_INSTALL_PREFIX to install obs-studio-node in a folder of your choosing. The tests use this variable to know where the obs-studio-node module is. Since we use our own fork of Electron, you also need to create an environment variable called ELECTRON_PATH pointing to where the Electron binary is in the node_modules folder after you run yarn install. Below are three different ways to build obs-studio-node:
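
For example, on Windows the variable might be set like this; the exact path is an assumption and depends on where the Electron fork's binary ends up inside node_modules:

set ELECTRON_PATH=%CD%\node_modules\electron\dist\electron.exe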

Terminal commands

In obs-studio-node root folder:

  1. yarn install
  2. git submodule update --init --recursive --force
  3. mkdir build
  4. cmake -Bbuild -H. -G"Visual Studio 15 2017" -A x64 -DCMAKE_INSTALL_PREFIX="path_of_your_choosing"
  5. cmake --build build --target install

Terminal using package.json scripts

In obs-studio-node root folder:

  1. mkdir build
  2. yarn local:config
  3. yarn local:build
  4. Optional: to clean the build folder and repeat steps 2 to 3 again, do yarn local:clean

CMake GUI

  1. yarn install
  2. Create a build folder in obs-studio-node root
  3. Open CMake GUI
  4. Put obs-studio-node project path in Where is the source code: box
  5. Put path to build folder in Where to build the binaries: box
  6. Click Configure
  7. Change CMAKE_INSTALL_PREFIX to a folder path of your choosing
  8. Click Generate
  9. Click Open Project to open Visual Studio and build the project there

Running tests

Some tests interact with Twitch. We use a user pool service to get users, but in case we are not able to fetch a user from it, we fall back to the stream key provided by an environment variable. Create an environment variable called SLOBS_BE_STREAMKEY with the stream key of a Twitch account of your choosing.
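
For example, in a Windows terminal session (the key value is a placeholder):

set SLOBS_BE_STREAMKEY=your_twitch_stream_key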

  • To run all the tests, do yarn run test
  • To run only one test, do yarn run test --grep describe_name_value, where describe_name_value is the name of the test passed to the describe call in each test file. Example: yarn run test --grep nodeobs_api

Download Details:
Author: stream-labs
Official Website: https://github.com/stream-labs/obs-studio-node 
License: GPL-2.0 

#node #nodejs #electron


Gold Trading App UI - Speed Code - FLUTTER | Simple | Clean | UI

What is up, coders!

Today we'll be converting another Dribbble inspiration into a real coded app. Today's inspiration is a Gold Trading App UI by Ghani Pradita. Thank you so much for creating such an awesome #UI. This UI contains two screens grouped in a bottom navigation bar. The second screen is what excited me to make this UI: it contains a beautiful Bezier chart which can be easily implemented using a Flutter package. Check out the video to see how easy it is to make such beautiful charts in Flutter.

#ui #clean #app ui #speed code #flutter
