Enos Prosacco


Optional Chaining and Nullish Coalescing

When we write applications, calling properties or methods on a non-existent object can have fatal consequences for our program. This usually results in a TypeError that crashes our code. We may not always expect this to happen, because we are not always in control of the data we are working with.
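
A quick sketch (with made-up data) of how optional chaining (?.) and nullish coalescing (??) guard against this:

const response = { user: { name: "Ada" } };            // illustrative data

// response.user.profile.address.city would throw a TypeError,
// because `profile` does not exist.
const city = response.user.profile?.address?.city;     // undefined instead of a crash
const displayCity = city ?? "Unknown";                 // fallback only for null/undefined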

#nullish-coalescing #es2020 #optional-chaining #javascript


How to Create the Custom Radio Buttons using only HTML & CSS

In this guide you’ll learn how to create custom radio buttons using only HTML & CSS.

To create the custom radio buttons using only HTML & CSS, you first need to create two files: one HTML file and one CSS file.

1: First, create an HTML file named index.html

<!DOCTYPE html>
<html lang="en" dir="ltr">
  <head>
    <meta charset="utf-8">
    <title>Custom Radio Buttons | Codequs</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <div class="wrapper">
      <input type="radio" name="select" id="option-1" checked>
      <input type="radio" name="select" id="option-2">
      <label for="option-1" class="option option-1">
        <div class="dot"></div>
        <span>Option 1</span> <!-- label text is a placeholder; use your own -->
      </label>
      <label for="option-2" class="option option-2">
        <div class="dot"></div>
        <span>Option 2</span> <!-- label text is a placeholder; use your own -->
      </label>
    </div>
  </body>
</html>


2: Second, create a CSS file named style.css


@import url('https://fonts.googleapis.com/css?family=Poppins:400,500,600,700&display=swap');
*{
  margin: 0;
  padding: 0;
  box-sizing: border-box;
  font-family: 'Poppins', sans-serif;
}
html,body{
  display: grid;
  height: 100%;
  place-items: center;
  background: #0069d9;
}
.wrapper{
  display: inline-flex;
  background: #fff;
  height: 100px;
  width: 400px;
  align-items: center;
  justify-content: space-evenly;
  border-radius: 5px;
  padding: 20px 15px;
  box-shadow: 5px 5px 30px rgba(0,0,0,0.2);
}
.wrapper .option{
  background: #fff;
  height: 100%;
  width: 100%;
  display: flex;
  align-items: center;
  justify-content: space-evenly;
  margin: 0 10px;
  border-radius: 5px;
  cursor: pointer;
  padding: 0 10px;
  border: 2px solid lightgrey;
  transition: all 0.3s ease;
}
.wrapper .option .dot{
  height: 20px;
  width: 20px;
  background: #d9d9d9;
  border-radius: 50%;
  position: relative;
}
.wrapper .option .dot::before{
  position: absolute;
  content: "";
  top: 4px;
  left: 4px;
  width: 12px;
  height: 12px;
  background: #0069d9;
  border-radius: 50%;
  opacity: 0;
  transform: scale(1.5);
  transition: all 0.3s ease;
}
/* hide the native radio inputs; the styled labels act as the visible controls */
input[type="radio"]{
  display: none;
}
#option-1:checked ~ .option-1,
#option-2:checked ~ .option-2{
  border-color: #0069d9;
  background: #0069d9;
}
#option-1:checked ~ .option-1 .dot,
#option-2:checked ~ .option-2 .dot{
  background: #fff;
}
#option-1:checked ~ .option-1 .dot::before,
#option-2:checked ~ .option-2 .dot::before{
  opacity: 1;
  transform: scale(1);
}
.wrapper .option span{
  font-size: 20px;
  color: #808080;
}
#option-1:checked ~ .option-1 span,
#option-2:checked ~ .option-2 span{
  color: #fff;
}

Now you’ve successfully created Custom Radio Buttons using only HTML & CSS.

Rate Limit Auto-configure for Spring Cloud Netflix Zuul


Module to enable rate limit per service in Netflix Zuul.

There are several built-in rate limit approaches:

  • Authenticated User
    • Uses the authenticated username or 'anonymous'
  • Request Origin
    • Uses the origin of the user request
  • URL
    • Uses the request path of the downstream service
  • URL Pattern
    • Uses the request Ant path pattern to the downstream service
  • ROLE
    • Uses the authenticated user roles
  • Request method
    • Uses the HTTP request method
  • Request header
    • Uses the HTTP request header
  • Global configuration per service:
    • This one does not validate the request Origin, Authenticated User or URI
    • To use this approach just don’t set param 'type'
Note: It is possible to combine Authenticated User, Request Origin, URL, ROLE and Request Method just by adding multiple values to the list


Note: Latest version: Maven Central
Note: If you are using Spring Boot version 1.5.x you MUST use Spring Cloud Zuul RateLimit version 1.7.x. Please take a look at Maven Central and pick the latest artifact in this version line.

Add the dependency to your pom.xml:
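
A minimal sketch of that dependency, assuming the artifact coordinates the project publishes on Maven Central (pick the version there):

<dependency>
    <groupId>com.marcosbarbero.cloud</groupId>
    <artifactId>spring-cloud-zuul-ratelimit</artifactId>
    <version><!-- latest version from Maven Central --></version>
</dependency>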


Add the following dependency according to the chosen data storage:





Spring Data JPA


This implementation also requires a database table; below you can find a sample script:

CREATE TABLE rate ( -- table name is an assumption; match it to your JPA mapping
  rate_key VARCHAR(255) NOT NULL,
  remaining BIGINT,
  remaining_quota BIGINT,
  reset BIGINT,
  expiration TIMESTAMP,
  PRIMARY KEY(rate_key)
);

Bucket4j JCache


Bucket4j Hazelcast (depends on Bucket4j JCache)


Bucket4j Infinispan (depends on Bucket4j JCache)


Bucket4j Ignite (depends on Bucket4j JCache)


Sample YAML configuration

zuul:
  ratelimit:
    key-prefix: your-prefix
    enabled: true
    repository: REDIS
    behind-proxy: true
    add-response-headers: true
    deny-request:
      response-status-code: 404 #default value is 403 (FORBIDDEN)
      origins:
        - somedomain.com
    default-policy-list: #optional - will apply unless specific policy exists
      - limit: 10 #optional - request number limit per refresh interval window
        quota: 1000 #optional - request time limit per refresh interval window (in seconds)
        refresh-interval: 60 #default value (in seconds)
        type: #optional
          - user
          - origin
          - url
          - http_method
    policy-list:
      myServiceId:
        - limit: 10 #optional - request number limit per refresh interval window
          quota: 1000 #optional - request time limit per refresh interval window (in seconds)
          refresh-interval: 60 #default value (in seconds)
          type: #optional
            - user
            - origin
            - url
        - type: #optional value for each type
            - user=anonymous
            - origin=somemachine.com
            - url=/api #url prefix
            - role=user
            - http_method=get #case insensitive
            - http_header=customHeader
        - type:
            - url_pattern=/api/*/payment

Sample Properties configuration




# Adding multiple rate limit types

# Adding the first rate limit policy to "myServiceId"

# Adding the second rate limit policy to "myServiceId"

Both 'quota' and 'refresh-interval' can be expressed with Spring Boot’s duration formats:

A regular long representation (using seconds as the default unit)

The standard ISO-8601 format used by java.time.Duration (e.g. PT30M means 30 minutes)

A more readable format where the value and the unit are coupled (e.g. 10s means 10 seconds)
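
For example, the same policy from the YAML sample above could express its durations in any of those forms (values are illustrative):

    default-policy-list:
      - limit: 10
        quota: PT30M          # ISO-8601 format: 30 minutes
        refresh-interval: 60  # plain long: 60 seconds (equivalently 60s)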

Available implementations

There are eight implementations provided:

Implementation        | Data Storage
SpringDataRateLimiter | Spring Data

Bucket4j implementations require the relevant bean with @Qualifier("RateLimit"):

JCache - javax.cache.Cache

Hazelcast - com.hazelcast.map.IMap

Ignite - org.apache.ignite.IgniteCache

Infinispan - org.infinispan.functional.ReadWriteMap

Common application properties

Property namespace: zuul.ratelimit

Property name       | Values                 | Default Value
default-policy-list | List of Policy         | -
policy-list         | Map of Lists of Policy | -
postFilterOrder     | int                    | FilterConstants.SEND_RESPONSE_FILTER_ORDER - 10

Deny Request properties

Property name        | Values                                                   | Default Value
origins              | list of origins to have the access denied                | -
response-status-code | the http status code to be returned on a denied request | 403 (FORBIDDEN)

Policy properties:

Property name | Values             | Default Value
limit         | number of requests | -
quota         | time of requests   | -

Further Customization

This section details how to add custom implementations.

Key Generator

If the application needs to control the key strategy beyond the options offered by the type property, it can do so by creating a custom RateLimitKeyGenerator bean[1] implementation, adding further qualifiers or something entirely different:

  @Bean
  public RateLimitKeyGenerator ratelimitKeyGenerator(RateLimitProperties properties, RateLimitUtils rateLimitUtils) {
      return new DefaultRateLimitKeyGenerator(properties, rateLimitUtils) {
          @Override
          public String key(HttpServletRequest request, Route route, RateLimitProperties.Policy policy) {
              return super.key(request, route, policy) + ":" + request.getMethod();
          }
      };
  }

Error Handling

This framework uses 3rd party applications to control the rate limit access, and these libraries are outside the control of this framework. If one of the 3rd party applications fails, the framework will handle this failure in the DefaultRateLimiterErrorHandler class, which will log the error upon failure.

If there is a need to handle the errors differently, it can be achieved by defining a custom RateLimiterErrorHandler bean[2], e.g.:

  @Bean
  public RateLimiterErrorHandler rateLimitErrorHandler() {
    return new DefaultRateLimiterErrorHandler() {
        @Override
        public void handleSaveError(String key, Exception e) {
            // custom code
        }

        @Override
        public void handleFetchError(String key, Exception e) {
            // custom code
        }

        @Override
        public void handleError(String msg, Exception e) {
            // custom code
        }
    };
  }

Event Handling

If the application needs to be notified when a rate limit is exceeded, it can do so by listening for the RateLimitExceededEvent:

    @EventListener
    public void observe(RateLimitExceededEvent event) {
        // custom code
    }


Spring Cloud Zuul Rate Limit is released under the non-restrictive Apache 2.0 license and follows a very standard GitHub development process, using the GitHub tracker for issues and merging pull requests into master. If you want to contribute even something trivial, please do not hesitate, but follow the project's contribution guidelines.

Download Details:
Author: marcosbarbero
Source Code: https://github.com/marcosbarbero/spring-cloud-zuul-ratelimit
License: Apache-2.0 License

#spring  #spring-boot  #java 


Terry Tremblay


null and ES2020: Optional Chaining and Null Coalescing

When Javascript developers think of ECMA, they reference the changes made in 2015 that improved syntax and added new features to the language. However, ECMA still updates the language! And earlier this year, they released ES11, with two of my new favorite tools: Optional Chaining and Null Coalescing!

A Brief History of null

null is a very interesting datatype. It is a primitive type, along with String, Number, Boolean, Symbol, and Undefined. (Side note: as of ES11, BigInt is now another primitive type!) Primitive types are immutable, while objects (including arrays), can be mutated.

However, null is the odd primitive out. When you use typeof on any datatype, you are given a reasonable and expected response: typeof 3 is a Number, typeof {foo: "bar"} is an Object, etc. Due to the way Javascript was developed, null is considered an Object when typeof is applied to it. Check out this brilliant article by Dr. Axel Rauschmayer for more detail!

null is a falsey value, is loosely equal to undefined (undefined == null // TRUE), and, most importantly, represents the intentional absence of any object value. When a value is not defined, it will always receive the value of undefined. When Javascript developers declare a variable but want it to be empty, they will set the value to null. null is never assigned by Javascript, but only by developers; it is a useful tool for giving a variable a value of null when it does not hold any meaningful data. Getting a value of null means that your variable is declared and does exist, automatically informing you that you don’t have to hunt down variable declaration, hoisting, or scope issues: the value is usable, just void of any information.
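
A quick illustration of these behaviors:

typeof null;          // "object" – a long-standing quirk of the language
typeof undefined;     // "undefined"

undefined == null;    // true  (loose equality)
undefined === null;   // false (strict equality)

let user = null;      // declared and usable, but intentionally empty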

[Image: Undefined versus Null — an empty toilet paper dispenser, and a dispenser holding an empty toilet paper tube.]

#javascript #ecmascript-2020 #null #nullish-coalescing #optional-chaining

Jamison Fisher


Datacompy: Pandas and Spark DataFrame Comparison for Humans


DataComPy is a package to compare two Pandas DataFrames. It originally started as something of a replacement for SAS's PROC COMPARE for Pandas DataFrames, with some more functionality than just Pandas.DataFrame.equals(Pandas.DataFrame) (in that it prints out some stats and lets you tweak how accurate matches have to be). It was then extended to carry that functionality over to Spark DataFrames.

Quick Installation

pip install datacompy

Pandas Detail

DataComPy will try to join two dataframes either on a list of join columns, or on indexes. If the two dataframes have duplicates based on join values, the match process sorts by the remaining fields and joins based on that row number.

Column-wise comparisons attempt to match values even when dtypes don't match. So if, for example, you have a column with decimal.Decimal values in one dataframe and an identically-named column with float64 dtype in another, it will tell you that the dtypes are different but will still try to compare the values.

Basic Usage

from io import StringIO
import pandas as pd
import datacompy

data1 = """acct_id,dollar_amt,name,float_fld,date_fld
10000001234,123.45,George Maharis,14530.1555,2017-01-01
10000001235,0.45,Michael Bluth,1,2017-01-01
10000001236,1345,George Bluth,,2017-01-01
10000001237,123456,Bob Loblaw,345.12,2017-01-01
10000001239,1.05,Lucille Bluth,,2017-01-01
"""

data2 = """acct_id,dollar_amt,name,float_fld
10000001234,123.4,George Michael Bluth,14530.155
10000001235,0.45,Michael Bluth,
10000001236,1345,George Bluth,1
10000001237,123456,Robert Loblaw,345.12
10000001238,1.05,Loose Seal Bluth,111
"""

df1 = pd.read_csv(StringIO(data1))
df2 = pd.read_csv(StringIO(data2))

compare = datacompy.Compare(
    df1,
    df2,
    join_columns='acct_id',  #You can also specify a list of columns
    abs_tol=0, #Optional, defaults to 0
    rel_tol=0, #Optional, defaults to 0
    df1_name='Original', #Optional, defaults to 'df1'
    df2_name='New' #Optional, defaults to 'df2'
)
compare.matches(ignore_extra_columns=False)
# False

# This method prints out a human-readable report summarizing and sampling differences
print(compare.report())

See docs for more detailed usage instructions and an example of the report output.

Things that are happening behind the scenes

  • You pass in two dataframes (df1, df2) to datacompy.Compare and a column to join on (or list of columns) to join_columns. By default the comparison needs to match values exactly, but you can pass in abs_tol and/or rel_tol to apply absolute and/or relative tolerances for numeric columns.
    • You can pass in on_index=True instead of join_columns to join on the index instead (see the sketch after this list).
  • The class validates that you passed dataframes, that they contain all of the columns in join_columns and have unique column names other than that. The class also lowercases all column names to disambiguate.
  • On initialization the class validates inputs, and runs the comparison.
  • Compare.matches() will return True if the dataframes match, False otherwise.
    • You can pass in ignore_extra_columns=True to not return False just because there are non-overlapping column names (will still check on overlapping columns)
    • NOTE: if you only want to validate whether a dataframe matches exactly or not, you should look at pandas.testing.assert_frame_equal. The main use case for datacompy is when you need to interpret the difference between two dataframes.
  • Compare also has some shortcuts like
    • intersect_rows, df1_unq_rows, df2_unq_rows for getting intersection, just df1 and just df2 records (DataFrames)
    • intersect_columns(), df1_unq_columns(), df2_unq_columns() for getting intersection, just df1 and just df2 columns (Sets)
  • You can turn on logging to see more detailed logs.
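
A minimal sketch of the on_index and ignore_extra_columns options described above, reusing the df1/df2 frames from the Basic Usage example:

# Join on the index rather than on named columns
index_compare = datacompy.Compare(df1, df2, on_index=True)

# Don't report a mismatch just because one frame has extra, non-overlapping columns
index_compare.matches(ignore_extra_columns=True)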

Spark Detail

DataComPy's SparkCompare class will join two dataframes on a list of join columns. It has the capability to map column names that may be different in each dataframe, including in the join columns. You are responsible for creating the dataframes from any source which Spark can handle and specifying a unique join key. If there are duplicates in either dataframe by join key, the match process will remove the duplicates before joining (and tell you how many duplicates were found).

As with the Pandas-based Compare class, comparisons will be attempted even if dtypes don't match. Any schema differences will be reported in the output as well as in any mismatch reports, so that you can assess whether or not a type mismatch is a problem.

The main reasons why you would choose to use SparkCompare over Compare are that your data is too large to fit into memory, or you're comparing data that works well in a Spark environment, like partitioned Parquet, CSV, or JSON files, or Cerebro tables.

Performance Implications

Spark scales incredibly well, so you can use SparkCompare to compare billions of rows of data, provided you spin up a big enough cluster. Still, joining billions of rows of data is an inherently large task, so there are a couple of things you may want to take into consideration when getting into the cliched realm of "big data":

  • SparkCompare will compare all columns in common in the dataframes and report on the rest. If there are columns in the data that you don't care to compare, use a select statement/method on the dataframe(s) to filter those out. Particularly when reading from wide Parquet files, this can make a huge difference when the columns you don't care about don't have to be read into memory and included in the joined dataframe.
  • For large datasets, adding cache_intermediates=True to the SparkCompare call can help optimize performance by caching certain intermediate dataframes in memory, like the de-duped version of each input dataset, or the joined dataframe. Otherwise, Spark's lazy evaluation will recompute those each time it needs the data in a report or as you access instance attributes. This may be fine for smaller dataframes, but will be costly for larger ones. You do need to ensure that you have enough free cache memory before you do this, so this parameter is set to False by default (see the short sketch after this list).
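
A short sketch of the cache_intermediates flag described above, using the same spark session and dataframes as the Basic Usage example below:

comparison = datacompy.SparkCompare(
    spark, base_df, compare_df,
    join_columns=['acct_id'],
    cache_intermediates=True,  # cache the de-duped and joined dataframes between uses
)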

Basic Usage

import datetime
import datacompy
from pyspark.sql import Row

# This example assumes you have a SparkSession named "spark" in your environment, as you
# do when running `pyspark` from the terminal or in a Databricks notebook (Spark v2.0 and higher)

data1 = [
    Row(acct_id=10000001234, dollar_amt=123.45, name='George Maharis', float_fld=14530.1555,
        date_fld=datetime.date(2017, 1, 1)),
    Row(acct_id=10000001235, dollar_amt=0.45, name='Michael Bluth', float_fld=1.0,
        date_fld=datetime.date(2017, 1, 1)),
    Row(acct_id=10000001236, dollar_amt=1345.0, name='George Bluth', float_fld=None,
        date_fld=datetime.date(2017, 1, 1)),
    Row(acct_id=10000001237, dollar_amt=123456.0, name='Bob Loblaw', float_fld=345.12,
        date_fld=datetime.date(2017, 1, 1)),
    Row(acct_id=10000001239, dollar_amt=1.05, name='Lucille Bluth', float_fld=None,
        date_fld=datetime.date(2017, 1, 1))
]

data2 = [
    Row(acct_id=10000001234, dollar_amt=123.4, name='George Michael Bluth', float_fld=14530.155),
    Row(acct_id=10000001235, dollar_amt=0.45, name='Michael Bluth', float_fld=None),
    Row(acct_id=10000001236, dollar_amt=1345.0, name='George Bluth', float_fld=1.0),
    Row(acct_id=10000001237, dollar_amt=123456.0, name='Robert Loblaw', float_fld=345.12),
    Row(acct_id=10000001238, dollar_amt=1.05, name='Loose Seal Bluth', float_fld=111.0)
]

base_df = spark.createDataFrame(data1)
compare_df = spark.createDataFrame(data2)

comparison = datacompy.SparkCompare(spark, base_df, compare_df, join_columns=['acct_id'])

# This prints out a human-readable report summarizing differences
comparison.report()

Using SparkCompare on EMR or standalone Spark

  1. Set proxy variables
  2. Create a virtual environment, if desired (virtualenv venv; source venv/bin/activate)
  3. Pip install datacompy and requirements
  4. Ensure your SPARK_HOME environment variable is set (this is probably /usr/lib/spark but may differ based on your installation)
  5. Augment your PYTHONPATH environment variable with export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$SPARK_HOME/python:$PYTHONPATH (note that your version of py4j may differ depending on the version of Spark you're using)

Using SparkCompare on Databricks

  1. Clone this repository locally
  2. Create a datacompy egg by running python setup.py bdist_egg from the repo root directory.
  3. From the Databricks front page, click the "Library" link under the "New" section.
  4. On the New library page:
  • Change source to "Upload Python Egg or PyPi"
  • Under "Upload Egg", Library Name should be "datacompy"
  • Drag the egg file in datacompy/dist/ to the "Drop library egg here to upload" box
  • Click the "Create Library" button

5.   Once the library has been created, from the library page (which you can find in your /Users/{login} workspace), you can choose clusters to attach the library to.

6.   import datacompy in a notebook attached to the cluster that the library is attached to and enjoy!


We welcome and appreciate your contributions! Before we can accept any contributions, we ask that you please be sure to sign the Contributor License Agreement (CLA).

This project adheres to the Open Source Code of Conduct. By participating, you are expected to honor this code.


Roadmap details can be found here

Download Details:
Author: capitalone
Source Code: https://github.com/capitalone/datacompy
License: Apache-2.0 License

#pandas  #python #data-science