In this guide you’ll learn how to create custom radio buttons using only HTML & CSS. To build them, first create two files: an HTML file and a CSS file.
<!DOCTYPE html>
<html lang="en" dir="ltr">
  <head>
    <meta charset="utf-8">
    <title>Custom Radio Buttons | Codequs</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <div class="wrapper">
      <input type="radio" name="select" id="option-1" checked>
      <input type="radio" name="select" id="option-2">
      <label for="option-1" class="option option-1">
        <div class="dot"></div>
        <span>Student</span>
      </label>
      <label for="option-2" class="option option-2">
        <div class="dot"></div>
        <span>Teacher</span>
      </label>
    </div>
  </body>
</html>
@import url('https://fonts.googleapis.com/css?family=Poppins:400,500,600,700&display=swap');
* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
  font-family: 'Poppins', sans-serif;
}
html, body {
  display: grid;
  height: 100%;
  place-items: center;
  background: #0069d9;
}
.wrapper {
  display: inline-flex;
  background: #fff;
  height: 100px;
  width: 400px;
  align-items: center;
  justify-content: space-evenly;
  border-radius: 5px;
  padding: 20px 15px;
  box-shadow: 5px 5px 30px rgba(0,0,0,0.2);
}
.wrapper .option {
  background: #fff;
  height: 100%;
  width: 100%;
  display: flex;
  align-items: center;
  justify-content: space-evenly;
  margin: 0 10px;
  border-radius: 5px;
  cursor: pointer;
  padding: 0 10px;
  border: 2px solid lightgrey;
  transition: all 0.3s ease;
}
.wrapper .option .dot {
  height: 20px;
  width: 20px;
  background: #d9d9d9;
  border-radius: 50%;
  position: relative;
}
.wrapper .option .dot::before {
  position: absolute;
  content: "";
  top: 4px;
  left: 4px;
  width: 12px;
  height: 12px;
  background: #0069d9;
  border-radius: 50%;
  opacity: 0;
  transform: scale(1.5);
  transition: all 0.3s ease;
}
input[type="radio"] {
  display: none;
}
#option-1:checked ~ .option-1,
#option-2:checked ~ .option-2 {
  border-color: #0069d9;
  background: #0069d9;
}
#option-1:checked ~ .option-1 .dot,
#option-2:checked ~ .option-2 .dot {
  background: #fff;
}
#option-1:checked ~ .option-1 .dot::before,
#option-2:checked ~ .option-2 .dot::before {
  opacity: 1;
  transform: scale(1);
}
.wrapper .option span {
  font-size: 20px;
  color: #808080;
}
#option-1:checked ~ .option-1 span,
#option-2:checked ~ .option-2 span {
  color: #fff;
}
Now you’ve successfully created Custom Radio Buttons using only HTML & CSS.
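The trick that makes this work is the general sibling combinator (~): the labels come after the radio inputs inside the same parent, so each input’s :checked state can restyle its matching label even though the inputs themselves are hidden. A minimal, stripped-down sketch of that pattern (the ids and class names here are illustrative, not taken from the tutorial above):

<input type="radio" name="demo" id="a" checked>
<input type="radio" name="demo" id="b">
<label for="a">Option A</label>
<label for="b">Option B</label>

/* Hide the native controls; the labels stay clickable through their for attributes. */
input[type="radio"] { display: none; }

/* When an input is checked, restyle the label that follows it as a sibling. */
#a:checked ~ label[for="a"],
#b:checked ~ label[for="b"] {
  background: #0069d9;
  color: #fff;
}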
Module to enable rate limit per service in Netflix Zuul.
There are five built-in rate limit approaches (see the policy type values in the Policy properties table below).
Note: It is possible to combine Authenticated User, Request Origin, URL, ROLE and Request Method just by adding multiple values to the type list.
Note: If you are using Spring Boot version 1.5.x you MUST use Spring Cloud Zuul RateLimit version 1.7.x. Please take a look at Maven Central and pick the latest artifact in this version line.
Add the dependency to your pom.xml:
<dependency>
<groupId>com.marcosbarbero.cloud</groupId>
<artifactId>spring-cloud-zuul-ratelimit</artifactId>
<version>${latest-version}</version>
</dependency>
Add the following dependency according to the chosen data storage:
Redis
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Consul
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-consul</artifactId>
</dependency>
Spring Data JPA
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
This implementation also requires a database table; below is a sample script:
CREATE TABLE rate (
rate_key VARCHAR(255) NOT NULL,
remaining BIGINT,
remaining_quota BIGINT,
reset BIGINT,
expiration TIMESTAMP,
PRIMARY KEY(rate_key)
);
Bucket4j JCache
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-core</artifactId>
</dependency>
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-jcache</artifactId>
</dependency>
<dependency>
<groupId>javax.cache</groupId>
<artifactId>cache-api</artifactId>
</dependency>
Bucket4j Hazelcast (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-hazelcast</artifactId>
</dependency>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</dependency>
Bucket4j Infinispan (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-infinispan</artifactId>
</dependency>
<dependency>
<groupId>org.infinispan</groupId>
<artifactId>infinispan-core</artifactId>
</dependency>
Bucket4j Ignite (depends on Bucket4j JCache)
<dependency>
<groupId>com.github.vladimir-bukhtoyarov</groupId>
<artifactId>bucket4j-ignite</artifactId>
</dependency>
<dependency>
<groupId>org.apache.ignite</groupId>
<artifactId>ignite-core</artifactId>
</dependency>
Sample YAML configuration
zuul:
  ratelimit:
    key-prefix: your-prefix
    enabled: true
    repository: REDIS
    behind-proxy: true
    add-response-headers: true
    deny-request:
      response-status-code: 404 #default value is 403 (FORBIDDEN)
      origins:
        - 200.187.10.25
        - somedomain.com
    default-policy-list: #optional - will apply unless specific policy exists
      - limit: 10 #optional - request number limit per refresh interval window
        quota: 1000 #optional - request time limit per refresh interval window (in seconds)
        refresh-interval: 60 #default value (in seconds)
        type: #optional
          - user
          - origin
          - url
          - http_method
    policy-list:
      myServiceId:
        - limit: 10 #optional - request number limit per refresh interval window
          quota: 1000 #optional - request time limit per refresh interval window (in seconds)
          refresh-interval: 60 #default value (in seconds)
          type: #optional
            - user
            - origin
            - url
        - type: #optional value for each type
            - user=anonymous
            - origin=somemachine.com
            - url=/api #url prefix
            - role=user
            - http_method=get #case insensitive
            - http_header=customHeader
        - type:
            - url_pattern=/api/*/payment
Sample Properties configuration
zuul.ratelimit.enabled=true
zuul.ratelimit.key-prefix=your-prefix
zuul.ratelimit.repository=REDIS
zuul.ratelimit.behind-proxy=true
zuul.ratelimit.add-response-headers=true
zuul.ratelimit.deny-request.response-status-code=404
zuul.ratelimit.deny-request.origins[0]=200.187.10.25
zuul.ratelimit.deny-request.origins[1]=somedomain.com
zuul.ratelimit.default-policy-list[0].limit=10
zuul.ratelimit.default-policy-list[0].quota=1000
zuul.ratelimit.default-policy-list[0].refresh-interval=60
# Adding multiple rate limit type
zuul.ratelimit.default-policy-list[0].type[0]=user
zuul.ratelimit.default-policy-list[0].type[1]=origin
zuul.ratelimit.default-policy-list[0].type[2]=url
zuul.ratelimit.default-policy-list[0].type[3]=http_method
# Adding the first rate limit policy to "myServiceId"
zuul.ratelimit.policy-list.myServiceId[0].limit=10
zuul.ratelimit.policy-list.myServiceId[0].quota=1000
zuul.ratelimit.policy-list.myServiceId[0].refresh-interval=60
zuul.ratelimit.policy-list.myServiceId[0].type[0]=user
zuul.ratelimit.policy-list.myServiceId[0].type[1]=origin
zuul.ratelimit.policy-list.myServiceId[0].type[2]=url
# Adding the second rate limit policy to "myServiceId"
zuul.ratelimit.policy-list.myServiceId[1].type[0]=user=anonymous
zuul.ratelimit.policy-list.myServiceId[1].type[1]=origin=somemachine.com
zuul.ratelimit.policy-list.myServiceId[1].type[2]=url_pattern=/api/*/payment
zuul.ratelimit.policy-list.myServiceId[1].type[3]=role=user
zuul.ratelimit.policy-list.myServiceId[1].type[4]=http_method=get
zuul.ratelimit.policy-list.myServiceId[1].type[5]=http_header=customHeader
Both 'quota' and 'refresh-interval' can be expressed with Spring Boot’s duration formats:
A regular long representation (using seconds as the default unit)
The standard ISO-8601 format used by java.time.Duration (e.g. PT30M means 30 minutes)
A more readable format where the value and the unit are coupled (e.g. 10s means 10 seconds)
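For example, a policy could mix those formats as in the sketch below (illustrative values, not part of the sample configuration above); here quota uses the ISO-8601 form and refresh-interval the value-plus-unit form:

zuul:
  ratelimit:
    default-policy-list:
      - limit: 10
        quota: PT30S           # ISO-8601: 30 seconds
        refresh-interval: 90s  # value + unit: 90 seconds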
The following implementations are provided:
Implementation | Data Storage |
---|---|
ConsulRateLimiter | Consul |
RedisRateLimiter | Redis |
SpringDataRateLimiter | Spring Data |
Bucket4jJCacheRateLimiter | Bucket4j |
Bucket4jHazelcastRateLimiter | Bucket4j |
Bucket4jIgniteRateLimiter | Bucket4j |
Bucket4jInfinispanRateLimiter | Bucket4j |
Bucket4j implementations require the relevant bean with @Qualifier("RateLimit"):
- JCache: javax.cache.Cache
- Hazelcast: com.hazelcast.map.IMap
- Ignite: org.apache.ignite.IgniteCache
- Infinispan: org.infinispan.functional.ReadWriteMap
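As an illustration, a Hazelcast-backed setup might expose its IMap as in the sketch below; the map name ("rate-limit") and the GridBucketState value type are assumptions for this example, not taken from this document:

@Bean
@Qualifier("RateLimit")
public IMap<String, GridBucketState> rateLimitMap(HazelcastInstance hazelcastInstance) {
    // Assumed map name; Bucket4j's grid-backed buckets keep their state as the map values.
    return hazelcastInstance.getMap("rate-limit");
}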
Property namespace: zuul.ratelimit
Property name | Values | Default Value |
---|---|---|
enabled | true/false | false |
behind-proxy | true/false | false |
response-headers | NONE, STANDARD, VERBOSE | VERBOSE |
key-prefix | String | ${spring.application.name:rate-limit-application} |
repository | CONSUL, REDIS, JPA, BUCKET4J_JCACHE, BUCKET4J_HAZELCAST, BUCKET4J_INFINISPAN, BUCKET4J_IGNITE | - |
deny-request | DenyRequest | - |
default-policy-list | List of Policy | - |
policy-list | Map of Lists of Policy | - |
postFilterOrder | int | FilterConstants.SEND_RESPONSE_FILTER_ORDER - 10 |
preFilterOrder | int | FilterConstants.FORM_BODY_WRAPPER_FILTER_ORDER |
Deny Request properties
Property name | Values | Default Value |
---|---|---|
origins | list of origins to have the access denied | - |
response-status-code | the http status code to be returned on a denied request | 403 (FORBIDDEN) |
Policy properties:
Property name | Values | Default Value |
---|---|---|
limit | number of requests | - |
quota | time of requests | - |
refresh-interval | seconds | 60 |
type | [ORIGIN, USER, URL, URL_PATTERN, ROLE, HTTP_METHOD, HTTP_HEADER] | [] |
breakOnMatch | true/false | false |
This section details how to add custom implementations.
If the application needs to control the key strategy beyond the options offered by the type property, it can be done by creating a custom RateLimitKeyGenerator bean implementation, adding further qualifiers or something entirely different:
@Bean
public RateLimitKeyGenerator ratelimitKeyGenerator(RateLimitProperties properties, RateLimitUtils rateLimitUtils) {
    return new DefaultRateLimitKeyGenerator(properties, rateLimitUtils) {
        @Override
        public String key(HttpServletRequest request, Route route, RateLimitProperties.Policy policy) {
            return super.key(request, route, policy) + ":" + request.getMethod();
        }
    };
}
This framework uses 3rd party applications to control the rate limit access, and these libraries are outside the control of this framework. If one of the 3rd party applications fails, the framework will handle this failure in the DefaultRateLimiterErrorHandler class, which will log the error upon failure.
If there is a need to handle the errors differently, it can be achieved by defining a custom RateLimiterErrorHandler bean, e.g.:
@Bean
public RateLimiterErrorHandler rateLimitErrorHandler() {
    return new DefaultRateLimiterErrorHandler() {
        @Override
        public void handleSaveError(String key, Exception e) {
            // custom code
        }

        @Override
        public void handleFetchError(String key, Exception e) {
            // custom code
        }

        @Override
        public void handleError(String msg, Exception e) {
            // custom code
        }
    };
}
If the application needs to be notified when a rate limit access was exceeded, it can be done by listening to the RateLimitExceededEvent event:
@EventListener
public void observe(RateLimitExceededEvent event) {
// custom code
}
Spring Cloud Zuul Rate Limit is released under the non-restrictive Apache 2.0 license, and follows a very standard GitHub development process, using the GitHub tracker for issues and merging pull requests into master. If you want to contribute, even something trivial, please do not hesitate, but follow the project's contribution guidelines.
Download Details:
Author: marcosbarbero
Source Code: https://github.com/marcosbarbero/spring-cloud-zuul-ratelimit
License: Apache-2.0 License
Optional Chaining and Nullish Coalescing
When we write applications, accessing properties or calling methods on a non-existent object can have fatal consequences for a program. It usually results in a runtime error (typically a TypeError) that crashes our code. You may not always expect this to happen, because you are not always in control of the data you are working with.
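A short illustrative sketch of the two features (the object shape here is made up for the example):

// Optional chaining short-circuits to undefined instead of throwing
// when an intermediate value is null or undefined.
const user: { profile?: { email?: string } } = {};
const email = user.profile?.email;          // undefined, no TypeError

// Nullish coalescing falls back only when the left side is null or undefined,
// unlike ||, which also replaces '', 0, and false.
console.log(email ?? "no email on file");   // "no email on file"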
#nullish-coalescing #es2020 #optional-chaining #javascript
When Javascript developers think of ECMAScript, they reference the changes made in 2015 that improved syntax and added new features to the language. However, ECMAScript is still being updated! And earlier this year, they released ES11, with two of my new favorite tools: Optional Chaining and Nullish Coalescing!
null is a very interesting datatype. It is a primitive type, along with String, Number, Boolean, Symbol, and Undefined. (Side note: as of ES11, BigInt is now another primitive type!) Primitive types are immutable, while objects (including arrays), can be mutated.
However, null is the odd primitive out. When you use typeof on any datatype, you are given a reasonable and expected response: typeof 3 is a Number, typeof {foo: "bar"} is an Object, etc. Due to the way Javascript was developed, null is considered an Object when typeof is applied to it. Check out this brilliant article by Dr. Axel Rauschmayer for more detail!
null is a falsey value, is loosely equal to undefined (undefined == null // TRUE), and, most importantly, represents the intentional absence of any object value. When a value is not defined, it will always receive the value of undefined. When Javascript developers declare a variable but want it to be empty, they set its value to null. null is never assigned by Javascript, only by developers; it is a useful way to mark a variable that does not yet hold any meaningful data. Getting a value of null means that your variable is declared and does exist, automatically informing you that you don't need to chase down declaration, hoisting, or scope issues: the value is usable, just void of any information.
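These behaviors are easy to verify in a console; a small illustrative sketch:

console.log(typeof null);        // "object" (the historical quirk described above)
console.log(undefined == null);  // true: loose equality treats them as interchangeable
console.log(undefined === null); // false: strict equality does not

let nickname: string | null = null;       // declared on purpose, but intentionally empty
console.log(nickname ?? "no nickname");   // "no nickname": ?? falls back only for null/undefined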
#javascript #ecmascript-2020 #null #nullish-coalescing #optional-chaining
DataComPy is a package to compare two Pandas DataFrames. Originally started to be something of a replacement for SAS's PROC COMPARE for Pandas DataFrames, with some more functionality than just Pandas.DataFrame.equals(Pandas.DataFrame) (in that it prints out some stats, and lets you tweak how accurate matches have to be). Then extended to carry that functionality over to Spark DataFrames.
pip install datacompy
DataComPy will try to join two dataframes either on a list of join columns, or on indexes. If the two dataframes have duplicates based on join values, the match process sorts by the remaining fields and joins based on that row number.
Column-wise comparisons attempt to match values even when dtypes don't match. So if, for example, you have a column with decimal.Decimal values in one dataframe and an identically-named column with float64 dtype in another, it will tell you that the dtypes are different but will still try to compare the values.
from io import StringIO
import pandas as pd
import datacompy
data1 = """acct_id,dollar_amt,name,float_fld,date_fld
10000001234,123.45,George Maharis,14530.1555,2017-01-01
10000001235,0.45,Michael Bluth,1,2017-01-01
10000001236,1345,George Bluth,,2017-01-01
10000001237,123456,Bob Loblaw,345.12,2017-01-01
10000001239,1.05,Lucille Bluth,,2017-01-01
"""
data2 = """acct_id,dollar_amt,name,float_fld
10000001234,123.4,George Michael Bluth,14530.155
10000001235,0.45,Michael Bluth,
10000001236,1345,George Bluth,1
10000001237,123456,Robert Loblaw,345.12
10000001238,1.05,Loose Seal Bluth,111
"""
df1 = pd.read_csv(StringIO(data1))
df2 = pd.read_csv(StringIO(data2))
compare = datacompy.Compare(
df1,
df2,
join_columns='acct_id', #You can also specify a list of columns
abs_tol=0, #Optional, defaults to 0
rel_tol=0, #Optional, defaults to 0
df1_name='Original', #Optional, defaults to 'df1'
df2_name='New' #Optional, defaults to 'df2'
)
compare.matches(ignore_extra_columns=False)
# False
# This method prints out a human-readable report summarizing and sampling differences
print(compare.report())
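Beyond the report, the Compare object also exposes the intersection/unique accessors described further below; a short sketch of inspecting them (attribute and method names as listed in this document):

# Rows that joined in both dataframes, and rows unique to each side (DataFrames)
print(compare.intersect_rows.shape)
print(compare.df1_unq_rows)
print(compare.df2_unq_rows)

# Column names present in both dataframes, or unique to each side (sets)
print(compare.intersect_columns())
print(compare.df1_unq_columns())
print(compare.df2_unq_columns())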
See docs for more detailed usage instructions and an example of the report output.
- You pass in two dataframes (df1, df2) to datacompy.Compare and a column to join on (or list of columns) to join_columns. By default the comparison needs to match values exactly, but you can pass in abs_tol and/or rel_tol to apply absolute and/or relative tolerances for numeric columns. You can also pass in on_index=True instead of join_columns to join on the index instead.
- Compare.matches() will return True if the dataframes match, False otherwise. You can pass in ignore_extra_columns=True to not return False just because there are non-overlapping column names (it will still check on overlapping columns). If you only want to validate whether a dataframe matches exactly, look at pandas.testing.assert_frame_equal. The main use case for datacompy is when you need to interpret the difference between two dataframes.
- Compare also exposes intersect_rows, df1_unq_rows, df2_unq_rows for getting the intersection, just-df1 and just-df2 records (DataFrames), and intersect_columns(), df1_unq_columns(), df2_unq_columns() for getting the intersection, just-df1 and just-df2 columns (Sets).

DataComPy's SparkCompare class will join two dataframes on a list of join columns. It has the capability to map column names that may be different in each dataframe, including in the join columns. You are responsible for creating the dataframes from any source which Spark can handle and specifying a unique join key. If there are duplicates in either dataframe by join key, the match process will remove the duplicates before joining (and tell you how many duplicates were found).
As with the Pandas-based Compare class, comparisons will be attempted even if dtypes don't match. Any schema differences will be reported in the output as well as in any mismatch reports, so that you can assess whether or not a type mismatch is a problem.
The main reasons why you would choose to use SparkCompare over Compare are that your data is too large to fit into memory, or you're comparing data that works well in a Spark environment, like partitioned Parquet, CSV, or JSON files, or Cerebro tables.
Spark scales incredibly well, so you can use SparkCompare to compare billions of rows of data, provided you spin up a big enough cluster. Still, joining billions of rows of data is an inherently large task, so there are a couple of things you may want to take into consideration when getting into the clichéd realm of "big data":
- SparkCompare will compare all columns in common in the dataframes and report on the rest. If there are columns in the data that you don't care to compare, use a select statement/method on the dataframe(s) to filter those out. Particularly when reading from wide Parquet files, this can make a huge difference when the columns you don't care about don't have to be read into memory and included in the joined dataframe.
- Passing cache_intermediates=True to the SparkCompare call can help optimize performance by caching certain intermediate dataframes in memory, like the de-duped version of each input dataset, or the joined dataframe. Otherwise, Spark's lazy evaluation will recompute those each time it needs the data in a report or as you access instance attributes. This may be fine for smaller dataframes, but will be costly for larger ones. You do need to ensure that you have enough free cache memory before you do this, so this parameter is set to False by default.
import datetime
import datacompy
from pyspark.sql import Row
# This example assumes you have a SparkSession named "spark" in your environment, as you
# do when running `pyspark` from the terminal or in a Databricks notebook (Spark v2.0 and higher)
data1 = [
Row(acct_id=10000001234, dollar_amt=123.45, name='George Maharis', float_fld=14530.1555,
date_fld=datetime.date(2017, 1, 1)),
Row(acct_id=10000001235, dollar_amt=0.45, name='Michael Bluth', float_fld=1.0,
date_fld=datetime.date(2017, 1, 1)),
Row(acct_id=10000001236, dollar_amt=1345.0, name='George Bluth', float_fld=None,
date_fld=datetime.date(2017, 1, 1)),
Row(acct_id=10000001237, dollar_amt=123456.0, name='Bob Loblaw', float_fld=345.12,
date_fld=datetime.date(2017, 1, 1)),
Row(acct_id=10000001239, dollar_amt=1.05, name='Lucille Bluth', float_fld=None,
date_fld=datetime.date(2017, 1, 1))
]
data2 = [
Row(acct_id=10000001234, dollar_amt=123.4, name='George Michael Bluth', float_fld=14530.155),
Row(acct_id=10000001235, dollar_amt=0.45, name='Michael Bluth', float_fld=None),
Row(acct_id=10000001236, dollar_amt=1345.0, name='George Bluth', float_fld=1.0),
Row(acct_id=10000001237, dollar_amt=123456.0, name='Robert Loblaw', float_fld=345.12),
Row(acct_id=10000001238, dollar_amt=1.05, name='Loose Seal Bluth', float_fld=111.0)
]
base_df = spark.createDataFrame(data1)
compare_df = spark.createDataFrame(data2)
comparison = datacompy.SparkCompare(spark, base_df, compare_df, join_columns=['acct_id'])
# This prints out a human-readable report summarizing differences
comparison.report()
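Building on the performance notes above, here is a hedged sketch of trimming columns with select and enabling cache_intermediates before comparing (the column names reuse the sample dataframes above):

# Compare only the columns you care about and cache intermediate results.
base_slim = base_df.select('acct_id', 'dollar_amt', 'name')
compare_slim = compare_df.select('acct_id', 'dollar_amt', 'name')

comparison_slim = datacompy.SparkCompare(
    spark, base_slim, compare_slim,
    join_columns=['acct_id'],
    cache_intermediates=True  # caches the de-duped/joined intermediates described above
)
comparison_slim.report()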
To set up a local Spark environment and package datacompy for use on a cluster (e.g. Databricks):
1. Set up a virtual environment (e.g. virtualenv venv; source venv/bin/activate)
2. Set $SPARK_HOME (e.g. /usr/lib/spark, but this may differ based on your installation)
3. export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$SPARK_HOME/python:$PYTHONPATH (note that your version of py4j may differ depending on the version of Spark you're using)
4. Build the library by running python setup.py bdist_egg from the repo root directory.
5. Once the library has been created, from the library page (which you can find in your /Users/{login} workspace), you can choose clusters to attach the library to.
6. import datacompy in a notebook attached to the cluster that the library is attached to and enjoy!
We welcome and appreciate your contributions! Before we can accept any contributions, we ask that you please be sure to sign the Contributor License Agreement (CLA).
This project adheres to the Open Source Code of Conduct. By participating, you are expected to honor this code.
Roadmap details can be found here
Download Details:
Author: capitalone
Source Code: https://github.com/capitalone/datacompy
License: Apache-2.0 License