Aliyah Murray

How To Apply a DUAL LIGHTING Effect In Photoshop (Simple Way)

Learn this simple yet powerful technique to apply a dual lighting effect in Photoshop!

This tutorial is part of my Photoshop in-app tutorials. You can open the tutorial image directly from Photoshop and use the coach mark overlays to follow along. You will need Photoshop 2021 (version 22.5) or newer.

In this tutorial, you will learn how to use Gradient Maps, Blending Modes, and Layer Masks to create the illusion of two lights of different colors hitting the main subject.


00:00 - Introduction
00:15 - Photoshop In-App Tutorials
01:23 - Dual Lighting Effect Explanation
01:40 - Select the Main Subject
01:55 - Create a Layer Group and Apply a Mask
02:20 - Create a Black and White Adjustment Layer
02:35 - Set The Default Foreground and Background Colors
02:45 - Create the Blue Gradient Map
03:42 - Mask Out The Blue Color From The Right Side
04:27 - Create the Red Gradient Map
05:20 - Final Thoughts  

#photoshop 


Edward Jackson

PySpark Cheat Sheet: Spark in Python

This PySpark cheat sheet with code samples covers the basics like initializing Spark in Python, loading data, sorting, and repartitioning.

Apache Spark is generally known as a fast, general, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. It can run analytic applications up to 100 times faster than other technologies on the market today. You can interface Spark with Python through "PySpark", the Spark Python API, which exposes the Spark programming model to Python.

Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough to get familiar with all the functions you can use to query, transform, and inspect your data. What's more, if you've never worked with another programming language, or if you're new to the field, it can be hard to tell the various RDD operations apart.

Let's face it: map() and flatMap() are different enough, but it can still be a challenge to decide which one you really need when you're faced with them in your analysis. And what about other functions, like reduce() and reduceByKey()?
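To make the difference concrete, here is a minimal sketch, assuming the local SparkContext sc created later in this cheat sheet and a hypothetical pair RDD (the collect() output order may vary):

>>> pairs = sc.parallelize([('a', 1), ('a', 2), ('b', 3)])
>>> pairs.map(lambda kv: [kv[0], kv[1]]).collect() #One output element per input element
[['a', 1], ['a', 2], ['b', 3]]
>>> pairs.flatMap(lambda kv: [kv[0], kv[1]]).collect() #Each returned list is flattened into one RDD
['a', 1, 'a', 2, 'b', 3]
>>> pairs.values().reduce(lambda x, y: x + y) #reduce() folds all values into a single result
6
>>> pairs.reduceByKey(lambda x, y: x + y).collect() #reduceByKey() folds the values per key
[('a', 3), ('b', 3)]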

PySpark cheat sheet

Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.

This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all. You'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet. 

Note that the examples in the document take small data sets to illustrate the effect of specific functions on your data. In real life data analysis, you'll be using Spark to analyze big data.

PySpark is the Spark Python API that exposes the Spark programming model to Python.

Initializing Spark 

SparkContext 

>>> from pyspark import SparkContext
>>> sc = SparkContext(master = 'local[2]')

Inspect SparkContext 

>>> sc.version #Retrieve SparkContext version
>>> sc.pythonVer #Retrieve Python version
>>> sc.master #Master URL to connect to
>>> str(sc.sparkHome) #Path where Spark is installed on worker nodes
>>> str(sc.sparkUser()) #Retrieve name of the Spark User running SparkContext
>>> sc.appName #Return application name
>>> sc.applicationId #Retrieve application ID
>>> sc.defaultParallelism #Return default level of parallelism
>>> sc.defaultMinPartitions #Default minimum number of partitions for RDDs

Configuration 

>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
     .setMaster("local")
     .setAppName("My app")
     .set("spark.executor.memory", "1g"))
>>> sc = SparkContext(conf = conf)

Using the Shell 

In the PySpark shell, a special interpreter-aware SparkContext is already created in the variable called sc.

$ ./bin/spark-shell --master local[2]
$ ./bin/pyspark --master local[4] --py-files code.py

Set which master the context connects to with the --master argument, and add Python .zip, .egg, or .py files to the runtime path by passing a comma-separated list to --py-files.
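The same options can also be set programmatically when constructing the SparkContext. A minimal sketch, assuming a fresh Python session with no active context (the file names dependencies.zip and helpers.py are hypothetical placeholders):

>>> from pyspark import SparkContext
>>> sc = SparkContext(master='local[4]',
                      appName='My app',
                      pyFiles=['dependencies.zip', 'helpers.py']) #Shipped to workers and added to the runtime path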

Loading Data 

Parallelized Collections 

>>> rdd = sc.parallelize([('a',7),('a',2),('b',2)])
>>> rdd2 = sc.parallelize([('a',2),('d',1),('b',1)])
>>> rdd3 = sc.parallelize(range(100))
>>> rdd = sc.parallelize([("a",["x","y","z"]),
               ("b" ["p","r,"])])

External Data 

Read either one text file from HDFS, a local file system or any Hadoop-supported file system URI with textFile(), or read in a directory of text files with wholeTextFiles(). 

>>> textFile = sc.textFile("/my/directory/*.txt")
>>> textFile2 = sc.wholeTextFiles("/my/directory/")

Retrieving RDD Information 

Basic Information 

>>> rdd.getNumPartitions() #List the number of partitions
>>> rdd.count() #Count RDD instances
3
>>> rdd.countByKey() #Count RDD instances by key
defaultdict(<type 'int'>,{'a':2,'b':1})
>>> rdd.countByValue() #Count RDD instances by value
defaultdict(<type 'int'>,{('b',2):1,('a',2):1,('a',7):1})
>>> rdd.collectAsMap() #Return (key,value) pairs as a dictionary
   {'a': 2, 'b': 2}
>>> rdd3.sum() #Sum of RDD elements
4950
>>> sc.parallelize([]).isEmpty() #Check whether RDD is empty
True

Summary 

>>> rdd3.max() #Maximum value of RDD elements 
99
>>> rdd3.min() #Minimum value of RDD elements
0
>>> rdd3.mean() #Mean value of RDD elements 
49.5
>>> rdd3.stdev() #Standard deviation of RDD elements 
28.866070047722118
>>> rdd3.variance() #Compute variance of RDD elements 
833.25
>>> rdd3.histogram(3) #Compute histogram by bins
([0,33,66,99],[33,33,34])
>>> rdd3.stats() #Summary statistics (count, mean, stdev, max & min)

Applying Functions 

#Apply a function to each RDD element
>>> rdd.map(lambda x: x+(x[1],x[0])).collect()
[('a', 7, 7, 'a'), ('a', 2, 2, 'a'), ('b', 2, 2, 'b')]
#Apply a function to each RDD element and flatten the result
>>> rdd5 = rdd.flatMap(lambda x: x+(x[1],x[0]))
>>> rdd5.collect()
['a', 7, 7, 'a', 'a', 2, 2, 'a', 'b', 2, 2, 'b']
#Apply a flatMap function to each (key,value) pair of rdd4 without changing the keys
>>> rdd4.flatMapValues(lambda x: x).collect()
[('a', 'x'), ('a', 'y'), ('a', 'z'), ('b', 'p'), ('b', 'r')]

Selecting Data

Getting

>>> rdd.collect() #Return a list with all RDD elements 
[('a', 7), ('a', 2), ('b', 2)]
>>> rdd.take(2) #Take first 2 RDD elements 
[('a', 7),  ('a', 2)]
>>> rdd.first() #Take first RDD element
('a', 7)
>>> rdd.top(2) #Take top 2 RDD elements 
[('b', 2), ('a', 7)]

Sampling

>>> rdd3.sample(False, 0.15, 81).collect() #Return sampled subset of rdd3
     [3,4,27,31,40,41,42,43,60,76,79,80,86,97]

Filtering

>>> rdd.filter(lambda x: "a" in x).collect() #Filter the RDD
[('a',7),('a',2)]
>>> rdd5.distinct().collect() #Return distinct RDD values
['a', 2, 'b', 7]
>>> rdd.keys().collect() #Return (key,value) RDD's keys
['a',  'a',  'b']

Iterating 

>>> def g(x): print(x)
>>> rdd.foreach(g) #Apply a function to all RDD elements
('a', 7)
('b', 2)
('a', 2)

Reshaping Data 

Reducing

>>> rdd.reduceByKey(lambda x,y : x+y).collect() #Merge the rdd values for each key
[('a',9),('b',2)]
>>> rdd.reduce(lambda a, b: a+ b) #Merge the rdd values
('a', 7, 'a', 2, 'b', 2)

 

Grouping by

>>> rdd3.groupBy(lambda x: x % 2) #Return RDD of grouped values
          .mapValues(list)
          .collect()
>>> rdd.groupByKey() #Group rdd by key
          .mapValues(list)
          .collect() 
[('a',[7,2]),('b',[2])]

Aggregating

>>> seqOp = (lambda x,y: (x[0]+y,x[1]+1))
>>> combOp = (lambda x,y:(x[0]+y[0],x[1]+y[1]))
#Aggregate RDD elements of each partition and then the results
>>> rdd3.aggregate((0,0),seqOp,combOp) 
(4950,100)
#Aggregate values of each RDD key
>>> rdd.aggregateByKey((0,0),seqOp,combOp).collect() 
     [('a',(9,2)), ('b',(2,1))]
>>> from operator import add
#Aggregate the elements of each partition, and then the results
>>> rdd3.fold(0,add)
     4950
#Merge the values for each key
>>> rdd.foldByKey(0, add).collect()
[('a' ,9), ('b' ,2)]
#Create tuples of RDD elements by applying a function
>>> rdd3.keyBy(lambda x: x+x).collect()

Mathematical Operations 

>>> rdd.subtract(rdd2).collect() #Return each rdd value not contained in rdd2
[('b', 2), ('a', 7)]
#Return each (key,value) pair of rdd2 with no matching key in rdd
>>> rdd2.subtractByKey(rdd).collect()
[('d', 1)]
>>> rdd.cartesian(rdd2).collect() #Return the Cartesian product of rdd and rdd2

Sort 

>>> rdd2.sortBy(lambda x: x[1]).collect() #Sort RDD by given function
[('d',1),('b',1),('a',2)]
>>> rdd2.sortByKey().collect() #Sort (key, value) RDD by key
[('a' ,2), ('b' ,1), ('d' ,1)]

Repartitioning 

>>> rdd.repartition(4) #New RDD with 4 partitions
>>> rdd.coalesce(1) #Decrease the number of partitions in the RDD to 1

Saving 

>>> rdd.saveAsTextFile("rdd.txt")
>>> rdd.saveAsHadoopFile("hdfs://namenodehost/parent/child",
               'org.apache.hadoop.mapred.TextOutputFormat')

Stopping SparkContext 

>>> sc.stop()

Execution 

$ ./bin/spark-submit examples/src/main/python/pi.py

Have this Cheat Sheet at your fingertips

Original article source at https://www.datacamp.com

#pyspark #cheatsheet #spark #python

What does “effect” or “effectful” mean in Functional Programming?

A lot of the time, when we discuss effects, we are usually talking about side effects. However, as I study more functional programming and read more functional programming books, I have noticed that “effect” or “effectful” is widely used in the FP community to describe abstract things.

I dug a little deeper into what “effect” or “effectful” means and put it in this blog post as a note to my future self.

It is not a Side Effect

Usually, “effect” or “effectful” does not mean a side effect (although sometimes it does). It refers to the main effect.

It has something to do with Type Category

A type category is a mathematical structure that abstracts out a common representation for many different fields of math. When designing a program, we can think about the properties of that program before writing code, instead of the other way around. For example, a sum function has an empty element (the identity law), has a combine operation, and needs to be associative ((1+2)+3 is equal to 1+(2+3)). We can characterize these properties and restrict the input to be a Monoid. This way, we can create a solution in a systematic way that generates fewer bugs.
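As a loose illustration of those properties, here is a tiny Python sketch (the IntSum class is a hypothetical example, not from the article): a sum monoid is an identity element plus an associative combine operation.

class IntSum:
    #A hypothetical sum monoid: an identity element plus an associative combine operation
    empty = 0  #Identity law: combine(x, empty) == x

    @staticmethod
    def combine(x, y):
        return x + y  #Associativity: combine(combine(a, b), c) == combine(a, combine(b, c))

#The laws in action:
assert IntSum.combine(5, IntSum.empty) == 5
assert IntSum.combine(IntSum.combine(1, 2), 3) == IntSum.combine(1, IntSum.combine(2, 3))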

Within a type category, an “effect” is a fancy word for a wrapper around a given type that models some capability. I will quote the statements that Alvin Alexander mentioned from Functional and Reactive Domain Modeling:

  1. Option models the effects of optionality
  2. Future models latency as an effect
  3. Try abstracts the consequences of failures

Those statements can be rewritten as:

  1. Option is a monad that models the effect of optionality (of being something optional)
  2. Future is a monad that models the impact of latency
  3. Try is a monad that models the impact of failures (manages exception as an effect)

Similarly:

  1. Reader is a monad that models the effect of composing operations based on some input.
  2. Writer is a monad that models the effect of logging (a rough sketch of this idea follows this list).
  3. State is a monad that models the effect of state.
  4. Sync in Cats-effect is a monad that models the effect of synchronous lazy execution.
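To make “logging as an effect” a bit more concrete, here is a rough Python sketch (the Writer class below is a hypothetical illustration, loosely mirroring the Scala type named above): the log lives inside the wrapper and is combined as computations are chained, rather than being written out as a side effect.

class Writer:
    #Hypothetical sketch: a value paired with an accumulated log
    def __init__(self, value, log):
        self.value, self.log = value, log

    def flat_map(self, f):
        result = f(self.value)  #f returns another Writer
        return Writer(result.value, self.log + result.log)

def double(x):
    return Writer(x * 2, ["doubled " + str(x)])

def increment(x):
    return Writer(x + 1, ["incremented " + str(x)])

w = double(10).flat_map(increment)
print(w.value, w.log)  #21 ['doubled 10', 'incremented 20']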

It is an F[A] instead of A

An effect can be described as whatever the monad handles.

Quoting Rob Norris in Functional Programming with Effects: an effectful function returns an F[A] rather than an A.
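A rough Python sketch of that distinction (the Maybe class and parse_int function are hypothetical illustrations, not a real library): a plain function returns an A, while an effectful function returns its result inside a wrapper F[A] that models the effect.

class Maybe:
    #Hypothetical Option-like wrapper: an F[A] that models possible absence as an effect
    def __init__(self, value=None, present=False):
        self.value, self.present = value, present

def parse_int(s):
    #Returns Maybe[int] (an F[A]) rather than int (an A)
    try:
        return Maybe(int(s), present=True)
    except ValueError:
        return Maybe()  #The failure is captured in the wrapper, not thrown as a side effect

print(parse_int("42").present)   #True
print(parse_int("oops").present) #False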

#scala #programming #functional-programming #effect #side-effects

Gerhard Brink

What are the Best Steps to Effective Data Classification?

Data protection is not only a legal necessity. It is essential for an organization’s survival and profitability. Nowadays, storage has become cheap and organizations have become data hoarders, assuming that one day they will get around to mining all of that data and find something useful.

But data hoarding causes serious issues. Most of what is collected becomes redundant or outdated, or sits untouched for years.

Moreover, storage might be cheap, but it is not free. Storing a huge amount of data costs you money and, more importantly, increases your risk.

So, suppose your sensitive data is stored digitally: intellectual property, personally identifiable data on customers or employees, protected health information, financial account information, or credit card details. In that case, it needs to be properly secured.

So how to protect your data?

What is data classification?

Here are the seven effective steps to Data Classification

#big data #latest news #what are the best steps to effective data classification? #effective data classification #best #effective

Abigail Cassin

Deciphering the impact of IoT on Smart Lighting

Technology is a compelling resource when it comes to minimizing the chances of COVID-19 transmission. While its usefulness in the medical realm is evident and would require a separate discussion, the main focus here is IoT-empowered lighting solutions. Most manufacturers of lighting solutions are opting for smart accessibility to minimize human contact.

Implementing smart lighting solutions will allow us to access the appliances without having to touch the switchboards. Besides eliminating touch and physical interactions, smart lights can offer a host of other relevant benefits to the early adopters.

Embracing Digitalization

The concept of connected LEDs does instill confidence. However, before we delve any deeper into why and how companies are embracing smart lighting solutions, we need to understand a bit more about digitalization and the introduction of new services. Post COVID-19, individual inclinations are expected to change. When it comes to home lighting solutions, preferences will be more discretionary in nature instead of favoring massive setups.

#smart-lighting #lighting-industry #iot #internet-of-things #technology #future #future-of-smart-lights #ai-applications