What does return do in JavaScript? The return keyword explained
The return keyword in JavaScript forms a statement that ends the execution of the code that is currently running. For example, suppose you have the following code that runs two console logs:
console.log("Hello");
return;
console.log("How are you?");
You’ll see that only “Hello” is printed to the console, while “How are you?” is not printed, because of the return keyword. (Note that a top-level return like this only works where the environment allows it, for example in a Node.js CommonJS module; in a plain browser script it is a SyntaxError.) In short, the return statement stops execution in the current JavaScript context and returns control to the higher context. When you use return inside a function, the function stops running and gives control back to the caller.
The following function won’t print “Hi from function” to the console because the return statement is executed before the call that logs the string. JavaScript then gives control back to the caller and continues code execution from there:
function fnExample() {
  return;
  console.log("Hi from function");
}
fnExample(); // nothing will be printed
console.log("JavaScript");
console.log("Continue to execute after return");
You can add an expression to the return statement so that, instead of returning undefined, the value you specify is returned.
Take a look at the following code example, where the fnOne() function just returns without a value, while fnTwo() returns the number 7:
function fnOne() {
  return;
}
function fnTwo() {
  return 7;
}
let one = fnOne();
let two = fnTwo();
console.log(one); // undefined
console.log(two); // 7
In the code above, the value produced by each function’s return statement is assigned to the variables one and two. When a return statement has no expression, it gives back undefined by default.
You can also return a variable, as follows:
function fnExample() {
  let str = "Hello";
  return str;
}
let response = fnExample();
console.log(response); // Hello
That’s the essence of the return keyword in JavaScript. You can also combine the return statement with an if..else block to control which value your function returns.
In the following example, the function returns “Banana” when number === 1 and “Melon” otherwise:
function fnFruit(number) {
  if (number === 1) {
    return "Banana";
  } else {
    return "Melon";
  }
}
let fruit = fnFruit(1);
console.log(fruit); // "Banana"
Original article source at: https://sebhastian.com/
This PySpark cheat sheet with code samples covers the basics like initializing Spark in Python, loading data, sorting, and repartitioning.
Apache Spark is generally known as a fast, general-purpose, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning, and graph processing. It allows you to speed up analytic applications by up to 100 times compared to other technologies on the market today. You can interface Spark with Python through "PySpark", the Spark Python API that exposes the Spark programming model to Python.
Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough getting familiar with all the functions that you can use to query, transform, and inspect your data. What's more, if you've never worked with any other programming language, or if you're new to the field, it might be hard to tell the different RDD operations apart.
Let's face it, map() and flatMap() are different enough, but it might still come as a challenge to decide which one you really need when you're faced with them in your analysis. Or what about other functions, like reduce() and reduceByKey()?
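To make the contrast concrete, here is a minimal sketch (the RDD contents are made up for illustration, and it assumes a SparkContext named sc like the one created below):
>>> pairs = sc.parallelize([("a", 1), ("a", 2), ("b", 3)])
>>> pairs.map(lambda kv: [kv[0], kv[1]]).collect() #One output element per input element
[['a', 1], ['a', 2], ['b', 3]]
>>> pairs.flatMap(lambda kv: [kv[0], kv[1]]).collect() #Outputs are flattened into a single list
['a', 1, 'a', 2, 'b', 3]
>>> pairs.values().reduce(lambda x, y: x + y) #Merge all values into one result (an action)
6
>>> pairs.reduceByKey(lambda x, y: x + y).collect() #Merge values per key (a transformation); output order may vary
[('a', 3), ('b', 3)]
In other words, reduce() collapses the whole RDD to a single value on the driver, while reduceByKey() produces a new RDD with one merged value per key.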
Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.
This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all. You'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet.
Note that the examples in the document take small data sets to illustrate the effect of specific functions on your data. In real life data analysis, you'll be using Spark to analyze big data.
PySpark is the Spark Python API that exposes the Spark programming model to Python.
>>> from pyspark import SparkContext
>>> sc = SparkContext(master = 'local[2]')
>>> sc.version #Retrieve SparkContext version
>>> sc.pythonVer #Retrieve Python version
>>> sc.master #Master URL to connect to
>>> str(sc.sparkHome) #Path where Spark is installed on worker nodes
>>> str(sc.sparkUser()) #Retrieve name of the Spark User running SparkContext
>>> sc.appName #Return application name
>>> sc.applicationId #Retrieve application ID
>>> sc.defaultParallelism #Return default level of parallelism
>>> sc.defaultMinPartitions #Default minimum number of partitions for RDDs
>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
.setMaster("local")
.setAppName("My app")
.set("spark.executor.memory", "1g"))
>>> sc = SparkContext(conf = conf)
In the PySpark shell, a special interpreter-aware SparkContext is already created in the variable called sc.
$ ./bin/spark-shell --master local[2]
$ ./bin/pyspark --master local[4] --py-files code.py
Set which master the context connects to with the --master argument, and add Python .zip, .egg, or .py files to the runtime path by passing a comma-separated list to --py-files.
>>> rdd = sc.parallelize([('a',7),('a',2),('b',2)])
>>> rdd2 = sc.parallelize([('a',2),('d',1),('b',1)])
>>> rdd3 = sc.parallelize(range(100))
>>> rdd4 = sc.parallelize([("a",["x","y","z"]),
                           ("b",["p","r"])])
Read either one text file from HDFS, a local file system or any Hadoop-supported file system URI with textFile(), or read in a directory of text files with wholeTextFiles().
>>> textFile = sc.textFile("/my/directory/*.txt")
>>> textFile2 = sc.wholeTextFiles("/my/directory/")
>>> rdd.getNumPartitions() #List the number of partitions
>>> rdd.count() #Count RDD instances
3
>>> rdd.countByKey() #Count RDD instances by key
defaultdict(<type 'int'>,{'a':2,'b':1})
>>> rdd.countByValue() #Count RDD instances by value
defaultdict(<type 'int'>,{('b',2):1,('a',2):1,('a',7):1})
>>> rdd.collectAsMap() #Return (key,value) pairs as a dictionary
{'a': 2, 'b': 2}
>>> rdd3.sum() #Sum of RDD elements
4950
>>> sc.parallelize([]).isEmpty() #Check whether RDD is empty
True
>>> rdd3.max() #Maximum value of RDD elements
99
>>> rdd3.min() #Minimum value of RDD elements
0
>>> rdd3.mean() #Mean value of RDD elements
49.5
>>> rdd3.stdev() #Standard deviation of RDD elements
28.866070047722118
>>> rdd3.variance() #Compute variance of RDD elements
833.25
>>> rdd3.histogram(3) #Compute histogram by bins
([0,33,66,99],[33,33,34])
>>> rdd3.stats() #Summary statistics (count, mean, stdev, max & min)
#Apply a function to each RDD element
>>> rdd.map(lambda x: x+(x[1],x[0])).collect()
[('a',7,7,'a'),('a',2,2,'a'),('b',2,2,'b')]
#Apply a function to each RDD element and flatten the result
>>> rdd5 = rdd.flatMap(lambda x: x+(x[1],x[0]))
>>> rdd5.collect()
['a',7,7,'a','a',2,2,'a','b',2,2,'b']
#Apply a flatMap function to each (key,value) pair of rdd4 without changing the keys
>>> rdd4.flatMapValues(lambda x: x).collect()
[('a', 'x'), ('a', 'y'), ('a', 'z'),('b', 'p'),('b', 'r')]
Getting
>>> rdd.collect() #Return a list with all RDD elements
[('a', 7), ('a', 2), ('b', 2)]
>>> rdd.take(2) #Take first 2 RDD elements
[('a', 7), ('a', 2)]
>>> rdd.first() #Take first RDD element
('a', 7)
>>> rdd.top(2) #Take top 2 RDD elements
[('b', 2), ('a', 7)]
Sampling
>>> rdd3.sample(False, 0.15, 81).collect() #Return sampled subset of rdd3
[3,4,27,31,40,41,42,43,60,76,79,80,86,97]
Filtering
>>> rdd.filter(lambda x: "a" in x).collect() #Filter the RDD
[('a',7),('a',2)]
>>> rdd5.distinct().collect() #Return distinct RDD values
['a' ,2, 'b',7]
>>> rdd.keys().collect() #Return (key,value) RDD's keys
['a', 'a', 'b']
>>> def g (x): print(x)
>>> rdd.foreach(g) #Apply a function to all RDD elements
('a', 7)
('b', 2)
('a', 2)
Reducing
>>> rdd.reduceByKey(lambda x,y : x+y).collect() #Merge the rdd values for each key
[('a',9),('b',2)]
>>> rdd.reduce(lambda a, b: a+ b) #Merge the rdd values
('a', 7, 'a' , 2 , 'b' , 2)
Grouping by
#Return RDD of grouped values
>>> rdd3.groupBy(lambda x: x % 2) \
        .mapValues(list) \
        .collect()
#Group rdd by key
>>> rdd.groupByKey() \
       .mapValues(list) \
       .collect()
[('a',[7,2]),('b',[2])]
Aggregating
>>> seqOp = (lambda x,y: (x[0]+y, x[1]+1)) #Add a value into a (sum, count) accumulator
>>> combOp = (lambda x,y: (x[0]+y[0], x[1]+y[1])) #Merge two (sum, count) accumulators
#Aggregate RDD elements of each partition and then the results
>>> rdd3.aggregate((0,0),seqOp,combOp)
(4950,100)
#Aggregate values of each RDD key
>>> rdd.aggregateByKey((0,0),seqOp,combOp).collect()
[('a',(9,2)), ('b',(2,1))]
#Aggregate the elements of each partition, and then the results
>>> from operator import add
>>> rdd3.fold(0,add)
4950
#Merge the values for each key
>>> rdd.foldByKey(0, add).collect()
[('a' ,9), ('b' ,2)]
#Create tuples of RDD elements by applying a function
>>> rdd3.keyBy(lambda x: x+x).collect()
>>> rdd.subtract(rdd2).collect() #Return each rdd value not contained in rdd2
[('b' ,2), ('a' ,7)]
#Return each (key,value) pair of rdd2 with no matching key in rdd
>>> rdd2.subtractByKey(rdd).collect()
[('d', 1)]
>>> rdd.cartesian(rdd2).collect() #Return the Cartesian product of rdd and rdd2
>>> rdd2.sortBy(lambda x: x[1]).collect() #Sort RDD by given function
[('d',1),('b',1),('a',2)]
>>> rdd2.sortByKey().collect() #Sort (key, value) RDD by key
[('a',2),('b',1),('d',1)]
>>> rdd.repartition(4) #New RDD with 4 partitions
>>> rdd.coalesce(1) #Decrease the number of partitions in the RDD to 1
>>> rdd.saveAsTextFile("rdd.txt")
>>> rdd.saveAsHadoopFile("hdfs:// namenodehost/parent/child",
'org.apache.hadoop.mapred.TextOutputFormat')
>>> sc.stop()
$ ./bin/spark-submit examples/src/main/python/pi.py
Have this Cheat Sheet at your fingertips
Original article source at https://www.datacamp.com
#pyspark #cheatsheet #spark #python
This PySpark SQL cheat sheet is your handy companion to Apache Spark DataFrames in Python and includes code samples.
You'll probably already know about Apache Spark, the fast, general, open-source engine for big data processing; it has built-in modules for streaming, SQL, machine learning, and graph processing. Spark allows you to speed up analytic applications by up to 100 times compared to other technologies on the market today. Interfacing Spark with Python is easy with PySpark: this Spark Python API exposes the Spark programming model to Python.
Now, it's time to tackle the Spark SQL module, which is meant for structured data processing, and the DataFrame API, which is not only available in Python, but also in Scala, Java, and R.
Without further ado, here's the cheat sheet:
This PySpark SQL cheat sheet covers the basics of working with Apache Spark DataFrames in Python: from initializing the SparkSession to creating DataFrames, inspecting the data, handling duplicate values, querying, adding, updating or removing columns, grouping, filtering or sorting data. You'll also see how to run SQL queries programmatically, how to save your data to Parquet and JSON files, and how to stop your SparkSession.
Spark SQL is Apache Spark's module for working with structured data.
A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read Parquet files.
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession \
.builder\
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
>>> from pyspark.sql.types import *
>>> from pyspark.sql import Row
Infer Schema
>>> sc = spark.sparkContext
>>> lines = sc.textFile("people.txt")
>>> parts = lines.map(lambda l: l.split(","))
>>> people = parts.map(lambda p: Row(name=p[0], age=int(p[1])))
>>> peopledf = spark.createDataFrame(people)
Specify Schema
>>> people = parts.map(lambda p: Row(name=p[0],
age=int(p[1].strip())))
>>> schemaString = "name age"
>>> fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
>>> schema = StructType(fields)
>>> spark.createDataFrame(people, schema).show()
From Spark Data Sources
JSON
>>> df = spark.read.json("customer.json")
>>> df.show()
>>> df2 = spark.read.load("people.json", format="json")
Parquet files
>>> df3 = spark.read.load("users.parquet")
TXT files
>>> df4 = spark.read.text("people.txt")
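For reference, the query examples below assume that customer.json holds nested records; a hypothetical record consistent with the columns used later (firstName, lastName, age, address, phoneNumber) could look like this:
{"firstName": "Jane", "lastName": "Smith", "age": 25,
 "address": {"city": "Oslo", "postalCode": "0001", "state": "OS", "streetAddress": "Main St 1"},
 "phoneNumber": [{"number": "555-1234", "type": "home"}]}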
#Filter entries of age, only keep those records of which the values are >24
>>> df.filter(df["age"]>24).show()
>>> df = df.dropDuplicates()
>>> from pyspark.sql import functions as F
Select
>>> df.select("firstName").show() #Show all entries in firstName column
>>> df.select("firstName","lastName") \
.show()
>>> df.select("firstName", #Show all entries in firstName, age and type
"age",
explode("phoneNumber") \
.alias("contactInfo")) \
.select("contactInfo.type",
"firstName",
"age") \
.show()
#Show all entries in firstName and age, add 1 to the entries of age
>>> df.select(df["firstName"], df["age"] + 1) \
      .show()
>>> df.select(df['age'] > 24).show() #Show all entries where age >24
When
>>> df.select("firstName", #Show firstName and 0 or 1 depending on age >30
F.when(df.age > 30, 1) \
.otherwise(0)) \
.show()
#Show firstName if in the given options
>>> df[df.firstName.isin("Jane","Boris")] \
      .collect()
Like
>>> df.select("firstName", #Show firstName, and lastName is TRUE if lastName is like Smith
df.lastName.like("Smith")) \
.show()
Startswith - Endswith
>>> df.select("firstName", #Show firstName, and TRUE if lastName starts with Sm
df.lastName \
.startswith("Sm")) \
.show()
#Show last names ending in th
>>> df.select(df.lastName.endswith("th")) \
      .show()
Substring
#Return substrings of firstName
>>> df.select(df.firstName.substr(1, 3) \
.alias("name")) \
.collect()
Between
#Show age: values are TRUE if between 22 and 24
>>> df.select(df.age.between(22, 24)) \
.show()
Adding Columns
>>> df = df.withColumn('city',df.address.city) \
.withColumn('postalCode',df.address.postalCode) \
.withColumn('state',df.address.state) \
.withColumn('streetAddress',df.address.streetAddress) \
.withColumn('telePhoneNumber', explode(df.phoneNumber.number)) \
.withColumn('telePhoneType', explode(df.phoneNumber.type))
Updating Columns
>>> df = df.withColumnRenamed('telePhoneNumber', 'phoneNumber')
Removing Columns
>>> df = df.drop("address", "phoneNumber")
>>> df = df.drop(df.address).drop(df.phoneNumber)
>>> df.na.fill(50).show() #Replace null values
>>> df.na.drop().show() #Return new df omitting rows with null values
#Return new df replacing one value with another
>>> df.na \
.replace(10, 20) \
.show()
#Group by age, count the members in the groups
>>> df.groupBy("age") \
.count() \
.show()
>>> peopledf.sort(peopledf.age.desc()).collect()
>>> df.sort("age", ascending=False).collect()
>>> df.orderBy(["age","city"],ascending=[0,1])\
.collect()
#df with 10 partitions
>>> df.repartition(10) \
.rdd \
.getNumPartitions()
>>> df.coalesce(1).rdd.getNumPartitions() #df with 1 partition
Registering DataFrames as Views
>>> peopledf.createGlobalTempView("people")
>>> df.createTempView("customer")
>>> df.createOrReplaceTempView("customer")
Query Views
>>> df5 = spark.sql("SELECT * FROM customer").show()
>>> peopledf2 = spark.sql("SELECT * FROM global_temp.people")\
.show()
>>> df.dtypes #Return df column names and data types
>>> df.show() #Display the content of df
>>> df.head() #Return first n rows
>>> df.first() #Return first row
>>> df.take(2) #Return the first n rows
>>> df.schema #Return the schema of df
>>> df.describe().show() #Compute summary statistics
>>> df.columns #Return the columns of df
>>> df.count() #Count the number of rows in df
>>> df.distinct().count() #Count the number of distinct rows in df
>>> df.printSchema() #Print the schema of df
>>> df.explain() #Print the (logical and physical) plans
Data Structures
>>> rdd1 = df.rdd #Convert df into an RDD
>>> df.toJSON().first() #Convert df into a RDD of string
>>> df.toPandas() #Return the contents of df as Pandas DataFrame
Write & Save to Files
>>> df.select("firstName", "city")\
.write \
.save("nameAndCity.parquet")
>>> df.select("firstName", "age") \
.write \
.save("namesAndAges.json",format="json")
>>> spark.stop()
Have this Cheat Sheet at your fingertips
Original article source at https://www.datacamp.com
#pyspark #cheatsheet #spark #dataframes #python #bigdata
The damerau-levenshtein gem allows you to find the edit distance between two UTF-8 or ASCII encoded strings with O(N*M) efficiency.
This gem implements the pure Levenshtein algorithm and the Damerau modification of it (where a transposition of 2 adjacent characters counts as 1 edit distance). It also includes the Boehmer & Rees 2008 modification of the Damerau algorithm, where transpositions of blocks larger than 1 character are taken into account as well (Rees 2014).
require "damerau-levenshtein"
DamerauLevenshtein.distance("Something", "Smoething") #returns 1
The gem also returns a diff between two strings according to the Levenshtein algorithm. The diff is expressed with the tags <ins>, <del>, and <subst>. Such tags make it possible to highlight the difference between strings in a flexible way.
require "damerau-levenshtein"
differ = DamerauLevenshtein::Differ.new
differ.run("corn", "cron")
# output: ["c<subst>or</subst>n", "c<subst>ro</subst>n"]
sudo apt-get install build-essential libgmp3-dev
gem install damerau-levenshtein
require "damerau-levenshtein"
dl = DamerauLevenshtein
dl.distance("Something", "Smoething") #returns 1
dl.distance("Something", "Smoething", 0) #returns 2
dl.distance("Something", "meSothing", 2) #returns 2 instead of 4
dl.distance("Sjöstedt", "Sjostedt") #returns 1
dl.array_distance([1,2,3,5], [1,2,3,4]) #returns 1
differ = DamerauLevenshtein::Differ.new
differ.run("Something", "smthg")
differ = DamerauLevenshtein::Differ.new
differ.format = :raw
differ.run("Something", "smthg")
DamerauLevenshtein.version
#returns version number of the gem
DamerauLevenshtein.distance(string1, string2, block_size, max_distance)
#returns edit distance between 2 strings
DamerauLevenshtein.string_distance(string1, string2, block_size, max_distance)
# an alias for .distance
DamerauLevenshtein.array_distance(array1, array2, block_size, max_distance)
# returns edit distance between 2 arrays of integers
DamerauLevenshtein.distance and .array_distance take 4 arguments:
string1 (array1 for .array_distance)
string2 (array2 for .array_distance)
block_size (default is 1)
max_distance (default is 10)
block_size determines the maximum number of characters in a transposition block:
block_size = 0 (transposition does not count -- it is a pure Levenshtein algorithm)
block_size = 1 (transposition between 2 adjacent characters -- it is the pure Damerau-Levenshtein algorithm)
block_size = 2 (transposition between blocks as big as 2 characters -- so abcd and cdab count as edit distance 2, not 4)
block_size = 3 (transposition between blocks as big as 3 characters -- so abcdef and defabc count as edit distance 3, not 6)
etc.
max_distance is a threshold after which the algorithm gives up and returns max_distance instead of the real edit distance.
The Levenshtein algorithm is expensive, so it makes sense to give up when the edit distance is becoming too big. The max_distance argument does just that.
DamerauLevenshtein.distance("abcdefg", "1234567", 0, 3)
# output: 4 -- it gave up when edit distance exceeded 3
DamerauLevenshtein::Differ.new creates an instance of the Differ class, which returns the difference between two strings.
differ.format shows the current format for diffs. The default is the :tag format.
differ.format = :raw changes the current format for diffs. Possible values are :tag and :raw.
differ.run("String1", "String2") returns the difference between two strings.
For example:
differ = DamerauLevenshtein::Differ.new
differ.run("Something", "smthng")
# output: ["<ins>S</ins><subst>o</subst>m<ins>e</ins>th<ins>i</ins>ng",
# "<del>S</del><subst>s</subst>m<del>e</del>th<del>i</del>ng"]
Or with parsing:
require "damerau-levenshtein"
require "nokogiri"
differ = DamerauLevenshtein::Differ.new
res = differ.run("Something", "Smothing!")
nodes = Nokogiri::XML("<root>#{res.first}</root>")
markup = nodes.root.children.map do |n|
case n.name
when "text"
n.text
when "del"
"~~#{n.children.first.text}~~"
when "ins"
"*#{n.children.first.text}*"
when "subst"
"**#{n.children.first.text}**"
end
end.join("")
puts markup
Output
S*o*m**e**thing~~!~~
This gem follows the practices of Semantic Versioning.
Author: GlobalNamesArchitecture
Source Code: https://github.com/GlobalNamesArchitecture/damerau-levenshtein
License: MIT license
Who invented JavaScript, and how does it work? In our previous article (What is PHP) we covered another programming language; today we will talk about what JavaScript is and why it is used. You will get the answers to all such questions, and much other information about JavaScript, here today. We hope this information will be useful to you.
The JavaScript language was invented by Brendan Eich in 1995 and was inspired by the Java programming language. JavaScript's first name was Mocha, chosen by Marc Andreessen, a founder of Netscape; in the same year Mocha was renamed LiveScript, and in December 1995 it was renamed JavaScript, the name still used today.
JavaScript is a client-side scripting language used with HTML (Hypertext Markup Language). JavaScript, often shortened to JS, is an interpreted, object-oriented language, and JavaScript code can be run in any normal web browser. To run JavaScript code, the browser's JavaScript support has to be enabled, although most web browsers already have JavaScript enabled.
Today almost all websites use JavaScript as a web technology, and its scope will only grow in the coming years, so if you want to become a programmer, learning JavaScript can be very beneficial.
In JavaScript, document.write is used to display a string in the browser.
<script type="text/javascript">
document.write("Hello World!");
</script>
<script type="text/javascript">
//single line comment
/* document.write("Hello"); */
</script>
#javascript #javascript code #javascript hello world #what is javascript #who invented javascript