1627091100
In this lesson we will create the view, routes, and controller method to show a single post. As a side note, we will also update the application's timezone and add a meta description for social sharing.
Timestamps:
0:00 Introduction and setup
1:23 Return view in Blog Controller
2:00 Edit show view
6:25 Add meta to view
15:41 Timezone in Laravel App
#laravel #jetstream
1653465344
This PySpark SQL cheat sheet is your handy companion to Apache Spark DataFrames in Python and includes code samples.
You probably already know about Apache Spark, the fast, general-purpose, open-source engine for big data processing. It has built-in modules for streaming, SQL, machine learning, and graph processing. Spark can run analytic applications up to 100 times faster than other technologies on the market today. Interfacing Spark with Python is easy with PySpark: this Spark Python API exposes the Spark programming model to Python.
Now, it's time to tackle the Spark SQL module, which is meant for structured data processing, and the DataFrame API, which is not only available in Python, but also in Scala, Java, and R.
Without further ado, here's the cheat sheet:
This PySpark SQL cheat sheet covers the basics of working with Apache Spark DataFrames in Python: from initializing the SparkSession to creating DataFrames, inspecting the data, handling duplicate values, querying, adding, updating or removing columns, and grouping, filtering or sorting data. You'll also see how to run SQL queries programmatically, how to save your data to Parquet and JSON files, and how to stop your SparkSession.
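If you'd like to run the snippets below without the customer.json and people.txt files they reference, here is a minimal, self-contained sketch. The column names firstName, lastName, and age are assumptions chosen to match the examples in this sheet, and the rows are made-up sample data:
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.appName("cheatsheet-demo").getOrCreate()
>>> df = spark.createDataFrame( #Build a small df directly from tuples
        [("Jane", "Smith", 25), ("Boris", "Ivanov", 31)],
        ["firstName", "lastName", "age"])
>>> df.show()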
Spark SQL is Apache Spark's module for working with structured data.
A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read Parquet files.
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession \
.builder\
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
>>> from pyspark.sql.types import *
Infer Schema
>>> from pyspark.sql import Row
>>> sc = spark.sparkContext
>>> lines = sc.textFile("people.txt")
>>> parts = lines.map(lambda l: l.split(","))
>>> people = parts.map(lambda p: Row(name=p[0], age=int(p[1])))
>>> peopledf = spark.createDataFrame(people)
Specify Schema
>>> people = parts.map(lambda p: Row(name=p[0],
age=int(p[1].strip())))
>>> schemaString = "name age"
>>> fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
>>> schema = StructType(fields)
>>> spark.createDataFrame(people, schema).show()
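Note that the schema string above types every field, including age, as a string. A variant not in the original cheat sheet, which keeps age numeric by spelling out each field's type:
>>> from pyspark.sql.types import StructType, StructField, StringType, IntegerType
>>> typed_schema = StructType([
        StructField("name", StringType(), True), #Nullable string column
        StructField("age", IntegerType(), True)]) #Nullable integer column
>>> spark.createDataFrame(people, typed_schema).show()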
From Spark Data Sources
JSON
>>> df = spark.read.json("customer.json")
>>> df.show()
>>> df2 = spark.read.load("people.json", format="json")
Parquet files
>>> df3 = spark.read.load("users.parquet")
TXT files
>>> df4 = spark.read.text("people.txt")
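CSV files can be read the same way; a small addition not in the original sheet, assuming a hypothetical people.csv with a header row:
>>> df_csv = spark.read.csv("people.csv",
                            header=True, #Use the first row as column names
                            inferSchema=True) #Let Spark guess column types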
Filter
>>> df.filter(df["age"] > 24).show() #Filter on age, keeping only rows where age > 24
Duplicate Values
>>> df = df.dropDuplicates()
Queries
>>> from pyspark.sql import functions as F
Select
>>> df.select("firstName").show() #Show all entries in firstName column
>>> df.select("firstName","lastName") \
.show()
>>> df.select("firstName", #Show all entries in firstName, age and type
"age",
explode("phoneNumber") \
.alias("contactInfo")) \
.select("contactInfo.type",
"firstName",
"age") \
.show()
>>> df.select(df["firstName"],df["age"]+ 1) #Show all entries in firstName and age, .show() add 1 to the entries of age
>>> df.select(df['age'] > 24).show() #Show all entries where age >24
When
>>> df.select("firstName", #Show firstName and 0 or 1 depending on age >30
F.when(df.age > 30, 1) \
.otherwise(0)) \
.show()
>>> df[df.firstName.isin("Jane","Boris")] \ #Show firstName if in the given options
      .collect()
Like
>>> df.select("firstName", #Show firstName, and lastName is TRUE if lastName is like Smith
df.lastName.like("Smith")) \
.show()
Startswith - Endswith
>>> df.select("firstName", #Show firstName, and TRUE if lastName starts with Sm
df.lastName \
.startswith("Sm")) \
.show()
>>> df.select(df.lastName.endswith("th"))\ #Show last names ending in th
.show()
Substring
>>> df.select(df.firstName.substr(1, 3) \ #Return substrings of firstName
.alias("name")) \
.collect()
Between
>>> df.select(df.age.between(22, 24)) \ #Show age: values are TRUE if between 22 and 24
.show()
Adding Columns
>>> df = df.withColumn('city', df.address.city) \
           .withColumn('postalCode', df.address.postalCode) \
           .withColumn('state', df.address.state) \
           .withColumn('streetAddress', df.address.streetAddress) \
           .withColumn('telePhoneNumber', F.explode(df.phoneNumber.number)) \
           .withColumn('telePhoneType', F.explode(df.phoneNumber.type))
Updating Columns
>>> df = df.withColumnRenamed('telePhoneNumber', 'phoneNumber')
Removing Columns
>>> df = df.drop("address", "phoneNumber")
>>> df = df.drop(df.address).drop(df.phoneNumber)
Missing & Replacing Values
>>> df.na.fill(50).show() #Replace null values with 50
>>> df.na.drop().show() #Return new df omitting rows with null values
>>> df.na \ #Return new df replacing one value with another
.replace(10, 20) \
.show()
>>> df.groupBy("age")\ #Group by age, count the members in the groups
.count() \
.show()
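Beyond count(), grouped data accepts other aggregate functions through agg(). A short sketch, assuming the city column added under Adding Columns above:
>>> df.groupBy("city") \
      .agg(F.avg("age").alias("avgAge")) \
      .show() #Average age per city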
Sort
>>> peopledf.sort(peopledf.age.desc()).collect()
>>> df.sort("age", ascending=False).collect()
>>> df.orderBy(["age","city"],ascending=[0,1])\
.collect()
Repartitioning
>>> df.repartition(10) \ #df with 10 partitions
.rdd \
.getNumPartitions()
>>> df.coalesce(1).rdd.getNumPartitions() #df with 1 partition
Registering DataFrames as Views
>>> peopledf.createGlobalTempView("people")
>>> df.createTempView("customer")
>>> df.createOrReplaceTempView("customer")
Query Views
>>> df5 = spark.sql("SELECT * FROM customer")
>>> df5.show()
>>> peopledf2 = spark.sql("SELECT * FROM global_temp.people")
>>> peopledf2.show()
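Any SQL that Spark supports can run against a registered view; for example, against the customer view created above:
>>> spark.sql("SELECT firstName, age FROM customer WHERE age > 24").show()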
Inspect Data
>>> df.dtypes #Return df column names and data types
>>> df.show() #Display the content of df
>>> df.head() #Return the first row (head(n) returns the first n rows)
>>> df.first() #Return the first row
>>> df.take(2) #Return the first n rows as a list of Rows
>>> df.schema #Return the schema of df
>>> df.describe().show() #Compute summary statistics
>>> df.columns #Return the columns of df
>>> df.count() #Count the number of rows in df
>>> df.distinct().count() #Count the number of distinct rows in df
>>> df.printSchema() #Print the schema of df
>>> df.explain() #Print the (logical and physical) plans
Data Structures
>>> rdd1 = df.rdd #Convert df into an RDD
>>> df.toJSON().first() #Convert df into an RDD of strings
>>> df.toPandas() #Return the contents of df as Pandas DataFrame
Write & Save to Files
>>> df.select("firstName", "city")\
.write \
.save("nameAndCity.parquet")
>>> df.select("firstName", "age") \
.write \
.save("namesAndAges.json",format="json")
Stopping SparkSession
>>> spark.stop()
Have this Cheat Sheet at your fingertips
Original article source at https://www.datacamp.com
#pyspark #cheatsheet #spark #dataframes #python #bigdata
1625805060
Hello Guys,
In this example, we will discuss Laravel 8 authentication with Jetstream. This post gives you a simple and easy example of Laravel 8 authentication using Jetstream, where you can see Laravel 8 Jetstream auth with Livewire.
These are amazing features in Laravel 8, which has completely changed its authentication scaffolding.
#laravel #authentication #livewire #jetstream #laravel 8 authentication using jetstream example #laravel 8 jetstream auth with livewire
1617089618
Hello everyone! I just updated this tutorial for Laravel 8. In this tutorial, we’ll go through the basics of the Laravel framework by building a simple blogging system. Note that this tutorial is only for beginners who are interested in web development but don’t know where to start. Check it out if you are interested: Laravel Tutorial For Beginners
Laravel is a very powerful framework that follows the MVC structure. It is designed for web developers who need a simple, elegant yet powerful toolkit to build a fully-featured website.
#laravel 8 tutorial #laravel 8 tutorial crud #laravel 8 tutorial point #laravel 8 auth tutorial #laravel 8 project example #laravel 8 tutorial for beginners
1599536794
In this post, I will show you what's new in Laravel 8.
https://www.tutsmake.com/laravel-8-new-features-release-notes/
#laravel 8 features #laravel 8 release date #laravel 8 tutorial #news - laravel 8 new features #what's new in laravel 8 #laravel 8 release notes
1606794037
Laravel 8 Livewire form wizard example. In this tutorial, I will show you how to implement a multi-step form (form wizard) using the Livewire package in a Laravel 8 app from scratch.
Follow the steps below to easily implement a multi-step form or form wizard in a Laravel 8 app with Livewire:
https://www.tutsmake.com/laravel-8-livewire-form-wizard-tutorial/
#laravel multi step form wizard #laravel 8 livewire multi step form wizard #livewire multi step form bootstrap laravel #laravel multi step form wizard with livewire #laravel livewire multi step form example #laravel livewire wizard form example