In ASP.NET MVC, the File helper method built into your controller gives you multiple options for retrieving the file you want to send to the client. You can pass the File helper method an array of bytes, a FileStream, or the path name of a file, depending on whether you want to return something in memory (the array of bytes), an open file (the FileStream) or a file on your server's hard disk (a path).
In .NET Core, you still have the File helper method, but it has one limitation: it assumes that any file path you pass it is a virtual path (a relative path within your website). To compensate, this new version of the File method simplifies two common operations that require additional code in ASP.NET MVC: checking for new content and returning part of the file when the client includes a Range header in its request.
To handle physical path names in .NET Core, you’ll want to switch to using the PhysicalFile helper method. Other than assuming any file name is a physical path to somewhere on your server, this method works just like the new File helper method.
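To make the difference concrete, here is a minimal sketch of both helpers in an ASP.NET Core controller. The controller name, file paths, content types and ETag value are illustrative assumptions, not code from the article; the lastModified/entityTag/enableRangeProcessing overload is what lets the framework answer conditional (If-Modified-Since / If-None-Match) and Range requests for you.
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Net.Http.Headers;

public class DownloadController : Controller
{
    // Virtual path: resolved relative to the application's web root (wwwroot).
    public IActionResult Report()
    {
        return File("/files/report.pdf", "application/pdf",
            lastModified: DateTimeOffset.UtcNow.AddHours(-1),    // illustrative timestamp
            entityTag: new EntityTagHeaderValue("\"report-v1\""), // illustrative ETag
            enableRangeProcessing: true);                          // honor Range requests
    }

    // Physical path: an absolute path on the server's disk.
    public IActionResult Archive()
    {
        return PhysicalFile(@"C:\data\archive\report.pdf", "application/pdf");
    }
}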
In this article, we'll discuss how to use jQuery Ajax for ASP.NET Core MVC CRUD operations using a Bootstrap modal. With jQuery Ajax, we can make HTTP requests to controller action methods without reloading the entire page, much like a single-page application.
To demonstrate the CRUD operations – insert, update, delete and retrieve – the project will deal with the details of a normal bank transaction. GitHub repository for this demo project: https://bit.ly/33KTJAu.
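Before walking through the project setup, here is a rough sketch of the server side of that pattern: the jQuery Ajax call posts the modal's form data to a controller action, and the action returns JSON (or a partial view) instead of a redirect, so the page never reloads. The controller and action names below are placeholders for illustration, not the demo project's actual code.
using Microsoft.AspNetCore.Mvc;

public class TransactionController : Controller
{
    // Hypothetical action posted to by $.ajax with the serialized form data.
    [HttpPost]
    public IActionResult AddOrEdit(TransactionModel model)
    {
        if (!ModelState.IsValid)
        {
            // Return the validation errors so the script can display them in the modal.
            return BadRequest(ModelState);
        }

        // ... save the transaction with EF Core here ...

        // A JSON response lets jQuery close the Bootstrap modal and refresh the grid.
        return Json(new { isValid = true });
    }
}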
Sub-topics discussed:
In Visual Studio 2019, go to File > New > Project (Ctrl + Shift + N).
From the new project window, select ASP.NET Core Web Application.
Once you have provided the project name and location, select Web Application (Model-View-Controller) and uncheck HTTPS Configuration. These steps will create a brand new ASP.NET Core MVC project.
Let's create a database for this application using Entity Framework Core. For that, we have to install the corresponding NuGet packages: right-click the project in Solution Explorer, select Manage NuGet Packages, and install the following three packages from the Browse tab.
Now let's define the DB model class file – /Models/TransactionModel.cs.
using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class TransactionModel
{
    [Key]
    public int TransactionId { get; set; }

    [Column(TypeName = "nvarchar(12)")]
    [DisplayName("Account Number")]
    [Required(ErrorMessage = "This Field is required.")]
    [MaxLength(12, ErrorMessage = "Maximum 12 characters only")]
    public string AccountNumber { get; set; }

    [Column(TypeName = "nvarchar(100)")]
    [DisplayName("Beneficiary Name")]
    [Required(ErrorMessage = "This Field is required.")]
    public string BeneficiaryName { get; set; }

    [Column(TypeName = "nvarchar(100)")]
    [DisplayName("Bank Name")]
    [Required(ErrorMessage = "This Field is required.")]
    public string BankName { get; set; }

    [Column(TypeName = "nvarchar(11)")]
    [DisplayName("SWIFT Code")]
    [Required(ErrorMessage = "This Field is required.")]
    [MaxLength(11)]
    public string SWIFTCode { get; set; }

    [DisplayName("Amount")]
    [Required(ErrorMessage = "This Field is required.")]
    public int Amount { get; set; }

    [DisplayFormat(DataFormatString = "{0:MM/dd/yyyy}")]
    public DateTime Date { get; set; }
}
Here we've defined the model properties for the transaction with proper validation. Now let's define the DbContext class for EF Core.
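The DbContext itself isn't shown in this excerpt, so the following is only a minimal sketch of what such a class typically looks like; the class name, DbSet name and constructor pattern are assumptions, not the article's code.
using Microsoft.EntityFrameworkCore;

public class TransactionDbContext : DbContext
{
    public TransactionDbContext(DbContextOptions<TransactionDbContext> options)
        : base(options)
    {
    }

    // Maps TransactionModel to a table that EF Core creates via migrations.
    public DbSet<TransactionModel> Transactions { get; set; }
}
The context would then be registered in Startup.ConfigureServices with a connection string, and the Add-Migration / Update-Database commands would generate the database.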
User registration and authentication are mandatory in any application when you have even a little concern about privacy. Hence, almost all application development starts with an authentication module. In this article, we will discuss the quickest way to use **ASP.NET Core Identity for User Login and Registration** in a new or existing MVC application.
Sub-topics discussed:
ASP.NET Core Identity is an API that provides both a user interface (UI) and functions for user authentication, registration, authorization, etc. Modules/APIs like this are really helpful and speed up the development process. It comes with the ASP.NET Core framework and has been used in many applications before, which makes the API more dependable and trustworthy.
ASP.NET Core MVC with user authentication can easily be accomplished using Identity.UI. While creating the MVC project, you just need to select Authentication as Individual User Accounts.
The rest will be handled by ASP.NET Core Identity UI. It already contains the Razor view pages and backend code for an authentication system. But that's not what we want in most cases: we want to customize ASP.NET Core Identity as per our requirements. That's what we do here.
First of all, I will create a brand new ASP.NET Core MVC application without any authentication selected. We can add ASP.NET Core Identity to the project later.
In Visual Studio 2019, go to File > New > Project (Ctrl + Shift + N). From the new project window, select ASP.NET Core Web Application.
Once you provide the project name and location, a new window will open. Select Web Application (Model-View-Controller), uncheck HTTPS Configuration, and DO NOT select any authentication method. These steps will create a brand new ASP.NET Core MVC project.
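Since the excerpt stops here, what follows is only a rough sketch, under assumptions, of how ASP.NET Core Identity is usually wired into a project created without authentication; the context class name and connection string key are placeholders, not the article's code.
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Identity's user/role tables come from IdentityDbContext.
public class ApplicationDbContext : IdentityDbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options) { }
}

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // EF Core store for the Identity tables ("DefaultConnection" is an assumed key).
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

        // Registers Identity together with its default UI (login/register Razor pages).
        services.AddDefaultIdentity<IdentityUser>()
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddControllersWithViews();
        services.AddRazorPages();
    }
}
In Configure, app.UseAuthentication() has to run before app.UseAuthorization() so that the Identity middleware can populate the user before endpoints are resolved.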
This PySpark cheat sheet with code samples covers the basics like initializing Spark in Python, loading data, sorting, and repartitioning.
Apache Spark is generally known as a fast, general-purpose and open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. It allows you to speed up analytic applications up to 100 times compared to other technologies on the market today. You can interface Spark with Python through "PySpark", the Spark Python API that exposes the Spark programming model to Python.
Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough to get familiar with all the functions that you can use to query, transform, inspect, ... your data. What's more, if you've never worked with any other programming language, or if you're new to the field, it might be hard to distinguish between the various RDD operations.
Let's face it, map() and flatMap() are different enough, but it might still come as a challenge to decide which one you really need when you're faced with them in your analysis. Or what about other functions, like reduce() and reduceByKey()?
Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.
This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all. You'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet.
Note that the examples in the document take small data sets to illustrate the effect of specific functions on your data. In real life data analysis, you'll be using Spark to analyze big data.
PySpark is the Spark Python API that exposes the Spark programming model to Python.
>>> from pyspark import SparkContext
>>> sc = SparkContext(master = 'local[2]')
>>> sc.version #Retrieve SparkContext version
>>> sc.pythonVer #Retrieve Python version
>>> sc.master #Master URL to connect to
>>> str(sc.sparkHome) #Path where Spark is installed on worker nodes
>>> str(sc.sparkUser()) #Retrieve name of the Spark User running SparkContext
>>> sc.appName #Return application name
>>> sc.applicationId #Retrieve application ID
>>> sc.defaultParallelism #Return default level of parallelism
>>> sc.defaultMinPartitions #Default minimum number of partitions for RDDs
>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
.setMaster("local")
.setAppName("My app")
. set ("spark. executor.memory", "lg"))
>>> sc = SparkContext(conf = conf)
In the PySpark shell, a special interpreter-aware SparkContext is already created in the variable called sc.
$ ./bin/spark-shell --master local[2]
$ ./bin/pyspark --master local[4] --py-files code.py
Set which master the context connects to with the --master argument, and add Python .zip, .egg or .py files to the runtime path by passing a comma-separated list to --py-files.
>>> rdd = sc.parallelize([('a',7),('a',2),('b',2)])
>>> rdd2 = sc.parallelize([('a',2),('d',1),('b',1)])
>>> rdd3 = sc.parallelize(range(100))
>>> rdd = sc.parallelize([("a",["x","y","z"]),
("b" ["p","r,"])])
Read either one text file from HDFS, a local file system or any Hadoop-supported file system URI with textFile(), or read in a directory of text files with wholeTextFiles().
>>> textFile = sc.textFile("/my/directory/*.txt")
>>> textFile2 = sc.wholeTextFiles("/my/directory/")
>>> rdd.getNumPartitions() #List the number of partitions
>>> rdd.count() #Count RDD instances
3
>>> rdd.countByKey() #Count RDD instances by key
defaultdict(<type 'int'>,{'a':2,'b':1})
>>> rdd.countByValue() #Count RDD instances by value
defaultdict(<type 'int'>,{('b',2):1,('a',2):1,('a',7):1})
>>> rdd.collectAsMap() #Return (key,value) pairs as a dictionary
{'a': 2, 'b': 2}
>>> rdd3.sum() #Sum of RDD elements
4950
>>> sc.parallelize([]).isEmpty() #Check whether RDD is empty
True
>>> rdd3.max() #Maximum value of RDD elements
99
>>> rdd3.min() #Minimum value of RDD elements
0
>>> rdd3.mean() #Mean value of RDD elements
49.5
>>> rdd3.stdev() #Standard deviation of RDD elements
28.866070047722118
>>> rdd3.variance() #Compute variance of RDD elements
833.25
>>> rdd3.histogram(3) #Compute histogram by bins
([0,33,66,99],[33,33,34])
>>> rdd3.stats() #Summary statistics (count, mean, stdev, max & min)
#Apply a function to each RDD element
>>> rdd.map(lambda x: x+(x[1],x[0])).collect()
[('a',7,7,'a'),('a',2,2,'a'),('b',2,2,'b')]
#Apply a function to each RDD element and flatten the result
>>> rdd5 = rdd.flatMap(lambda x: x+(x[1],x[0]))
>>> rdd5.collect()
['a',7,7,'a','a',2,2,'a','b',2,2,'b']
#Apply a flatMap function to each (key,value) pair of rdd4 without changing the keys
>>> rdd4.flatMapValues(lambda x: x).collect()
[('a', 'x'), ('a', 'y'), ('a', 'z'),('b', 'p'),('b', 'r')]
Getting
>>> rdd.collect() #Return a list with all RDD elements
[('a', 7), ('a', 2), ('b', 2)]
>>> rdd.take(2) #Take first 2 RDD elements
[('a', 7), ('a', 2)]
>>> rdd.first() #Take first RDD element
('a', 7)
>>> rdd.top(2) #Take top 2 RDD elements
[('b', 2), ('a', 7)]
Sampling
>>> rdd3.sample(False, 0.15, 81).collect() #Return sampled subset of rdd3
[3,4,27,31,40,41,42,43,60,76,79,80,86,97]
Filtering
>>> rdd.filter(lambda x: "a" in x).collect() #Filter the RDD
[('a',7),('a',2)]
>>> rdd5.distinct().collect() #Return distinct RDD values
['a', 2, 'b', 7]
>>> rdd.keys().collect() #Return (key,value) RDD's keys
['a', 'a', 'b']
>>> def g(x): print(x)
>>> rdd.foreach(g) #Apply a function to all RDD elements
('a', 7)
('b', 2)
('a', 2)
Reducing
>>> rdd.reduceByKey(lambda x,y : x+y).collect() #Merge the rdd values for each key
[('a',9),('b',2)]
>>> rdd.reduce(lambda a, b: a+ b) #Merge the rdd values
('a', 7, 'a', 2, 'b', 2)
Grouping by
>>> rdd3.groupBy(lambda x: x % 2) #Return RDD of grouped values
.mapValues(list)
.collect()
>>> rdd.groupByKey() #Group rdd by key
.mapValues(list)
.collect()
[('a',[7,2]),('b',[2])]
Aggregating
>>> seqOp = (lambda x,y: (x[0]+y,x[1]+1))
>>> combOp = (lambda x,y:(x[0]+y[0],x[1]+y[1]))
#Aggregate RDD elements of each partition and then the results
>>> rdd3.aggregate((0,0),seqOp,combOp)
(4950,100)
#Aggregate values of each RDD key
>>> rdd.aggregateByKey((0,0),seqOp,combOp).collect()
[('a',(9,2)), ('b',(2,1))]
#Aggregate the elements of each partition, and then the results
>>> from operator import add
>>> rdd3.fold(0,add)
4950
#Merge the values for each key
>>> rdd.foldByKey(0, add).collect()
[('a' ,9), ('b' ,2)]
#Create tuples of RDD elements by applying a function
>>> rdd3.keyBy(lambda x: x+x).collect()
>>> rdd.subtract(rdd2).collect() #Return each rdd value not contained in rdd2
[('b' ,2), ('a' ,7)]
#Return each (key,value) pair of rdd2 with no matching key in rdd
>>> rdd2.subtractByKey(rdd).collect()
[('d', 1)]
>>> rdd.cartesian(rdd2).collect() #Return the Cartesian product of rdd and rdd2
>>> rdd2.sortBy(lambda x: x[1]).collect() #Sort RDD by given function
[('d',1),('b',1),('a',2)]
>>> rdd2.sortByKey().collect() #Sort (key,value) RDD by key
[('a' ,2), ('b' ,1), ('d' ,1)]
>>> rdd.repartition(4) #New RDD with 4 partitions
>>> rdd.coalesce(1) #Decrease the number of partitions in the RDD to 1
>>> rdd.saveAsTextFile("rdd.txt")
>>> rdd.saveAsHadoopFile("hdfs:// namenodehost/parent/child",
'org.apache.hadoop.mapred.TextOutputFormat')
>>> sc.stop()
$ ./bin/spark-submit examples/src/main/python/pi.py
Original article source at https://www.datacamp.com