Deno is a new runtime for JavaScript and TypeScript. If that doesn't tell you much and you don't know what to expect, take this as a second introduction: Ryan Dahl, the inventor of Node.js, released Deno in 2020 as his answer to Node.js's shortcomings. Deno isn't Node.js, though; it is a completely new runtime for JavaScript and also TypeScript. Like Node.js, Deno can be used for server-side JavaScript, but it aims to avoid the mistakes that were made with Node.js. It's like Node.js 2.0, and only the future can tell whether people will adopt it as much as they did Node.js back in 2009.
Ryan Dahl, the inventor of Node.js (2009) and Deno (2020), released Deno as an addition to the JavaScript ecosystem. When he announced Deno for the first time at a conference, he talked about the mistakes made in Node.js. Watching this conference talk (see exercises) is a lesson in humility, because Node.js has become indispensable to the JavaScript ecosystem and is used by millions of people, yet Ryan Dahl still regrets decisions that were made back then. Now he wants to make things right with Deno by addressing the design flaws of Node.js. Deno is a brand new runtime for secure server-side JavaScript and TypeScript, built on the V8 JavaScript engine, Rust, and TypeScript.
The following sections will show you all of these points in detail while implementing a small Deno application step by step from scratch. Afterward, we will continue by developing a real web application with Deno.
There are various ways to set up a Deno application; which one is right for you depends on your operating system and on the toolchain you use for installing programs on your machine. For example, I use Homebrew on macOS to manage programs. It may be something else for you, so pick the appropriate command for your machine from the following list, which is taken from Deno's website. The command should be executed in an integrated terminal or on the command line:
Shell (Mac, Linux):
curl -fsSL https://deno.land/x/install/install.sh | sh
PowerShell (Windows):
iwr https://deno.land/x/install/install.ps1 -useb | iex
Homebrew (Mac):
brew install deno
Chocolatey (Windows):
choco install deno
Scoop (Windows):
scoop install deno
Build and install from source using Cargo:
cargo install deno
After you have installed Deno, you can verify the installation on the command line. Your version may be newer than mine, since I installed the first released Deno version, 1.0.0. The following sections assume that you have the newest Deno version installed:
deno --version
-> deno 1.0.0
If you want to upgrade Deno's version, you can use deno upgrade. In addition, try to execute the following remote Deno application via the command line to verify that Deno runs correctly on your machine:
deno run https://deno.land/std/examples/welcome.ts
-> Welcome to Deno
This Deno application just outputs text on your command line. However, it also shows how a Deno application can be executed from a remote source by downloading and compiling it on the fly. If you have trouble setting up Deno on your machine, follow the installation instructions on Deno's official website.
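To try the same thing with a local file, you can write a small script yourself and run it with deno run. A minimal sketch, assuming you save it as hello.ts (the file name and text are just example choices):

// hello.ts
const greeting: string = "Welcome to Deno";
console.log(greeting);

Running deno run hello.ts should then print the greeting. No extra permission flags are needed here, because the script only writes to the console.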
#deno #node #javascript #typescript #developer
In this article, we will show how to use Python to automate Excel. A useful Python library for this is openpyxl, which we will learn to use for Excel automation.
Openpyxl is a Python library that is used to read from an Excel file or write to an Excel file. Data scientists use Openpyxl for data analysis, data copying, data mining, drawing charts, styling sheets, adding formulas, and more.
Workbook: A spreadsheet is represented as a workbook in openpyxl. A workbook consists of one or more sheets.
Sheet: A sheet is a single page composed of cells for organizing data.
Cell: The intersection of a row and a column is called a cell. Usually represented by A1, B5, etc.
Row: A row is a horizontal line represented by a number (1,2, etc.).
Column: A column is a vertical line represented by a capital letter (A, B, etc.).
Openpyxl can be installed using the pip command and it is recommended to install it in a virtual environment.
pip install openpyxl
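For example, a minimal sketch of setting up a virtual environment and installing openpyxl from the command line (the folder name venv is arbitrary):

python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate
pip install openpyxl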
We start by creating a new spreadsheet, which is called a workbook in openpyxl. We import the Workbook class from openpyxl and call Workbook(), which creates a new workbook.
from openpyxl import Workbook

# create a new workbook
wb = Workbook()

# get the first active worksheet
ws = wb.active

# create new worksheets using the create_sheet method
ws1 = wb.create_sheet("sheet1", 0)   # inserts at first position
ws2 = wb.create_sheet("sheet2")      # inserts at last position
ws3 = wb.create_sheet("sheet3", -1)  # inserts at penultimate position

# rename the active sheet
ws.title = "Example"

# save the workbook
wb.save(filename="example.xlsx")
We load the file using the function load_workbook(), which takes the filename as an argument. The file must be saved in the same working directory.
import openpyxl

# load a workbook
wb = openpyxl.load_workbook("example.xlsx")

# get the sheet names
wb.sheetnames
# ['sheet1', 'Sheet', 'sheet3', 'sheet2']

# get a particular sheet
sheet1 = wb["sheet2"]

# get the sheet title
sheet1.title
# 'sheet2'

# get the active sheet
sheetactive = wb.active
# 'sheet1'

# get a cell from the sheet
sheet1["A1"]
# <Cell 'Sheet1'.A1>

# get the cell value
ws["A1"].value
# 'Segment'

# access a cell using row and column and assign a value
d = ws.cell(row=4, column=2, value=10)
d.value
# 10
# loop through each row and column
for x in range(1, 5):
    for y in range(1, 5):
        print(x, y, ws.cell(row=x, column=y).value)

# get the highest row number
ws.max_row
# 701

# get the highest column number
ws.max_column
# 19
There are two functions for iterating through rows and columns: iter_rows() returns the rows, and iter_cols() returns the columns. The keyword arguments min_row, max_row, min_col, and max_col (for example min_row=4, max_row=5, min_col=2, max_col=5) can be used to set the boundaries for any iteration.
Example:
# iterating over rows
for row in ws.iter_rows(min_row=2, max_col=3, max_row=3):
    for cell in row:
        print(cell)
# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C3>
# iterating over columns
for col in ws.iter_cols(min_row=2, max_col=3, max_row=3):
    for cell in col:
        print(cell)
# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.C3>
To get all the rows of the worksheet we use worksheet.rows, and to get all the columns we use worksheet.columns. Similarly, to iterate only through the values we use worksheet.values.
Example:
for row in ws.values:
    for value in row:
        print(value)
Writing to a workbook can be done in many ways such as adding a formula, adding charts, images, updating cell values, inserting rows and columns, etc… We will discuss each of these with an example.
# create a new workbook
wb = openpyxl.Workbook()

# save the workbook
wb.save("new.xlsx")

# work with the default sheet
ws = wb.active

# create a new sheet
ws1 = wb.create_sheet(title="sheet 2")

# create a new sheet at index 0
ws2 = wb.create_sheet(index=0, title="sheet 0")

# check the sheet names
wb.sheetnames
# ['sheet 0', 'Sheet', 'sheet 2']

# delete a sheet
del wb['sheet 0']

# check the sheet names again
wb.sheetnames
# ['Sheet', 'sheet 2']

# check a cell value (empty cells return None)
ws['B2'].value
# None

# add a value to the cell
ws['B2'] = 367

# check the value
ws['B2'].value
# 367
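Rows and columns can also be inserted or deleted, and whole rows can be appended after the last row with data. A minimal sketch using the workbook and worksheet from above (the indices and values are example values only):

# insert an empty row before row 2 and an empty column before column 3
ws.insert_rows(2)
ws.insert_cols(3)

# delete the row and column that were just inserted
ws.delete_rows(2)
ws.delete_cols(3)

# append a new row after the last row that contains data
ws.append([1, 2, 3])

wb.save("new.xlsx")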
We often require formulas to be included in our Excel datasheet. We can easily add formulas using the Openpyxl module just like you add values to a cell.
For example:
import openpyxl
from openpyxl import Workbook

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']
ws['A9'] = '=SUM(A2:A8)'
wb.save("new2.xlsx")
The above program will add the formula =SUM(A2:A8) to cell A9. When the file is opened in Excel, A9 will show the sum of the values in cells A2 through A8.
Two or more cells can be merged to a rectangular area using the method merge_cells(), and similarly, they can be unmerged using the method unmerge_cells().
For example:
Merge cells
#merge cells B2 to C9
ws.merge_cells('B2:C9')
ws['B2'] = "Merged cells"
Adding the above code to the previous example will merge cells B2 through C9 into a single cell containing the text "Merged cells".
#unmerge cells B2 to C9
ws.unmerge_cells('B2:C9')
The above code will unmerge cells from B2 to C9.
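Openpyxl can also style cells, which is one of the uses mentioned in the introduction. A minimal sketch using the openpyxl.styles module and the worksheet ws from the examples above (the font size and fill color are arbitrary example choices):

from openpyxl.styles import Font, PatternFill

# make cell B2 bold with a larger font
ws['B2'].font = Font(bold=True, size=14)

# give cell A1 a solid yellow background
ws['A1'].fill = PatternFill(fill_type="solid", start_color="FFFF00")

wb.save("new2.xlsx")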
To insert an image, we import the Image class from the module openpyxl.drawing.image. We then load our image and add it to a cell, as shown in the example below.
Example:
import openpyxl
from openpyxl import Workbook
from openpyxl.drawing.image import Image

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']

# load the image (it should be in the same folder)
img = Image('logo.png')
ws['A1'] = "Adding image"

# adjust the size
img.height = 130
img.width = 200

# add the image to cell A3
ws.add_image(img, 'A3')
wb.save("new2.xlsx")
The image is inserted with its top-left corner anchored at cell A3.
Charts are essential for visualizing data. We can create charts from Excel data using the openpyxl.chart module. Different kinds of charts can be created, such as line charts, bar charts, 3D line charts, and so on. We need to create a reference that contains the data to be used for the chart, which is simply a selection of cells (rows and columns). I am using sample data to create a 3D bar chart in the example below:
Example
import openpyxl
from openpyxl import Workbook
from openpyxl.chart import BarChart3D, Reference

wb = openpyxl.load_workbook("example.xlsx")
ws = wb.active

# the data for the chart: a selection of cells
values = Reference(ws, min_col=3, min_row=2, max_col=3, max_row=40)

chart = BarChart3D()
chart.add_data(values)
ws.add_chart(chart, "E3")
wb.save("MyChart.xlsx")
Opening MyChart.xlsx in Excel will show the 3D bar chart anchored at cell E3.
Welcome to another video! In this video, we will cover how we can use Python to automate Excel. I'll be going over everything from creating workbooks to accessing individual cells and styling cells. There are a ton of things that you can do with Excel, but I'll just be covering the core things in OpenPyXL.
⭐️ Timestamps ⭐️
00:00 | Introduction
02:14 | Installing openpyxl
03:19 | Testing Installation
04:25 | Loading an Existing Workbook
06:46 | Accessing Worksheets
07:37 | Accessing Cell Values
08:58 | Saving Workbooks
09:52 | Creating, Listing and Changing Sheets
11:50 | Creating a New Workbook
12:39 | Adding/Appending Rows
14:26 | Accessing Multiple Cells
20:46 | Merging Cells
22:27 | Inserting and Deleting Rows
23:35 | Inserting and Deleting Columns
24:48 | Copying and Moving Cells
26:06 | Practical Example, Formulas & Cell Styling
📄 Resources 📄
OpenPyXL Docs: https://openpyxl.readthedocs.io/en/stable/
Code Written in This Tutorial: https://github.com/techwithtim/ExcelPythonTutorial
Subscribe: https://www.youtube.com/c/TechWithTim/featured
In this tutorial, we'll learn how to begin programming with R using RStudio. We'll install R and RStudio, an extremely popular development environment for R, and we'll learn the key RStudio features in order to start programming in R on our own.
If you already know how to use RStudio and want to learn some tips, tricks, and shortcuts, check out this Dataquest blog post.
#data science tutorials #beginner #r tutorial #r tutorials #rstats #tutorial #tutorials
A Julia package for interacting with the Twitter API.
Twitter.jl is a Julia package to work with the Twitter API v1.1. Currently, only the REST API methods are supported; streaming API endpoints aren't implemented at this time.
All functions have required arguments for those parameters required by Twitter, plus an options keyword argument that accepts a Dict{String, String} of optional parameters as described in the Twitter API documentation. Most function calls will return either a Dict or an Array <: TwitterType. Bad requests will return the response code from the API (403, 404, etc.).
DataFrame methods are defined for functions returning composite types: Tweets, Places, Lists, and Users.
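For example, based on the test suite further below, a timeline result can be converted to a DataFrame once you have authenticated (see the authentication section below); the screen name here is just a placeholder, and DataFrames.jl is assumed to be installed:

using Twitter, DataFrames

my_timeline = get_user_timeline(screen_name = "randyzwitch")
timeline_df = DataFrame(my_timeline)   # one row per tweet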
Before you can make use of this package, you must create an application on Twitter's Developer Platform.
Once your application is approved, you can access your dashboard/portal to grab your authentication credentials from the "Details" tab of the application.
Note that you will also want to ensure that your App has Read / Write OAuth access in order to post tweets. You can find out more about this on Stack Overflow.
To install this package, enter ] at the REPL to bring up Julia's package manager. Then add the package:
julia> ]
(v1.7) pkg> add Twitter
Tip: Press Ctrl+C to return to the julia> prompt.
To run Twitter.jl, enter the following command in your Julia REPL:
julia> using Twitter
Then a global variable has to be declared with the twitterauth function. This function takes the consumer_key (API Key), consumer_secret (API Key Secret), oauth_token (Access Token), and oauth_secret (Access Token Secret), respectively:
twitterauth("6nOtpXmf...", # API Key
"sES5Zlj096S...", # API Key Secret
"98689850-Hj...", # Access Token
"UroqCVpWKIt...") # Access Token Secret
Note: This package does not currently support OAuth authentication.
See runtests.jl for example function calls.
using Twitter, Test
using JSON, OAuth
# set debugging
ENV["JULIA_DEBUG"]=Twitter
twitterauth(ENV["CONSUMER_KEY"], ENV["CONSUMER_SECRET"], ENV["ACCESS_TOKEN"], ENV["ACCESS_TOKEN_SECRET"])
#get_mentions_timeline
mentions_timeline_default = get_mentions_timeline()
tw = mentions_timeline_default[1]
tw_df = DataFrame(mentions_timeline_default)
@test 0 <= length(mentions_timeline_default) <= 20
@test typeof(mentions_timeline_default) == Vector{Tweets}
@test typeof(tw) == Tweets
@test size(tw_df)[2] == 30
#get_user_timeline
user_timeline_default = get_user_timeline(screen_name = "randyzwitch")
@test typeof(user_timeline_default) == Vector{Tweets}
#get_home_timeline
home_timeline_default = get_home_timeline()
@test typeof(home_timeline_default) == Vector{Tweets}
#get_single_tweet_id
get_tweet_by_id = get_single_tweet_id(id = "434685122671939584")
@test typeof(get_tweet_by_id) == Tweets
#get_search_tweets
duke_tweets = get_search_tweets(q = "#Duke", count = 200)
@test typeof(duke_tweets) <: Dict
#test sending/deleting direct messages
#commenting out because Twitter API changed. Come back to fix
# send_dm = post_direct_messages_send(text = "Testing from Julia, this might disappear later $(time())", screen_name = "randyzwitch")
# get_single_dm = get_direct_messages_show(id = send_dm.id)
# destroy = post_direct_messages_destroy(id = send_dm.id)
# @test typeof(send_dm) == Tweets
# @test typeof(get_single_dm) == Tweets
# @test typeof(destroy) == Tweets
#creating/destroying friendships
add_friend = post_friendships_create(screen_name = "kyrieirving")
unfollow = post_friendships_destroy(screen_name = "kyrieirving")
unfollow_df = DataFrame(unfollow)
@test typeof(add_friend) == Users
@test typeof(unfollow) == Users
@test size(unfollow_df)[2] == 40
# create a cursor for follower ids
follow_cursor_test = get_followers_ids(screen_name = "twitter", count = 10_000)
@test length(follow_cursor_test["ids"]) == 10_000
# create a cursor for friend ids - use barackobama because he follows a lot of accounts!
friend_cursor_test = get_friends_ids(screen_name = "BarackObama", count = 10_000)
@test length(friend_cursor_test["ids"]) == 10_000
# create a test for home timelines
home_t = get_home_timeline(count = 2)
@test length(home_t) > 1
# TEST of cursoring functionality on user timelines
user_t = get_user_timeline(screen_name = "stefanjwojcik", count = 400)
@test length(user_t) == 400
# get the minimum ID of the tweets returned (the earliest)
minid = minimum(x.id for x in user_t);
# now iterate until you hit that tweet: should return 399
# WARNING: current versions of julia cannot use keywords in macros? read here: https://github.com/JuliaLang/julia/pull/29261
# eventually replace since_id = minid
tweets_since = get_user_timeline(screen_name = "stefanjwojcik", count = 400, since_id = 1001808621053898752, include_rts=1)
@test length(tweets_since)>=399
# testing get_mentions_timeline
mentions = get_mentions_timeline(screen_name = "stefanjwojcik", count = 300)
@test length(mentions) >= 50 #sometimes API doesn't return number requested (twitter API specifies count is the max returned, may be much lower)
@test Tweets<:typeof(mentions[1])
# testing retweets_of_me
my_rts = get_retweets_of_me(count = 300)
@test Tweets<:typeof(my_rts[1])
Contributions are welcome! Kindly refer to the contribution guidelines.
Author: Randyzwitch
Source Code: https://github.com/randyzwitch/Twitter.jl
License: View license
In this blog post, we'll look at how to use R Markdown. By the end, you'll have the skills you need to produce a document or presentation using R Markdown, from scratch!
We’ll show you how to convert the default R Markdown document into a useful reference guide of your own. We encourage you to follow along by building out your own R Markdown guide, but if you prefer to just read along, that works, too!
R Markdown is an open-source tool for producing reproducible reports in R. It enables you to keep all of your code, results, plots, and writing in one place. R Markdown is particularly useful when you are producing a document for an audience that is interested in the results from your analysis, but not your code.
R Markdown is powerful because it can be used for data analysis and data science, collaborating with others, and communicating results to decision makers. With R Markdown, you have the option to export your work to numerous formats including PDF, Microsoft Word, a slideshow, or an HTML document for use in a website.
Turn your data analysis into pretty documents with R Markdown.
We’ll use the RStudio integrated development environment (IDE) to produce our R Markdown reference guide. If you’d like to learn more about RStudio, check out our list of 23 awesome RStudio tips and tricks!
Here at Dataquest, we love using R Markdown for coding in R and authoring content. In fact, we wrote this blog post in R Markdown! Also, learners on the Dataquest platform use R Markdown for completing their R projects.
We included fully-reproducible code examples in this blog post. When you’ve mastered the content in this post, check out our other blog post on R Markdown tips, tricks, and shortcuts.
Okay, let’s get started with building our very own R Markdown reference document!
R Markdown is a free, open source tool that is installed like any other R package. Use the following command to install R Markdown:
install.packages("rmarkdown")
Now that R Markdown is installed, open a new R Markdown file in RStudio by navigating to File > New File > R Markdown…. R Markdown files have the file extension ".Rmd".
When you open a new R Markdown file in RStudio, a pop-up window appears that prompts you to select the output format to use for the document. The default output format is HTML, which you can easily view in a web browser.
We recommend selecting the default HTML setting for now — it can save you time! Why? Because compiling an HTML document is generally faster than generating a PDF or other format. When you near a finished product, you change the output to the format of your choosing and then make the final touches.
One final thing to note is that the title you give your document in the pop-up above is not the file name! Navigate to File > Save As… to name and save the document.
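For reference, an .Rmd file is just plain text: a YAML header between two lines of ---, followed by prose and R code chunks. A minimal sketch (the title and chunk contents below are placeholders, not RStudio's default document):

---
title: "My R Markdown Guide"
output: html_document
---

## A first section

Some narrative text describing the analysis.

```{r}
summary(cars)
```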
#data science tutorials #beginner #r #r markdown #r tutorial #r tutorials #rstats #rstudio #tutorial #tutorials
What exactly is clean data? Clean data is accurate, complete, and in a format that is ready to analyze. Characteristics of clean data include data that are:
Common symptoms of messy data include data that contain:
In this blog post, we will work with five property-sales datasets that are publicly available on the New York City Department of Finance Rolling Sales Data website. We encourage you to download the datasets and follow along! Each file contains one year of real estate sales data for one of New York City’s five boroughs. We will work with the following Microsoft Excel files:
As we work through this blog post, imagine that you are helping a friend launch their home-inspection business in New York City. You offer to help them by analyzing the data to better understand the real-estate market. But you realize that before you can analyze the data in R, you will need to diagnose and clean it first. And before you can diagnose the data, you will need to load it into R!
Benefits of using tidyverse tools are often evident in the data-loading process. In many cases, the tidyverse package readxl will clean some data for you as Microsoft Excel data is loaded into R. If you are working with CSV data, the tidyverse readr package function read_csv() is the function to use (we'll cover that later).
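As a quick preview, a minimal sketch of loading a CSV file with readr might look like the following (the file name rollingsales_brooklyn.csv is hypothetical, assuming the same four header rows to skip):

library(readr) # Load CSV files
brooklyn_csv <- read_csv("rollingsales_brooklyn.csv", skip = 4)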
Let’s look at an example. Here’s how the Excel file for the Brooklyn borough looks:
The Brooklyn Excel file
Now let's load the Brooklyn dataset into R from an Excel file. We'll use the readxl package. We specify the function argument skip = 4 because the row that we want to use as the header (i.e. column names) is actually row 5. We can ignore the first four rows entirely and load the data into R beginning at row 5. Here's the code:
library(readxl) # Load Excel files
brooklyn <- read_excel("rollingsales_brooklyn.xls", skip = 4)
Note we saved this dataset with the variable name brooklyn for future use.
The tidyverse offers a user-friendly way to view this data with the glimpse() function, which is part of the tibble package. To use this package, we will need to load it for use in our current session. But rather than loading this package alone, we can load many of the tidyverse packages at one time. If you do not have the tidyverse collection of packages, install it on your machine using the following command in your R or RStudio session:
install.packages("tidyverse")
Once the package is installed, load it to memory:
library(tidyverse)
Now that tidyverse is loaded into memory, take a "glimpse" of the Brooklyn dataset:
glimpse(brooklyn)
## Observations: 20,185
## Variables: 21
## $ BOROUGH <chr> "3", "3", "3", "3", "3", "3", "…
## $ NEIGHBORHOOD <chr> "BATH BEACH", "BATH BEACH", "BA…
## $ `BUILDING CLASS CATEGORY` <chr> "01 ONE FAMILY DWELLINGS", "01 …
## $ `TAX CLASS AT PRESENT` <chr> "1", "1", "1", "1", "1", "1", "…
## $ BLOCK <dbl> 6359, 6360, 6364, 6367, 6371, 6…
## $ LOT <dbl> 70, 48, 74, 24, 19, 32, 65, 20,…
## $ `EASE-MENT` <lgl> NA, NA, NA, NA, NA, NA, NA, NA,…
## $ `BUILDING CLASS AT PRESENT` <chr> "S1", "A5", "A5", "A9", "A9", "…
## $ ADDRESS <chr> "8684 15TH AVENUE", "14 BAY 10T…
## $ `APARTMENT NUMBER` <chr> NA, NA, NA, NA, NA, NA, NA, NA,…
## $ `ZIP CODE` <dbl> 11228, 11228, 11214, 11214, 112…
## $ `RESIDENTIAL UNITS` <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1…
## $ `COMMERCIAL UNITS` <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ `TOTAL UNITS` <dbl> 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1…
## $ `LAND SQUARE FEET` <dbl> 1933, 2513, 2492, 1571, 2320, 3…
## $ `GROSS SQUARE FEET` <dbl> 4080, 1428, 972, 1456, 1566, 22…
## $ `YEAR BUILT` <dbl> 1930, 1930, 1950, 1935, 1930, 1…
## $ `TAX CLASS AT TIME OF SALE` <chr> "1", "1", "1", "1", "1", "1", "…
## $ `BUILDING CLASS AT TIME OF SALE` <chr> "S1", "A5", "A5", "A9", "A9", "…
## $ `SALE PRICE` <dbl> 1300000, 849000, 0, 830000, 0, …
## $ `SALE DATE` <dttm> 2020-04-28, 2020-03-18, 2019-0…
The glimpse() function provides a user-friendly way to view the column names and data types for all columns, or variables, in the data frame. With this function, we are also able to view the first few observations in the data frame. This data frame has 20,185 observations, or property sales records, and there are 21 variables, or columns.
#data science tutorials #beginner #r #r tutorial #r tutorials #rstats #tidyverse #tutorial #tutorials