Edureka Fan

Manual Testing Tutorial For Beginners

This Edureka video on "What is Manual Testing" will help you understand all about manual testing and how it is performed and integrated with test automation.

#testing 

Dejah Reinger

How to Do API Testing?

Nowadays, API testing is an integral part of testing, and there are plenty of tools for it, such as Postman and Insomnia. Many articles explain what an API is and what API testing is, but the real questions are: how do you actually do API testing, and what do you need to validate?

Note: In this article, I am going to use Postman assertions for all the examples, since it is the most popular tool. However, the approach is not specific to Postman and applies to any API testing tool.

Let’s jump straight into the topic.

Let’s say you have an API endpoint, for example http://dzone.com/getuserDetails/{{username}}. When you send a GET request to that URL, it returns a JSON response.

The response is in JSON format, like the example below:

{
  "jobTitle": "string",
  "userid": "string",
  "phoneNumber": "string",
  "password": "string",
  "email": "user@example.com",
  "firstName": "string",
  "lastName": "string",
  "userName": "string",
  "country": "string",
  "region": "string",
  "city": "string",
  "department": "string",
  "userType": 0
}

In the JSON, we can see the properties and their associated values.

Now, for example, if we need the details of the user with the username ‘ganeshhegde’, we need to send a GET request to http://dzone.com/getuserDetails/ganeshhegde.


Now there are two scenarios.

1. Valid use case: The user exists in the database, and the API returns the user details with status code 200.

2. Invalid use case: The user is unavailable or invalid; in this case, the API returns status code 404 with a “not found” message. Both checks are sketched below.
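
Since this article uses Postman assertions for its examples, the equivalent checks are shown here only as a rough sketch in Python, using the requests library with pytest-style assertions instead of Postman. The endpoint is the illustrative one from this article and the second username is a made-up placeholder, so treat this as a template rather than something that will run against a real server.

import requests

BASE_URL = "http://dzone.com/getuserDetails"  # illustrative endpoint from this article

def test_valid_user():
    # Valid use case: an existing user should return 200 and the expected JSON properties
    response = requests.get(f"{BASE_URL}/ganeshhegde")
    assert response.status_code == 200
    body = response.json()
    assert body["userName"] == "ganeshhegde"
    assert "email" in body

def test_invalid_user():
    # Invalid use case: an unknown user should return 404 with a "not found" message
    response = requests.get(f"{BASE_URL}/no_such_user")
    assert response.status_code == 404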

#tutorial #performance #api #test automation #api testing #testing and qa #application programming interface #testing as a service #testing tutorial #api test

Mikel Okuneva

Where To Learn Test Programming — July 2020 Edition

What do you do when you have lots of free time on your hands? Why not learn test programming strategies and approaches?

When you’re looking for places to learn test programming, Test Automation University has you covered. From API testing through visual validation, you can hone your skills and learn new approaches on TAU.

We introduced five new TAU courses from April through June, and each of them can help you expand your knowledge, learn a new approach, and improve your craft as a test automation engineer. Two of them, Mobile Automation with Appium in JavaScript and Selenium WebDriver with Python, are covered in detail below.

These courses add to the other three courses we introduced in January through March 2020:

  • IntelliJ for Test Automation Engineers (3 hrs 41 min)
  • Cucumber with JavaScript (1 hr 22 min)
  • Python Programming (2 hrs)

Each of these courses can give you a new set of skills.

Let’s look at each in a little detail.

Mobile Automation With Appium in JavaScript

Orane Findley teaches Mobile Automation with Appium in JavaScript. Orane walks through all the basics of Appium, starting with what it is and where it runs.


“Appium is an open-source tool for automating native, web, and hybrid applications on different platforms.”

In the introduction, Orane describes the course parts:

  • Setup and Dependencies — installing Appium and setting up your first project
  • Working with elements by finding them, sending values, clicking, and submitting
  • Creating sessions, changing screen orientations, and taking screenshots
  • Timing, including TimeOuts and Implicit Waits
  • Collecting attributes and data from an element
  • Selecting and using element states
  • Reviewing everything to make it all make sense

The first chapter, broken into five parts, gets your system ready for the rest of the course. You’ll download and install a Java Developer Kit, a stable version of Node.js, Android Studio and Emulator (for a mobile device emulator), Visual Studio Code for an IDE, Appium Server, and a sample Appium Android Package Kit. If you get into trouble, you can use the Test Automation University Slack channel to get help from Orane. Each subchapter contains the links to get to the proper software. Finally, Orane has you customize your configuration for the course project.

Chapter 2 deals with elements and screen interactions for your app. You can find elements on the page, interact with those elements, and scroll the page to make other elements visible. Orane breaks the chapter into three distinct subchapters so you can become competent with each part of finding, scrolling, and interacting with the app. The quiz comes at the end of the third subchapter.

The remaining chapters each deal with specific bullets listed above: sessions and screen capture, timing, element attributes, and using element states. The final summary chapter ensures you have internalized the key takeaways from the course. Each of these chapters includes its own quiz.

When you complete this course successfully, you will have both a certificate of completion and the code infrastructure available on your system to start testing mobile apps using Appium.
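
The course itself is in JavaScript, but to give a feel for what a minimal Appium session looks like, here is a rough sketch using the Appium Python client instead (assuming the 2.x client, where capabilities can still be passed directly). The capability values and APK path are placeholders, and it presumes a running Appium server and Android emulator, as set up in chapter 1.

from appium import webdriver  # assumes Appium-Python-Client 2.x

# Placeholder capabilities for a local emulator and a sample app
caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "deviceName": "emulator-5554",
    "app": "/path/to/sample.apk",
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
try:
    driver.save_screenshot("home.png")  # screen capture, as covered in the sessions chapter
finally:
    driver.quit()  # always end the session so the device is released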

Selenium WebDriver With Python

Andrew Knight, who blogs as The Automation Panda, teaches the course on Selenium WebDriver with Python. As Andrew points out, Python has become a popular language for test automation. If you don’t know Python at all, he points you to Jess Ingrassellino’s great course, Python for Test Programming, also on Test Automation University.


In the first chapter, Andrew has you write your first test, not in Python but in Gherkin. If you have never used Gherkin syntax, it helps you structure your tests in pseudocode that you can translate into any language of your choice. Andrew points out that it’s important to write your test steps before you write test code — and Gherkin makes this process straightforward.


The second chapter goes through setting up pytest, the test framework Andrew uses. He assumes you already have Python 3.8 installed. Depending on your machine, you may need to do some work (Macs come with Python 2.7.16 installed, which is old and won’t work). Andrew also goes through the pip package manager to install pipenv. He gives you a GitHub link to his test code for the project. And, finally, he creates a test using the Gherkin steps as comments to show you how a test runs in pytest.
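
As a rough illustration of that last step (not Andrew’s actual project code), a first pytest test might keep the Gherkin steps as comments and assert something trivial, just to prove the framework runs:

# test_first.py -- run with: pipenv run python -m pytest

def test_addition():
    # Given two numbers
    a, b = 2, 2
    # When we add them
    result = a + b
    # Then the result should match the expected value
    assert result == 4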

In the third chapter, you set up Selenium WebDriver to work with specific browsers, then create your test fixture in pytest. Andrew reminds you to download the appropriate browser driver for the browser you want to test — for example, chromedriver to drive Chrome and geckodriver to drive Firefox. Once you use pipenv to install Selenium, you begin your test fixture. One thing to remember is to call an explicit quit for your webdriver after a test.
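
A minimal version of such a fixture might look like the sketch below (not the course’s exact code). The quit call sits after the yield so it always runs once the test finishes:

import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    # Set up: start Chrome (assumes chromedriver is on your PATH, or Selenium 4.6+ manages it)
    driver = webdriver.Chrome()
    driver.implicitly_wait(10)
    # Hand the driver to the test
    yield driver
    # Tear down: explicitly quit the browser after every test
    driver.quit()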

Chapter 4 goes through page objects, and how you abstract page object details to simplify your test structure. Chapter 5 goes through element locator structures and how to use these in Python. And, in Chapter 6, Andrew goes through some common webdriver calls and how to use them in your tests. These first six chapters cover the basics of testing with Python and Selenium.
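
To make the page-object idea concrete, here is a small hypothetical page object for a generic search page; the locator tuple and method names are made up for illustration, not taken from the course:

from selenium.webdriver.common.by import By

class SearchPage:
    # Keeping locators in one place means tests never deal with raw element lookups
    SEARCH_INPUT = (By.NAME, "q")

    def __init__(self, driver):
        self.driver = driver

    def load(self, url):
        self.driver.get(url)

    def search(self, phrase):
        search_box = self.driver.find_element(*self.SEARCH_INPUT)
        search_box.send_keys(phrase)
        search_box.submit()

A test can then read as intent ("load the page, search for a phrase") rather than as a sequence of raw WebDriver calls.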

Now that you have the basics down, the final three chapters review some advanced ideas: testing with multiple browsers, handling race conditions, and running your tests in parallel. This course gives you specific skills around Python and Selenium on top of what you can get from the Python for Test Programming course.
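
To hint at what the race-condition material covers, the usual remedy is an explicit wait that polls for a condition instead of sleeping for a fixed time. Here is a small sketch with a hypothetical element ID:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_results(driver, timeout=10):
    # Poll for up to `timeout` seconds until the element exists, rather than sleeping blindly
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.ID, "results"))  # hypothetical locator
    )

For parallel execution, the pytest-xdist plugin lets you run a command such as pytest -n 4 to spread tests across four workers.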

#tutorial #performance #testing #automation #test automation #automated testing #visual testing #visual testing best practices #testing tutorial

Jeromy Lowe

Data Visualization in R with ggplot2: A Beginner Tutorial

A famous general is thought to have said, “A good sketch is better than a long speech.” That advice may have come from the battlefield, but it’s applicable in lots of other areas — including data science. “Sketching” out our data by visualizing it using ggplot2 in R is more impactful than simply describing the trends we find.

This is why we visualize data. We visualize data because it’s easier to learn from something that we can see rather than read. And thankfully for data analysts and data scientists who use R, there’s a tidyverse package called ggplot2 that makes data visualization a snap!

In this blog post, we’ll learn how to take some data and produce a visualization using R. To work through it, it’s best if you already have an understanding of R programming syntax, but you don’t need to be an expert or have any prior experience working with ggplot2.

#data science tutorials #beginner #ggplot2 #r #r tutorial #r tutorials #rstats #tutorial #tutorials

Willie Beier

Tutorial: Getting Started with R and RStudio

In this tutorial, we’ll learn how to begin programming with R using RStudio. We’ll install R and RStudio, an extremely popular development environment for R, and we’ll learn the key RStudio features in order to start programming in R on our own.

If you already know how to use RStudio and want to learn some tips, tricks, and shortcuts, check out this Dataquest blog post.


#data science tutorials #beginner #r tutorial #r tutorials #rstats #tutorial #tutorials

Tutorial: Loading and Cleaning Data with R and the tidyverse

1. Characteristics of Clean Data and Messy Data

What exactly is clean data? Clean data is accurate, complete, and in a format that is ready to analyze. Characteristics of clean data include data that are:

  • Free of duplicate rows/values
  • Error-free (e.g. free of misspellings)
  • Relevant (e.g. free of special characters)
  • The appropriate data type for analysis
  • Free of outliers (or contain only outliers that have been identified and understood), and
  • Follow a “tidy data” structure

Common symptoms of messy data include data that contain:

  • Special characters (e.g. commas in numeric values)
  • Numeric values stored as text/character data types
  • Duplicate rows
  • Misspellings
  • Inaccuracies
  • White space
  • Missing data
  • Zeros instead of null values

2. Motivation

In this blog post, we will work with five property-sales datasets that are publicly available on the New York City Department of Finance Rolling Sales Data website. We encourage you to download the datasets and follow along! Each file contains one year of real estate sales data for one of New York City’s five boroughs. We will work with the following Microsoft Excel files:

  • rollingsales_bronx.xls
  • rollingsales_brooklyn.xls
  • rollingsales_manhattan.xls
  • rollingsales_queens.xls
  • rollingsales_statenisland.xls

As we work through this blog post, imagine that you are helping a friend launch their home-inspection business in New York City. You offer to help them by analyzing the data to better understand the real-estate market. But you realize that before you can analyze the data in R, you will need to diagnose and clean it first. And before you can diagnose the data, you will need to load it into R!

3. Load Data into R with readxl

Benefits of using tidyverse tools are often evident in the data-loading process. In many cases, the tidyverse package readxl will clean some data for you as Microsoft Excel data is loaded into R. If you are working with CSV data, the tidyverse readr package function read_csv() is the function to use (we’ll cover that later).

Let’s look at an example. Here’s how the Excel file for the Brooklyn borough looks:

The Brooklyn Excel file

Now let’s load the Brooklyn dataset into R from an Excel file. We’ll use the readxl package. We specify the function argument skip = 4 because the row that we want to use as the header (i.e. column names) is actually row 5. We can ignore the first four rows entirely and load the data into R beginning at row 5. Here’s the code:

library(readxl) # Load Excel files
brooklyn <- read_excel("rollingsales_brooklyn.xls", skip = 4)

Note we saved this dataset with the variable name brooklyn for future use.

4. View the Data with tibble::glimpse()

The tidyverse offers a user-friendly way to view this data with the glimpse() function that is part of the tibble package. To use this package, we will need to load it for use in our current session. But rather than loading this package alone, we can load many of the tidyverse packages at one time. If you do not have the tidyverse collection of packages, install it on your machine using the following command in your R or RStudio session:

install.packages("tidyverse")

Once the package is installed, load it to memory:

library(tidyverse)

Now that tidyverse is loaded into memory, take a “glimpse” of the Brooklyn dataset:

glimpse(brooklyn)
## Observations: 20,185
## Variables: 21
## $ BOROUGH <chr> "3", "3", "3", "3", "3", "3", "…
## $ NEIGHBORHOOD <chr> "BATH BEACH", "BATH BEACH", "BA…
## $ `BUILDING CLASS CATEGORY` <chr> "01 ONE FAMILY DWELLINGS", "01 …
## $ `TAX CLASS AT PRESENT` <chr> "1", "1", "1", "1", "1", "1", "…
## $ BLOCK <dbl> 6359, 6360, 6364, 6367, 6371, 6…
## $ LOT <dbl> 70, 48, 74, 24, 19, 32, 65, 20,…
## $ `EASE-MENT` <lgl> NA, NA, NA, NA, NA, NA, NA, NA,…
## $ `BUILDING CLASS AT PRESENT` <chr> "S1", "A5", "A5", "A9", "A9", "…
## $ ADDRESS <chr> "8684 15TH AVENUE", "14 BAY 10T…
## $ `APARTMENT NUMBER` <chr> NA, NA, NA, NA, NA, NA, NA, NA,…
## $ `ZIP CODE` <dbl> 11228, 11228, 11214, 11214, 112…
## $ `RESIDENTIAL UNITS` <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1…
## $ `COMMERCIAL UNITS` <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ `TOTAL UNITS` <dbl> 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1…
## $ `LAND SQUARE FEET` <dbl> 1933, 2513, 2492, 1571, 2320, 3…
## $ `GROSS SQUARE FEET` <dbl> 4080, 1428, 972, 1456, 1566, 22…
## $ `YEAR BUILT` <dbl> 1930, 1930, 1950, 1935, 1930, 1…
## $ `TAX CLASS AT TIME OF SALE` <chr> "1", "1", "1", "1", "1", "1", "…
## $ `BUILDING CLASS AT TIME OF SALE` <chr> "S1", "A5", "A5", "A9", "A9", "…
## $ `SALE PRICE` <dbl> 1300000, 849000, 0, 830000, 0, …
## $ `SALE DATE` <dttm> 2020-04-28, 2020-03-18, 2019-0…

The glimpse() function provides a user-friendly way to view the column names and data types for all columns, or variables, in the data frame. With this function, we are also able to view the first few observations in the data frame. This data frame has 20,185 observations, or property sales records. And there are 21 variables, or columns.

#data science tutorials #beginner #r #r tutorial #r tutorials #rstats #tidyverse #tutorial #tutorials