In this tutorial, you will learn how you can process images in Python using the OpenCV library.
OpenCV is a free open source library used in real-time image processing. It’s used to process images, videos, and even live streams, but in this tutorial, we will process images only as a first step. Before getting started, let’s install OpenCV.
To install OpenCV on your system, run the following pip command:
pip install opencv-python
Now OpenCV is installed successfully and we are ready. Let’s have some fun with some images!

Rotate an Image
First of all, import the cv2 module:

import cv2
Now to read the image, use the imread() method of the cv2 module, specify the path to the image in the arguments and store the image in a variable as below:
img = cv2.imread("pyimg.jpg")
The image is now treated as a matrix with rows and columns values stored in img.
Actually, if you check the type of the img, it will give you the following result:
>>> print(type(img))
<class 'numpy.ndarray'>
It’s a NumPy array! That’s why image processing using OpenCV is so easy: all the time, you are working with a NumPy array.
To display the image, you can use the imshow() method of cv2.
cv2.imshow('Original Image', img)
cv2.waitKey(0)
The waitKey() function takes a time in milliseconds as a delay before the window closes. Here we set the time to zero, so the window stays open until we close it manually.
To rotate this image, you need the width and the height of the image because you will use them in the rotation process as you will see later.
height, width = img.shape[0:2]
The shape attribute returns the height and width of the image matrix. If you print img.shape[0:2], you will see a tuple of two values: the height followed by the width of the image.
Okay, now we have our image matrix and we want to get the rotation matrix. To get the rotation matrix, we use the getRotationMatrix2D() method of cv2. The syntax of getRotationMatrix2D() is:
cv2.getRotationMatrix2D(center, angle, scale)
Here the center is the center point of rotation, the angle is the angle in degrees and scale is the scale property which makes the image fit on the screen.
To get the rotation matrix of our image, the code will be:
rotationMatrix = cv2.getRotationMatrix2D((width/2, height/2), 90, .5)
The next step is to rotate our image with the help of the rotation matrix.
To rotate the image, we have a cv2 method named warpAffine, which takes the original image, the rotation matrix of the image and the width and height of the image as arguments.
rotatedImage = cv2.warpAffine(img, rotationMatrix, (width, height))
The rotated image is stored in the rotatedImage matrix. To show the image, use imshow() as below:
cv2.imshow('Rotated Image', rotatedImage)
cv2.waitKey(0)
After running the above lines of code, you will have the following output:

Crop an Image
First, we need to import the cv2 module and read the image and extract the width and height of the image:
import cv2
img = cv2.imread("pyimg.jpg")
height, width = img.shape[0:2]
Now get the starting and ending indexes of the rows and columns. These define the size of the newly created image. For example, taking rows 10 through 15 gives the height of the new image, and taking columns 10 through 15 gives its width.
You can get the starting point by specifying the percentage value of the total height and the total width. Similarly, to get the ending point of the cropped image, specify the percentage values as below:
startRow = int(height*.15)
startCol = int(width*.15)
endRow = int(height*.85)
endCol = int(width*.85)
Now map these values to the original image. Note that the starting and ending values must be cast to integers, because array indexes are always integers (which is why we used int() above).
croppedImage = img[startRow:endRow, startCol:endCol]
Here we specified the range from starting to ending of rows and columns.
Now display the original and cropped image in the output:
cv2.imshow('Original Image', img)
cv2.imshow('Cropped Image', croppedImage)
cv2.waitKey(0)
The result will be as follows:

Resize an Image
To resize an image, you can use the resize() method of OpenCV. In the resize method, you can either specify scaling factors for the x and y axes, or the number of columns and rows, which gives the target size of the image.
Import and read the image:
import cv2
img = cv2.imread("pyimg.jpg")
Now using the resize method with axis values:
newImg = cv2.resize(img, (0,0), fx=0.75, fy=0.75)
cv2.imshow('Resized Image', newImg)
cv2.waitKey(0)
The result will be as follows:
Now using the row and column values to resize the image:
newImg = cv2.resize(img, (550, 350))
cv2.imshow('Resized Image', newImg)
cv2.waitKey(0)
We say we want 550 columns (the width) and 350 rows (the height).
The result will be:

Adjust Image Contrast
In the Python OpenCV module, there is no particular function to adjust image contrast, but the official OpenCV documentation suggests an equation that can adjust image brightness and image contrast at the same time:
new_img = a * original_img + b
Here a is alpha, which defines the contrast of the image. If a is greater than 1, there will be higher contrast. If a is between 0 and 1 (smaller than 1 but greater than 0), there will be lower contrast, and if a is exactly 1, the contrast is unchanged. b stands for beta, the brightness offset; its values typically range from -127 to +127.
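The equation can be sketched directly with NumPy, clipping the result back into the valid 0-255 range (the values a = 1.5 and b = 20 are arbitrary examples):

```python
import numpy as np

# Tiny grayscale "image" used for illustration
original_img = np.array([[10, 100], [150, 250]], dtype=np.uint8)

a, b = 1.5, 20  # a: contrast (alpha), b: brightness (beta)

# new = a * original + b, clipped back into the 8-bit 0-255 range
new_img = np.clip(a * original_img.astype(np.float32) + b, 0, 255).astype(np.uint8)

print(new_img)
```

Note how the brightest pixel saturates at 255 instead of overflowing; this saturation is exactly what addWeighted() gives you for free below.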
To implement this equation in Python OpenCV, you can use the addWeighted() method. We use the addWeighted() method because it saturates the output to the 0-255 range of each 8-bit channel.
The syntax of addWeighted() method is as follows:
cv2.addWeighted(source_img1, alpha, source_img2, beta, gamma)

This call blends two images: the first source image (source_img1) weighted by alpha, and the second source image (source_img2) weighted by beta, with the scalar gamma added to each weighted sum.
If you only want to apply contrast in one image, you can add a second image source as zeros using NumPy.
Let’s work on a simple example. Import the following modules:
import cv2 import numpy as np
Read the original image:
img = cv2.imread("pyimg.jpg")
Now apply the contrast. Since there is no other image, we will use np.zeros, which creates an array of the same shape and data type as the original image, filled with zeros.
contrast_img = cv2.addWeighted(img, 2.5, np.zeros(img.shape, img.dtype), 0, 0)
cv2.imshow('Original Image', img)
cv2.imshow('Contrast Image', contrast_img)
cv2.waitKey(0)
In the above code, the weight of the second image and the gamma term are set to 0, as we only want to apply contrast.
The comparison of the original and contrast image is as follows:

Make an image blurry
To make an image blurry, you can use the GaussianBlur() method of OpenCV.
GaussianBlur() uses a Gaussian kernel. The height and width of the kernel must be positive and odd. You also specify the standard deviations in the X and Y directions, sigmaX and sigmaY respectively. If only one is specified, both are considered the same.
Consider the following example:
import cv2
img = cv2.imread("pyimg.jpg")
blur_image = cv2.GaussianBlur(img, (7,7), 0)
cv2.imshow('Original Image', img)
cv2.imshow('Blur Image', blur_image)
cv2.waitKey(0)
In the above snippet, the actual image is passed to GaussianBlur() along with height and width of the kernel and the X and Y directions.
The comparison of the original and blurry image is as follows:
In median blurring, the median of all the pixels under the kernel area is calculated, and the central pixel is replaced with that median value. Median blurring is used when there is salt-and-pepper noise in the image.
To apply median blurring, you can use the medianBlur() method of OpenCV.
Consider the following example where we have a salt and pepper noise in the image:
import cv2
img = cv2.imread("pynoise.png")
blur_image = cv2.medianBlur(img, 5)
This applies a median blur with a kernel size of 5 to remove the noise. Now show the images:
cv2.imshow('Original Image', img)
cv2.imshow('Blur Image', blur_image)
cv2.waitKey(0)
The result will be like the following:
Another comparison of the original image and after blurring:

Detect Edges
To detect the edges in an image, you can use the Canny() method of cv2 which implements the Canny edge detector. The Canny edge detector is also known as the optimal detector.
The syntax to Canny() is as follows:
cv2.Canny(image, minVal, maxVal)
Here minVal and maxVal are the minimum and maximum intensity gradient values respectively.
Consider the following code:
import cv2
img = cv2.imread("pyimg.jpg")
edge_img = cv2.Canny(img, 100, 200)
cv2.imshow("Detected Edges", edge_img)
cv2.waitKey(0)
The output will be the following:
Here is the result of the above code on another image:

Convert image to grayscale (Black & White)
The easy way to get an image in grayscale is to load it as one, like this:
img = cv2.imread("pyimg.jpg", 0)
There is another method using COLOR_BGR2GRAY. To convert an already-loaded color image into a grayscale image, use the COLOR_BGR2GRAY flag of the cv2 module. This is demonstrated in the example below:
Import the cv2 module:

import cv2
Read the image:
img = cv2.imread("pyimg.jpg")
Use the cvtColor() method of the cv2 module which takes the original image and the COLOR_BGR2GRAY attribute as an argument. Store the resultant image in a variable:
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Display the original and grayscale images:
cv2.imshow("Original Image", img)
cv2.imshow("Gray Scale Image", gray_img)
cv2.waitKey(0)
The output will be as follows:

Centroid (Center of blob) detection
To find the center of an image, the first step is to convert the original image into grayscale. We can use the cvtColor() method of cv2 as we did before.
This is demonstrated in the following code:
import cv2
img = cv2.imread("py.jpg")
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
We read the image and convert it to a grayscale image. The new image is stored in gray_img.
Now we have to calculate the moments of the image. Use the moments() method of cv2. In the moments() method, the grayscale image will be passed as below:
moment = cv2.moments(gray_img)
Finally, we have the center of the image, computed from the moments as X = m10 / m00 and Y = m01 / m00. To highlight this center position, we can use the circle method, which draws a circle at the given coordinates with the given radius.
The circle() method takes the img, the x and y coordinates of the circle's center, the radius, the color that we want the circle to be, and the thickness.
cv2.circle(img, (X, Y), 15, (205, 114, 101), 1)
The circle is created on the image.
cv2.imshow("Center of the Image", img)
cv2.waitKey(0)
The original image is:
After detecting the center, our image will be as follows:

Apply a mask for a colored image
Image masking means to apply some other image as a mask on the original image or to change the pixel values in the image.
To apply a mask on the image, we will use the HoughCircles() method of the OpenCV module. The HoughCircles() method detects the circles in an image. After detecting the circles, we can simply apply a mask on these circles.
The HoughCircles() method takes the image, the detection method (cv2.HOUGH_GRADIENT, which uses the gradient information at the edges of the circles), and parameters derived from the following circle equation:
(x - x_center)² + (y - y_center)² = r²

In this equation, (x_center, y_center) is the center of the circle and r is its radius.
Our original image is:
After detecting circles in the image, the result will be:
Okay, so we have the circles in the image and we can apply the mask. Consider the following code:
import cv2
import numpy as np

img1 = cv2.imread('pyimg.jpg')
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
Detecting the circles in the image using the HoughCircles() code from OpenCV: Hough Circle Transform:
gray_img = cv2.medianBlur(cv2.cvtColor(img1, cv2.COLOR_RGB2GRAY), 3)
circles = cv2.HoughCircles(gray_img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
To create the mask, use np.full, which returns a NumPy array of a given shape filled with a given value:

masking = np.full(img1.shape[:2], 0, dtype=np.uint8)
for j in circles[0, :]:
    cv2.circle(masking, (j[0], j[1]), j[2], (255, 255, 255), -1)
The next step is to combine the image and the mask array we created, using the bitwise_or operator with the mask argument as follows:

final_img = cv2.bitwise_or(img1, img1, mask=masking)
Display the resultant image.

Extracting text from Image (OCR)
To extract text from an image, you can use Google Tesseract-OCR. You can download it from this link
Then you should install the pytesseract module which is a Python wrapper for Tesseract-OCR.
The image from which we will extract the text from is as follows:
Now let’s convert the text in this image to a string of characters and display the text as a string on output:
Import the pytesseract module:

import pytesseract
Set the path of the Tesseract-OCR executable file:
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract'
Now use the image_to_string method to convert the image into a string (assuming the image has been read into img with cv2.imread()):

print(pytesseract.image_to_string(img))
The output will be as follows:
Works like a charm!

Detect and correct text skew
In this section, we will correct the text skew.
The original image is as follows:
Import the modules cv2, NumPy and read the image:
import cv2
import numpy as np

img = cv2.imread("pytext1.png")
Convert the image into a grayscale image:

gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Invert the grayscale image using bitwise_not:

gray_img = cv2.bitwise_not(gray_img)
Select the x and y coordinates of the pixels greater than zero by using the column_stack method of NumPy:
coordinates = np.column_stack(np.where(gray_img > 0))
Now we have to calculate the skew angle. We will use the minAreaRect() method of cv2, which returns an angle in the range -90 to 0 degrees (where 0 is not included):

ang = cv2.minAreaRect(coordinates)[-1]

The rotated angle of the text region is now stored in the ang variable. Next we add a condition on the angle: if it is smaller than -45 degrees, we negate the sum of the angle and 90 degrees; otherwise, we simply negate the angle to make it positive.
if ang < -45:
    ang = -(90 + ang)
else:
    ang = -ang
Calculate the center of the text region:
height, width = img.shape[:2]
center_img = (width / 2, height / 2)
Now that we have the angle of text skew, we apply getRotationMatrix2D() to get the rotation matrix, then we use the warpAffine() method to rotate the image (explained earlier):

rotationMatrix = cv2.getRotationMatrix2D(center_img, ang, 1.0)
rotated_img = cv2.warpAffine(img, rotationMatrix, (width, height), borderMode=cv2.BORDER_REFLECT)
Display the rotated image:
cv2.imshow("Rotated Image", rotated_img)
cv2.waitKey(0)
Let’s detect the green color from an image:
Import the modules cv2 for images and NumPy for image arrays:
import cv2
import numpy as np
Read the image and convert it into HSV using cvtColor():
img = cv2.imread("pydetect.png")
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
Display the image:
cv2.imshow("HSV Image", hsv_img)
Now create a NumPy array for the lower green values and the upper green values:
lower_green = np.array([34, 177, 76])
upper_green = np.array([255, 255, 255])
Use the inRange() method of cv2 to check if the given image array elements lie between array values of upper and lower boundaries:
masking = cv2.inRange(hsv_img, lower_green, upper_green)
This will detect the green color.
Finally, display the original and resultant images:
cv2.imshow("Original Image", img)
cv2.imshow("Green Color detection", masking)
cv2.waitKey(0)
To reduce noise from an image, OpenCV provides the following methods: fastNlMeansDenoising() (for grayscale images), fastNlMeansDenoisingColored() (for color images), fastNlMeansDenoisingMulti() (for sequences of grayscale frames), and fastNlMeansDenoisingColoredMulti() (for sequences of color frames).
Let’s use fastNlMeansDenoisingColored() in our example:
Import the cv2 module and read the image:
import cv2
img = cv2.imread("pyn1.png")
Apply the denoising function, which takes, in order: the original image (src); the destination (None here, as we are storing the result in a variable); the filter strength; the strength for the color components (usually equal to the filter strength, or 10); the template patch size in pixels used to compute weights, which should always be odd (7 recommended); and the window size in pixels used to compute the average for a given pixel.
result = cv2.fastNlMeansDenoisingColored(img,None,20,10,7,21)
Display original and denoised image:
cv2.imshow("Original Image", img)
cv2.imshow("Denoised Image", result)
cv2.waitKey(0)
The output will be:

Get image contour
Contours are curves that join the continuous points along a boundary in an image. Contours are typically used to detect objects.
The original image of which we are getting the contours of is given below:
Consider the following code where we used the findContours() method to find the contours in the image:
Import the cv2 module:

import cv2
Read the image and convert it to a grayscale image:
img = cv2.imread('py1.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Find the threshold:
retval, thresh = cv2.threshold(gray_img, 127, 255, 0)
Use findContours(), which takes the image (we pass the thresholded image here) and the retrieval and approximation modes. See the official findContours() documentation.
img_contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Draw the contours on the image using drawContours() method:
cv2.drawContours(img, img_contours, -1, (0, 255, 0))
Display the image:
cv2.imshow('Image Contours', img) cv2.waitKey(0)
The result will be:

Remove Background from an image
To remove the background from an image, we will find the contours to detect edges of the main object and create a mask with np.zeros for the background and then combine the mask and the image using the bitwise_and operator.
Consider the example below:
Import the modules (NumPy and cv2):
import cv2
import numpy as np
Read the image and convert the image into a grayscale image:
img = cv2.imread("py.jpg")
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Find the threshold:
_, thresh = cv2.threshold(gray_img, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
In the threshold() method, the last argument defines the style of the threshold. See Official documentation of OpenCV threshold.
Find the image contours:
img_contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
Sort the contours:
img_contours = sorted(img_contours, key=cv2.contourArea)

for i in img_contours:
    if cv2.contourArea(i) > 100:
        break
Generate the mask using np.zeros:
mask = np.zeros(img.shape[:2], np.uint8)
cv2.drawContours(mask, [i],-1, 255, -1)
Apply the bitwise_and operator:
new_img = cv2.bitwise_and(img, img, mask=mask)
Display the original image:
cv2.imshow("Original Image", img)
Display the resultant image:
cv2.imshow("Image with background removed", new_img)
cv2.waitKey(0)
Image processing is fun when using OpenCV as you saw. I hope you find the tutorial useful. Keep coming back.
OpenCV Python Tutorial: Computer Vision With OpenCV In Python
Computer Vision is an AI-based (that is, Artificial Intelligence-based) technology that allows computers to understand and label images. It’s now used in convenience stores, driverless-car testing, security access mechanisms, policing and investigation surveillance, daily medical diagnosis, monitoring the health of crops and livestock, and so on.
A common example is the face detection and unlocking mechanism that you use in your mobile phone every day. That is also a big application of Computer Vision. And today, top technology companies like Amazon, Google, Microsoft, and Facebook are investing millions of dollars into Computer Vision based research and product development.
Computer vision allows us to analyze and leverage image and video data, with applications in a variety of industries, including self-driving cars, social network apps, medical diagnostics, and many more.
As the fastest growing language in popularity, Python is well suited to leverage the power of existing computer vision libraries to learn from all this image and video data.
What you'll learn
In this OpenCV Python Tutorial article, we will be covering various aspects of Computer Vision using OpenCV in Python. OpenCV has been a vital part in the development of software for a long time. Learning OpenCV is a good asset to the developer to improve aspects of coding and also helps in building a software development career.
We will be checking out the following concepts:
To explain what computer vision is, let us consider a scenario.
We all use Facebook, correct? Let us say you and your friends went on a vacation and you clicked a lot of pictures and you want to upload them on Facebook and you did. But now, wouldn’t it take so much time just to find your friends faces and tag them in each and every picture? Well, Facebook is intelligent enough to actually tag people for you.
So, how do you think the auto tag feature works? In simple terms, it works on computer vision.
Computer Vision is an interdisciplinary field that deals with how computers can be made to gain a high-level understanding from digital images or videos.
The idea here is to automate tasks that the human visual systems can do. So, a computer should be able to recognize objects such as that of a face of a human being or a lamppost or even a statue.
Consider the below image:
We can figure out that it is an image of the New York Skyline. But, can a computer find this out all on its own? The answer is no!
The computer reads any image as a range of values between 0 and 255.
For any color image, there are three primary channels: red, green, and blue. How it works is pretty simple. A matrix is formed for each primary color, and these matrices combine to provide the pixel values for the individual R, G, B colors. Each element of the matrices gives the brightness intensity of one pixel.
Consider the following image:
As shown, the size of the image here can be calculated as B x A x 3.
Note: For a black-and-white (grayscale) image, there is only a single channel.
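This channel layout is easy to verify with NumPy stand-ins for loaded images (the 768 x 1024 size is an arbitrary example): a color image is a height x width x 3 array, while a grayscale image has a single channel:

```python
import numpy as np

# Stand-ins for a loaded color image and a grayscale image
color_img = np.zeros((768, 1024, 3), dtype=np.uint8)  # rows x columns x 3 channels
gray_img = np.zeros((768, 1024), dtype=np.uint8)      # single channel

print(color_img.shape)  # (768, 1024, 3)
print(gray_img.shape)   # (768, 1024)
```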
Next in this article, let us look at what OpenCV actually is.
OpenCV is a library designed to solve computer vision problems. OpenCV was originally developed in 1999 by Intel, and it was later supported by Willow Garage.
OpenCV supports a wide variety of programming languages such as C++, Python, and Java, and it runs on multiple platforms including Windows, Linux, and macOS.
OpenCV Python is nothing but a wrapper around the original C++ library to be used with Python. Using it, all of the OpenCV array structures get converted to/from NumPy arrays.
This makes it easier to integrate it with other libraries which use NumPy. For example, libraries such as SciPy and Matplotlib.
Next in this article, let us look at some of the basic operations that we can perform with OpenCV.
Let us look at various concepts ranging from loading images to resizing them and so on.
import cv2

# Colored image
img = cv2.imread("Penguins.jpg", 1)

# Black and white (grayscale)
img_1 = cv2.imread("Penguins.jpg", 0)
As seen in the above piece of code, the first requirement is to import the OpenCV module.
Later we can read the image using the imread function. The 1 in the parameters denotes that it is a color image. If the parameter were 0 instead of 1, the imported image would be black and white (grayscale). The name of the image here is ‘Penguins’. Pretty straightforward, right?
We can make use of the shape sub-function to print out the shape of the image. Refer below.
import cv2

# Black and white (grayscale)
img = cv2.imread("Penguins.jpg", 0)
print(img.shape)
By shape of the image, we mean the shape of the NumPy array. As you see from executing the code, the matrix consists of 768 rows and 1024 columns.
Displaying an image using OpenCV is pretty simple and straightforward. Refer below.
import cv2

# Black and white (grayscale)
img = cv2.imread("Penguins.jpg", 0)
cv2.imshow("Penguins", img)
cv2.waitKey(0)
# cv2.waitKey(2000)
cv2.destroyAllWindows()
As you can see, we first import the image using imread. We require a window output to display the images, right?
We use the imshow function to display the image by opening a window. There are 2 parameters to the imshow function: the name of the window and the image object to be displayed.
Later, we wait for a user event. waitKey makes the window static until the user presses a key. The parameter passed to it is the time in milliseconds.
And lastly, we use destroyAllWindows to close the window once waitKey returns.
Similarly, resizing an image is very easy. Here’s another code snippet:
import cv2

# Black and white (grayscale)
img = cv2.imread("Penguins.jpg", 0)
resized_image = cv2.resize(img, (650, 500))
cv2.imshow("Penguins", resized_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here, resize function is used to resize an image to the desired shape. The parameter here is the shape of the new resized image.
Also, do note that the image object passed to imshow changes from img to resized_image, because we now want to display the resized version.
The rest of the code is pretty similar to the previous one, correct?
I am sure you guys are curious to look at the penguins, right? This is the image we were looking to output all this while!
There is another way to pass the parameters to the resize function. Refer below.
resized_image = cv2.resize(img, (int(img.shape[1] / 2), int(img.shape[0] / 2)))
Here, we get the new image shape to be half of that of the original image.
Next up in this article, let us look at how we perform face detection using OpenCV.
This seems complex at first but it is very easy. Let me walk you through the entire process and you will feel the same.
Step 1: Considering our prerequisites, we will require an image, to begin with. Later we need to create a cascade classifier which will eventually give us the features of the face.
Step 2: This step involves making use of OpenCV, which reads the image and the features file. At this point, the primary data is held in NumPy arrays.
All we need to do is to search for the row and column values of the face NumPy ndarray. This is the array with the face rectangle coordinates.
Step 3: This final step involves displaying the image with the rectangular face box.
Check out the following image, here I have summarized the 3 steps in the form of an image for easier readability:
Pretty straightforward, correct?
First, we create a CascadeClassifier object to extract the features of the face as explained earlier. The path to the XML file which contains the face features is the parameter here.
The next step would be to read an image with a face in it and convert it into a grayscale image using COLOR_BGR2GRAY. Following this, we search for the coordinates of the face. This is done using detectMultiScale.
What coordinates, you ask? The coordinates of the face rectangle. The scaleFactor shrinks the image by 5% at each step of the search until the face is found. On the whole: the smaller the value, the greater the accuracy.
Finally, the face is printed on the window.
This logic is very simple — As simple as making use of a for loop statement. Check out the following image.
We define the method to create a rectangle using cv2.rectangle, passing parameters such as the image object, the corner coordinates, the BGR values of the box outline, and the width of the rectangle.
Let us check out the entire code for face detection:
import cv2

# Create a CascadeClassifier object
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Read the image as it is
img = cv2.imread("photo.jpg")

# Read the image as a grayscale image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Search for the coordinates of the face
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=5)

for x, y, w, h in faces:
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)

resized = cv2.resize(img, (int(img.shape[1] / 7), int(img.shape[0] / 7)))
cv2.imshow("Gray", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()
Next up in this article, let us look at how to use OpenCV to capture video with the computer webcam.
Capturing video using OpenCV is pretty simple as well. The following loop will give you a better idea. Check it out:
The frames are read one by one, and a video is produced by the fast processing of these frames, which makes the individual images appear to move.
Check out the following image:
First, we import the OpenCV library as usual. Next, we have a method called VideoCapture which is used to create the VideoCapture object. This method is used to trigger the camera on the user’s machine. The parameter to this function denotes if the program should make use of the built-in camera or an add-on camera. ‘0’ denotes the built-in camera in this case.
And lastly, the release method is used to release the camera once we are done with it.
When you go ahead and type in and try to execute the above code, you will notice that the camera light switches on for a split second and turns off later. Why does this happen?
This happens because there is no time delay to keep the camera functional.
Looking at the code for this step, we have a new line, time.sleep(3), which makes the script pause for 3 seconds. Do note that the parameter passed is the time in seconds. So, when the code is executed, the webcam will be turned on for 3 seconds.
Adding a window to show the video output is pretty simple and can be compared to the same methods used for images. However, there is a slight change. Check out the following code:
I am pretty sure you can make the most sense from the above code apart from one or two lines.
Here, we have defined a NumPy array which we use to represent the first image that the video captures — This is stored in the frame array.
We also have check — This is a boolean datatype which returns True if Python is able to access and read the VideoCapture object.
Check out the output below:
As you can check out, we got the output as True and the part of the frame array is printed.
But we need to read the first frame/image of the video to begin, correct?
To do exactly that, we need to first create a frame object which will read the images of the VideoCapture object.
As seen above, the imshow method is used to display the first frame of the video.
All this while, we have only been capturing the first image/frame of the video rather than capturing the video itself.
So how do we go about capturing the video instead of the first image in OpenCV?
In order to capture the video, we will use a while loop. The condition will be such that, as long as check is True, Python will display the frames.
Here’s the code snippet image:
We make use of the cvtColor function to convert each frame into a grey-scale image as explained earlier.
waitKey(1) waits for 1 millisecond between frames before moving on to the next one.
It is important here that you note that the while loop is completely in play to help iterate through the frames and eventually display the video.
There is a user event trigger here as well. Once the ‘q’ key is pressed by the user, the program window closes.
OpenCV is pretty easy to grasp, right? I personally love how good the readability is and how quickly a beginner can get started working with OpenCV.
Next up in this article, let us look at how to use a very interesting motion detector use case using OpenCV.
You have been approached by a company that is studying human behavior. Your task is to give them a webcam, that can detect the motion or any movement in front of it. This should return a graph, this graph should contain how long the human/object was in front of the camera.
So, now that we have defined our problem statement, we need to build a solution logic to approach the problem in a structured way.
Consider the below diagram:
Initially, we save the image in a particular frame.
The next step involves converting the image to a Gaussian-blurred image. This is done to smooth out pixel-level noise, so that the difference we later calculate between this blurred background and incoming frames is meaningful.
At this point, the image has still not been segmented into objects. We define a threshold to remove blemishes, such as shadows and other noise, from the difference image.
Borders for the object are defined later and we add a rectangular box around the object as we discussed earlier on the blog.
Lastly, we calculate the time at which the object appears and exits the frame.
Pretty easy, right?
Here’s the code snippet:
The same principle follows through here as well. We first import the package and create the VideoCapture object to ensure we capture video using the webcam.
The while loop iterates through the individual frames of the video. We convert the color frame to a grey-scale image and later we convert this grey-scale image to Gaussian blur.
We need to store the first image/frame of the video, correct? We make use of the if statement for this purpose alone.
Now, let us dive into a little more code:
We make use of the absdiff function to calculate the difference between the first occurring frame and all the other frames.
The threshold function applies a threshold value of 30: pixels of the difference image with a value of 30 or less are converted to black, and pixels above 30 are converted to white. THRESH_BINARY is used for this purpose.
Later, we make use of the findContours function to define the contour area for our image. And we add in the borders at this stage as well.
The contourArea function, as previously explained, measures the area of each contour; filtering on it is what removes the noise and the shadows. To make it simple, we keep only those white regions whose area is greater than the 1000 pixels we've defined.
Later, we create a rectangular box around our object in the working frame.
And followed by this is this simple code:
As discussed earlier, the frame changes every 1 millisecond and when the user enters ‘q’, the loop breaks and the window closes.
We’ve covered all of the major details on this OpenCV Python Tutorial blog. One thing that remains with our use-case is that we need to calculate the time for which the object was in front of the camera.
We make use of DataFrame to store the time values during which object detection and movement appear in the frame.
Followed by that is the VideoCapture call, as explained earlier. But here, we have a flag bit we call status. We set this status to zero at the beginning of the recording, as the object is not visible initially.
We will change the status flag to 1 when the object is being detected as shown in the above figure. Pretty simple, right?
We are going to make a list of the status for every scanned frame, and record the date and time with datetime in a list wherever a change in status occurs.
And we store the time values in a DataFrame as shown in the above explanatory diagram. We’ll conclude by writing the DataFrame to a CSV file as shown.
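The recording logic can be sketched like this. The status values below are a hypothetical hand-made sequence (a real run would append one value per frame), and the column names and the Times.csv file name are assumptions:

```python
from datetime import datetime

import pandas as pd

# hypothetical per-frame detector output:
# 0 = no object in the frame, 1 = object detected
status_list = [0, 0, 1, 1, 1, 0, 0, 1, 0]

# record a timestamp every time the status flag changes
times = []
for prev, curr in zip(status_list, status_list[1:]):
    if prev != curr:
        times.append(datetime.now())

# pair up entry/exit timestamps as rows of a DataFrame
df = pd.DataFrame(columns=["Start", "End"])
for i in range(0, len(times) - 1, 2):
    df.loc[len(df)] = [times[i], times[i + 1]]

df.to_csv("Times.csv", index=False)  # conclude by writing the CSV
print(df)
```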
The final step in our use case is to display the results. We display a graph which denotes the motion on two axes. Consider the below code:
To begin with, we import the DataFrame from the motion_detector.py file.
The next step involves converting time to a readable string format which can be parsed.
Lastly, the DataFrame of time values is plotted on the browser using Bokeh plots.
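A hedged sketch of that plotting step, assuming a Bokeh quad glyph is used to draw one shaded block per detection interval. The stand-in DataFrame replaces the one imported from motion_detector.py, the Graph.html file name and figure sizing are assumptions, and width/height follow the Bokeh 2.4+/3.x API:

```python
from datetime import datetime, timedelta

import pandas as pd
from bokeh.plotting import figure, output_file, show

# stand-in for the DataFrame imported from motion_detector.py
start = datetime.now()
df = pd.DataFrame({
    "Start": [start, start + timedelta(seconds=10)],
    "End": [start + timedelta(seconds=4), start + timedelta(seconds=13)],
})

# a datetime x-axis renders the timestamps as readable labels
p = figure(x_axis_type="datetime", width=600, height=300,
           title="Motion Graph")

# one green block per interval the object was in front of the camera
p.quad(left=df["Start"], right=df["End"], bottom=0, top=1,
       color="green")

output_file("Graph.html")
show(p)  # opens the plot in the browser
```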
I hope this article helps you in learning all the fundamentals needed to get started with OpenCV using Python.
This will be very handy when you are trying to develop applications that require image recognition and similar principles. Now, you should also be able to use these concepts to develop applications easily with the help of OpenCV in Python.
*Originally published at* www.edureka.co
In this article, you will learn to setup OpenCV-Python in your Windows system.
Below steps are tested on a Windows 7 64-bit machine with Visual Studio 2010 and Visual Studio 2012. The screenshots show VS2012.
Installing OpenCV from prebuilt binaries
1 - The below Python packages are to be downloaded and installed to their default locations.
1.1. Python 2.7.x
1.2. Numpy
1.3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials).
2 - Install all packages into their default locations. Python will be installed to C:/Python27/.
3 - After installation, open Python IDLE. Enter import numpy and make sure Numpy is working fine.
4 - Download the latest OpenCV release and double-click to extract it.
5 - Go to the opencv/build/python/2.7 folder.
6 - Copy cv2.pyd to C:/Python27/lib/site-packages.
7 - Open Python IDLE and type the following code in the Python terminal.
>>> import cv2
>>> print cv2.__version__
If the results are printed out without any errors, congratulations! You have installed OpenCV-Python successfully.
Building OpenCV from source
1 - Download and install Visual Studio and CMake.
1.1. Visual Studio 2012
1.2. CMake
2 - Download and install the necessary Python packages to their default locations.
2.1. Python 2.7.x
2.2. Numpy
2.3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials.)
Note: In this case, we are using 32-bit binaries of the Python packages. But if you want to use OpenCV for x64, 64-bit binaries of the Python packages must be installed. The problem is that there are no official 64-bit binaries of Numpy: you have to build it yourself, using the same compiler that was used to build Python. When you start Python IDLE, it shows the compiler details. You can get more information here. So your system must have the same Visual Studio version, and you must build Numpy from source.
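One quick way to see those compiler details without reading the IDLE banner is sys.version, which on Windows builds embeds the MSC compiler version (a small illustrative check):

```python
import sys

# On Windows, this string ends with something like "[MSC v.1500 64 bit (AMD64)]",
# identifying the Visual Studio compiler that built your Python
print(sys.version)
```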
Note: Another way to get 64-bit Python packages is to use ready-made Python distributions from third parties like Anaconda, Enthought, etc. They are bigger in size, but have everything you need in a single install. 32-bit versions are also available.
3 - Make sure Python and Numpy are working fine.
4 - Download the OpenCV source code.
5 - Extract it to a folder named opencv, and create a new folder named build inside it.
6 - Open CMake-gui (Start > All Programs > CMake-gui)
7 - Fill the fields as follows (see the image below):
7.1. Click on Browse Source... and locate the opencv folder.
7.2. Click on Browse Build... and locate the build folder we created.
7.3. Click on Configure.
7.4. It will open a new window to select the compiler. Choose appropriate compiler (here, Visual Studio 11) and click Finish.
7.5. Wait until analysis is finished.
8 - You will see all the fields are marked in red. Click on the WITH field to expand it. It decides what extra features you need. So mark appropriate fields. See the below image:
9 - Now click on BUILD field to expand it. First few fields configure the build method. See the below image:
10 - Remaining fields specify what modules are to be built. Since GPU modules are not yet supported by OpenCV-Python, you can completely avoid it to save time (But if you work with them, keep it there). See the image below:
11 - Now click on the ENABLE field to expand it. Make sure ENABLE_SOLUTION_FOLDERS is unchecked (Solution folders are not supported by Visual Studio Express edition). See the image below:
12 - Also make sure that in the PYTHON field, everything is filled in. (Ignore PYTHON_DEBUG_LIBRARY.) See the image below:
13 - Finally click the Generate button.
14 - Now go to our opencv/build folder. There you will find OpenCV.sln file. Open it with Visual Studio.
15 - Check build mode as Release instead of Debug.
16 - In the solution explorer, right-click on the Solution (or ALL_BUILD) and build it. It will take some time to finish.
17 - Again, right-click on INSTALL and build it. Now OpenCV-Python will be installed.
18 - Open Python IDLE and enter import cv2. If no error, it is installed correctly.
Note: We have installed with no other support like TBB, Eigen, Qt, Documentation etc. It would be difficult to explain it here. A more detailed video will be added soon or you can just hack around.