OpenCV

OpenCV

OpenCV is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then Itseez. The library is cross-platform and free to use under the open-source BSD license.

How to Create A Virtual Drag and Drop System using OpenCV and Python

Drag and Drop

In this project, we will learn how to create a virtual drag-and-drop system using OpenCV and Python.

How to install

  1. Clone this repository to your computer: https://github.com/paveldat/drag_and_drop.git
  2. Install all the requirements: run libraries.bat, or pip install -r requirements.txt
  3. Run the program: python main.py

Help

You might face an issue where the webcam does not show and you get errors. To solve it, change the device index in this line (for example, to 1): cap = cv2.VideoCapture(0). Increment this number until you see your webcam.
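If you are not sure which index your webcam is on, a small probe loop like this can help (a minimal sketch; available indices vary by OS and driver):

import cv2

# Try the first few camera indices and report which ones deliver frames
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, _ = cap.read()
        print(f"Index {index}: {'works' if ok else 'opens but returns no frames'}")
    cap.release()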

Hand Landmarks

Click

In order to simulate a click, you need to bring the index and middle fingers of your hand together. An example of a valid click is shown in the image below.
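Under the hood, this kind of gesture is typically detected by measuring the distance between the two fingertip landmarks. Here is a minimal sketch, assuming MediaPipe's hand-landmark indices (8 = index fingertip, 12 = middle fingertip) and a hypothetical pixel threshold; it is not the project's exact code:

import math

def is_click(landmarks, threshold=40):
    # `landmarks` is a hypothetical list of (x, y) pixel coordinates per hand landmark
    x1, y1 = landmarks[8]    # index fingertip
    x2, y2 = landmarks[12]   # middle fingertip
    # The gesture counts as a click when the two fingertips come close together
    return math.hypot(x2 - x1, y2 - y1) < threshold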

Result

Download details:
Author: paveldat
Source code: https://github.com/paveldat/drag_and_drop
License:

#opencv #python

How to Create A Virtual Drag and Drop System using OpenCV and Python

Building Exercise Monitoring Using OpenCV and Mediapipe

Exercise Monitoring

Unlike most common fitness apps available on the market, this app uses computer vision to monitor, analyze, and quantify physical exercises. Instead of just giving the user a set of routines, the app is capable of understanding proper workout postures (e.g., sit-ups). With that, the app can automatically count reps when the user follows the proper workout posture for the given routine.
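As a rough illustration of how posture-based rep counting can work (a sketch, not this project's actual code), one common approach is to compute the angle at a joint from three pose landmarks and count a rep each time that angle dips below and then rises back above a threshold:

import numpy as np

def joint_angle(a, b, c):
    # Angle at landmark b (e.g. the hip for sit-ups), given three (x, y) points
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

class RepCounter:
    def __init__(self, low=60, high=150):
        self.low, self.high = low, high   # hypothetical angle thresholds
        self.reps, self.down = 0, False

    def update(self, angle):
        # One rep = the angle dips below `low`, then rises back above `high`
        if angle < self.low:
            self.down = True
        elif angle > self.high and self.down:
            self.reps += 1
            self.down = False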

Concept

Download details:
Author: mecsung
Source code: https://github.com/mecsung/Exercise-Monitoring-Using-Opencv
License:

#opencv #python #mediapipe 

Building Exercise Monitoring Using OpenCV and Mediapipe

A Simple Object Detection Application Utilizing OpenCV

ObjectDetection - Face Detection

This simple object detection app utilizes OpenCV, an open-source computer vision and machine learning software library. Countless users rely on the OpenCV library to create vast projects. In order to create an object detection application, I needed to research the Haar cascade algorithm, which is essential to building a functioning application. Check out opencv.org!

HAAR Cascade

Here's a simple explanation of what happens whenever an image is scanned for a specified object using the Haar cascade algorithm. A small matrix moves across the image from left to right and top to bottom, extracts a "feature" from each section it covers, and then classifies whether the section contains the specified feature we're looking for ("yes or no"). So how do we decide whether a feature is a yes or a no? By training a cascade classifier. By feeding the classifier positive and negative samples of our desired object, it starts to recognize which sections contain the features. OpenCV has pre-trained cascade classifiers that are used within this project!

How it Works / Try it out!

Once you run the program, the application will prompt you for an image file. This image file needs to be located within the 'Assets/Images/' directory of the project in order to continue. If the image file is not detected, a simple try/except statement asks you for the file again. Once the image has been fed to the program, it is resized while maintaining the aspect ratio for a user-friendly experience. The image is then converted to grayscale and run through our specified cascade classifier (in this case, the Haar cascade frontal face XML file provided by OpenCV). The program detects any matching features. Finally, the resulting image is displayed on screen with a box highlighting the detected object.

Cascade = cv2.CascadeClassifier("Assets/CascadeClassifier/haarcascade_frontalface_default.xml") 
If you'd like to test out some of the other cascade classifiers, I suggest OpenCV's predefined Haar cascade classifiers. Simply download the classifier to the 'Assets/CascadeClassifier/' directory, then replace the path in the code highlighted above with the path of the new classifier.
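For reference, the core of such a detection flow looks roughly like this (a simplified sketch, not the project's exact code; the image path and detection parameters are illustrative):

import cv2

cascade = cv2.CascadeClassifier("Assets/CascadeClassifier/haarcascade_frontalface_default.xml")
img = cv2.imread("Assets/Images/sample.jpg")        # illustrative image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # cascade classifiers work on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # box each detection
cv2.imshow("Result", img)
cv2.waitKey(0)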

Next Steps

Overall, this is a great way of showcasing the first steps in computer vision. My next step will definitely be looking into training my own cascade classifier.

Download details:
Author: ABCodez
Source code: https://github.com/ABCodez/ObjectDetection
License:

#opencv #python #machinelearning

A Simple Object Detection Application Utilizing OpenCV

Cvnp: Pybind11 Casts Between Numpy and OpenCV In C++

cvnp: pybind11 casts and transformers between numpy and OpenCV, possibly with shared memory

Explicit transformers between cv::Mat / cv::Matx and numpy.ndarray, with or without shared memory

Notes:

  • When going from Python to C++ (nparray_to_mat), the memory is always shared
  • When going from C++ to Python (mat_to_nparray), you have to specify whether you want to share memory via the boolean parameter share_memory
    pybind11::array mat_to_nparray(const cv::Mat& m, bool share_memory);
    cv::Mat         nparray_to_mat(pybind11::array& a);

    template<typename _Tp, int _rows, int _cols>
    pybind11::array matx_to_nparray(const cv::Matx<_Tp, _rows, _cols>& m, bool share_memory);
    template<typename _Tp, int _rows, int _cols>
    void            nparray_to_matx(pybind11::array &a, cv::Matx<_Tp, _rows, _cols>& out_matrix);

Warning: be extremely cautious about the lifetime of your matrices when using shared memory! For example, the code below is guaranteed to be undefined behavior, and may cause a crash much later.

pybind11::array make_array()
{
    cv::Mat m(cv::Size(10, 10), CV_8UC1);               // create a matrix on the stack
    pybind11::array a = cvnp::mat_to_nparray(m, true);  // create a pybind array from it, using
                                                        // shared memory, which is on the stack!
    return a;                                                        
}  // Here be dragons, when closing the scope!
   // m is now out of scope, it is thus freed, 
   // and the returned array directly points to the old address on the stack!

Automatic casts:

Without shared memory

  • Casts without shared memory between cv::Mat, cv::Matx, cv::Vec and numpy.ndarray
  • Casts without shared memory for simple types, between cv::Size, cv::Point, cv::Point3 and python tuple

With shared memory

  • Casts with shared memory between cvnp::Mat_shared, cvnp::Matx_shared, cvnp::Vec_shared and numpy.ndarray

When you want to cast with shared memory, use these wrappers, which can easily be constructed from their OpenCV counterparts. They are defined in cvnp/cvnp_shared_mat.h.

Be sure that your matrices' lifetime is sufficient (never share the memory of a temporary matrix!)

Supported matrix types

Since OpenCV supports a subset of numpy types, here is the table of supported types:

➜ python
>>> import cvnp
>>> cvnp.print_types_synonyms()
  cv_depth   cv_depth_name   np_format   np_format_long
     0          CV_8U           B         np.uint8  
     1          CV_8S           b         np.int8   
     2          CV_16U          H        np.uint16  
     3          CV_16S          h         np.int16  
     4          CV_32S          i         np.int32  
     5          CV_32F          f          float    
     6          CV_64F          d        np.float64

How to use it in your project

  1. Add cvnp to your project. For example:
cd external
git submodule add https://github.com/pthom/cvnp.git
  2. Link it to your python module:

In your python module CMakeLists, add:

add_subdirectory(path/to/cvnp)
target_link_libraries(your_target PRIVATE cvnp)
  3. (Optional) If you want to import the declared functions in your module:

Write this in your main module code:

void pydef_cvnp(pybind11::module& m);

PYBIND11_MODULE(your_module, m)
{
    ....
    ....
    ....
    pydef_cvnp(m);
}

You will get two simple functions:

  • cvnp.list_types_synonyms()
  • cvnp.print_types_synonyms()
>>> import cvnp
>>> import pprint
>>> pprint.pprint(cvnp.list_types_synonyms(), indent=2, width=120)
[ {'cv_depth': 0, 'cv_depth_name': 'CV_8U', 'np_format': 'B', 'np_format_long': 'np.uint8'},
  {'cv_depth': 1, 'cv_depth_name': 'CV_8S', 'np_format': 'b', 'np_format_long': 'np.int8'},
  {'cv_depth': 2, 'cv_depth_name': 'CV_16U', 'np_format': 'H', 'np_format_long': 'np.uint16'},
  {'cv_depth': 3, 'cv_depth_name': 'CV_16S', 'np_format': 'h', 'np_format_long': 'np.int16'},
  {'cv_depth': 4, 'cv_depth_name': 'CV_32S', 'np_format': 'i', 'np_format_long': 'np.int32'},
  {'cv_depth': 5, 'cv_depth_name': 'CV_32F', 'np_format': 'f', 'np_format_long': 'float'},
  {'cv_depth': 6, 'cv_depth_name': 'CV_64F', 'np_format': 'd', 'np_format_long': 'np.float64'}]

Shared and non shared matrices - Demo

Demo based on extracts from the tests:

We are using this struct:

// CvNp_TestHelper is a test helper struct
struct CvNp_TestHelper
{
    // m is a *shared* matrix (i.e `cvnp::Mat_shared`)
    cvnp::Mat_shared m = cvnp::Mat_shared(cv::Mat::eye(cv::Size(4, 3), CV_8UC1));
    void SetM(int row, int col, uchar v) { m.Value.at<uchar>(row, col) = v; }

    // m_ns is a standard OpenCV matrix
    cv::Mat m_ns = cv::Mat::eye(cv::Size(4, 3), CV_8UC1);
    void SetM_ns(int row, int col, uchar v) { m_ns.at<uchar>(row, col) = v; }

    // ...
};

Shared matrices

Changes propagate from Python to C++ and from C++ to Python

def test_mat_shared():
    # CvNp_TestHelper is a test helper object
    o = CvNp_TestHelper()
    # o.m is a *shared* matrix i.e `cvnp::Mat_shared` in the object
    assert o.m.shape == (3, 4)

    # From python, change value in the C++ Mat (o.m) and assert that the changes are visible from python and C++
    o.m[0, 0] = 2
    assert o.m[0, 0] == 2

    # Make a python linked copy of the C++ Mat, named m_linked.
    # Values of m_linked and the C++ mat should change together
    m_linked = o.m
    m_linked[1, 1] = 3
    assert o.m[1, 1] == 3

    # Ask C++ to change a value in the matrix, at (0,0)
    # and verify that m_linked as well as o.m are impacted
    o.SetM(0, 0, 10)
    o.SetM(2, 3, 15)
    assert m_linked[0, 0] == 10
    assert m_linked[2, 3] == 15
    assert o.m[0, 0] == 10
    assert o.m[2, 3] == 15

Non shared matrices

Changes propagate from C++ to Python, but not the other way.

def test_mat_not_shared():
    # CvNp_TestHelper is a test helper object
    o = CvNp_TestHelper()
    # o.m_ns is a bare `cv::Mat`. Its memory is *not* shared
    assert o.m_ns.shape == (3, 4)

    # From python, change a value in the C++ Mat (o.m_ns) and assert that the changes are *not* applied
    o.m_ns[0, 0] = 2
    assert o.m_ns[0, 0] != 2 # No shared memory!

    # Ask C++ to change a value in the matrix, at (0,0) and verify that the change is visible from python
    o.SetM_ns(2, 3, 15)
    assert o.m_ns[2, 3] == 15

Non continuous matrices

From C++

The conversion of non-continuous matrices from C++ to Python will fail. You need to clone them to make them continuous beforehand.

Example:

    cv::Mat m(cv::Size(10, 10), CV_8UC1);
    cv::Mat sub_matrix = m(cv::Rect(3, 0, 3, m.cols));

    TEST_NAME("Try to convert a non continuous Mat to py::array, ensure it throws");
    TEST_ASSERT_THROW(
        cvnp::mat_to_nparray(sub_matrix, share_memory)
    );

    TEST_NAME("Clone the mat, ensure the clone can now be converted to py::array");
    cv::Mat sub_matrix_clone = sub_matrix.clone();
    py::array a = cvnp::mat_to_nparray(sub_matrix_clone, share_memory);
    TEST_ASSERT(a.shape()[0] == 10);

From python

The conversion of non-continuous matrices from Python to C++ will work, with or without shared memory.

# import test utilities
>>> from cvnp import CvNp_TestHelper, cvnp_roundtrip, cvnp_roundtrip_shared, short_lived_matx, short_lived_mat
>>> import numpy as np
>>> o=CvNp_TestHelper()
# o.m is of type `cvnp::Mat_shared`
>>> o.m
array([[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0]], dtype=uint8)

# Create a non continuous array
>>> m = np.zeros((10,10))
>>> sub_matrix = m[4:6, :]
>>> sub_matrix.flags['F_CONTIGUOUS']
False

# Assign it to a `cvnp::Mat_shared`
>>> o.m = m
# Check that memory sharing works
>>> m[0,0]=42
>>> o.m[0,0]
42.0

Build and test

These steps are only for development and testing of this package; they are not required in order to use it in a different project.

Build

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

mkdir build
cd build

# if you do not have a global install of OpenCV and pybind11
conan install .. --build=missing
# if you do have a global install of OpenCV, but not pybind11
conan install ../conanfile_pybind_only.txt --build=missing

cmake ..
make

Test

In the build dir, run:

cmake --build . --target test

Deep clean

rm -rf build
rm -rf venv
rm -rf .pytest_cache
rm *.so
rm *.pyd

Notes

Thanks to Dan Mašek who gave me some inspiration here: https://stackoverflow.com/questions/60949451/how-to-send-a-cvmat-to-python-over-shared-memory

This code is intended to be integrated into your own pip package. As such, no pip tooling is provided.


Author: pthom
Source code: https://github.com/pthom/cvnp
License: MIT license

#cplusplus #opencv #numpy

Cvnp: Pybind11 Casts Between Numpy and OpenCV In C++

Using C++/MFC/OpenCV to Build A NCC-Based Image Matching Algorithm

Fastest Image Pattern Matching

The best template matching implementation on the Internet.

Using C++/MFC/OpenCV to build a Normalized Cross-Correlation-based image alignment algorithm

The result represents the similarity of two images; the NCC formula is shown in the image below.
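For comparison, OpenCV's built-in template matching exposes a similar normalized score; here is a minimal sketch (this project implements its own optimized version, so this is only illustrative, and the file names are placeholders):

import cv2

img = cv2.imread("inspection.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
tmpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
# TM_CCOEFF_NORMED produces a normalized cross-correlation style score in [-1, 1]
result = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
print(f"best score {max_val:.3f} at {max_loc}")

Note that cv2.matchTemplate alone is neither rotation invariant nor pyramid-accelerated; those are precisely the capabilities this project adds.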

Improvements

  1. rotation invariant
  2. uses an image pyramid as the search strategy
  3. minimizes the inspection area on the top level of the image pyramid
  4. optimized rotation time over OpenCV by setting the needed "size" and modifying the rotation matrix
  5. rotation precision is as high as possible
  6. SIMD version of image convolution (extreme speedup for large templates)

Comparison with commercial libraries

Inspection Image : 4024 X 3036

Template Image: 762 X 521

| Library | Index | Score | Angle    | PosX     | PosY     | Execution Time |
|---------|-------|-------|----------|----------|----------|----------------|
| My Tool | 0     | 1     | 0.046    | 1725.857 | 1045.433 | 76ms           |
| My Tool | 1     | 0.998 | -119.979 | 2662.869 | 1537.446 |                |
| My Tool | 2     | 0.991 | 120.150  | 1768.936 | 2098.494 |                |
| Cognex  | 0     | 1     | 0.030    | 1725.960 | 1045.470 | 125ms          |
| Cognex  | 1     | 0.989 | -119.960 | 2663.750 | 1538.040 |                |
| Cognex  | 2     | 0.983 | 120.090  | 1769.250 | 2099.410 |                |
| Aisys   | 0     | 1     | 0        | 1726.000 | 1045.500 | 202ms          |
| Aisys   | 1     | 0.990 | -119.935 | 2663.630 | 1539.060 |                |
| Aisys   | 2     | 0.979 | 120.000  | 1769.63  | 2099.780 |                |

Note: to get the best performance, please make sure you are using the Release version (of both this project and the OpenCV DLL). O2-related optimization settings significantly affect efficiency, and the difference between Debug and Release can be up to 7x in some cases.

Tests

test0 - with user interface

image

test1 (164ms 80ms (SIMD version), TargetNum=5, Overlap=0.8, Score=0.8, Tolerance Angle=180)

image

test2 (237 ms, 175ms (SIMD Version))

image

test3 (152 ms, 100ms (SIMD Version))

image

test4 (21 ms, Target Number=38, Score=0.8, Tolerance Angle=0, Min Reduced Area=256)

image

test5 (27 ms)

image

test6 (1157ms, 657ms (SIMD Version), Target Number=15, Score=0.8, Tolerance Angle=180, Min Reduced Area=256)

image

Steps to build this project

  1. Download Visual Studio 2017 or newer versions
  2. Check on the option of "x86 and x64 version of C++ MFC"
  3. Install
  4. Open MatchTool.vcxproj
  5. Upgrade if it is required
  6. Open this project's property page
  7. Modify "General-Output Directory" to the .exe directory you want (usually the directory where your opencv_worldXX.dll is located)
  8. Choose the SDK version you have in "General-Windows SDK Version"
  9. Choose the right toolset you have in "General-Platform Toolset" (for me, it is Visual Studio 2017 (v141))
  10. Go to "VC++ Directories", and type in "Include Directories" for your own OpenCV (e.g. C:\OpenCV3.1\opencv\build\include or C:\OpenCV4.0\opencv\build\include)
  11. Type in "Library Directories" for your own OpenCV's library path (the directory where your opencv_worldXX.lib is located)
  12. Go to "Linker-Input", and type in the library name (e.g. opencv_world310d_vs2017.lib or opencv_world401d.lib)
  13. Make sure that your opencv_worldXX.dll and MatchTool.Lang are in the same directory as this project's .exe

Adaptation for OpenCV4.X

  1. Select Debug_4.X or Release_4.X in "Solution Configuration"

  2. Repeat steps 10~12 from the previous section

Usage of this project

  1. Select the Language you want
  2. Drag Source Image to the Left Area
  3. Drag Dst Image to the Right Top Area
  4. Push "Execute Button"

Parameters Setting

  1. Target Number: the maximum number of objects you want to find in the inspection image
  2. Max Overlap Ratio: (the overlap area between two findings) / (area of the golden sample)
  3. Score (Similarity): accepted similarity of findings (0~1); a lower score costs more execution time
  4. Tolerance Angle: possible rotation of targets in the inspection image (180 means the search range is -180~180); a higher angle costs more execution time, or you can push the "↓" button to select two angle ranges
  5. Min Reduced Area: the minimum area of the topmost level of the image pyramid (training stage)

About outputs

  1. results are sorted by score (in decreasing order)
  2. Angles: detected rotation of each finding
  3. PosX, PosY: pixel position of each finding

Demonstration Video

youtube link

Image

This project can also be used as OCR

youtube link

image


Author: DennisLiu1993
Source code: https://github.com/DennisLiu1993/Fastest_Image_Pattern_Matching
License: BSD-2-Clause license
#cplusplus #opencv 

Using C++/MFC/OpenCV to Build A NCC-Based Image Matching Algorithm

Ruby-opencv: Versioned fork Of The OpenCV Gem For Ruby

Ruby-opencv

An OpenCV wrapper for Ruby.

Install

Linux/Mac

  1. Install OpenCV
  2. Install ruby-opencv
$ gem install ruby-opencv -- --with-opencv-dir=/path/to/opencvdir

Note: /path/to/opencvdir is the directory where you installed OpenCV.

Windows (RubyInstaller)

See install-ruby-opencv-with-rubyinstaller-on-windows.md.

Sample code

Load and Display an Image

A sample that loads and displays an image; it is equivalent to the code in this tutorial.

require 'opencv'
include OpenCV

if ARGV.size == 0
  puts "Usage: ruby #{__FILE__} ImageToLoadAndDisplay"
  exit
end

image = nil
begin
  image = CvMat.load(ARGV[0], CV_LOAD_IMAGE_COLOR) # Read the file.
rescue
  puts 'Could not open or find the image.'
  exit
end

window = GUI::Window.new('Display window') # Create a window for display.
window.show(image) # Show our image inside it.
GUI::wait_key # Wait for a keystroke in the window.

Face Detection

A sample to detect faces from an image.

require 'opencv'
include OpenCV

if ARGV.length < 2
  puts "Usage: ruby #{__FILE__} source dest"
  exit
end

data = './data/haarcascades/haarcascade_frontalface_alt.xml'
detector = CvHaarClassifierCascade::load(data)
image = CvMat.load(ARGV[0])
detector.detect_objects(image).each do |region|
  color = CvColor::Blue
  image.rectangle! region.top_left, region.bottom_right, :color => color
end

image.save_image(ARGV[1])
window = GUI::Window.new('Face detection')
window.show(image)
GUI::wait_key

For more samples, see examples/*.rb


Requirement


Author: Ruby-opencv
Source Code: https://github.com/ruby-opencv/ruby-opencv 
License: View license

#ruby #opencv 

Ruby-opencv: Versioned fork Of The OpenCV Gem For Ruby

How to Train Object Detector with Minimum DataSets

Train Object Detection With Small Datasets

Object detection, the task of localising and classifying objects in a scene, is one of the most popular tasks in computer vision, but it has a major drawback: a large annotated dataset is needed to train a model. Annotating a dataset is expensive, and the freely available datasets are often insufficient, as they do not contain all the classes we are interested in. Thus, the goal of this tutorial is to introduce the main techniques for training a good object detector using the minimum amount of annotated data.

#computervision #opencv #machinelearning 

How to Train Object Detector with Minimum DataSets

Pants: ML Video Filter to Add Pants or Blur Out Your Lower Half on Zoom Calls

Pants filter

Add pants or blur out everything from the waist down for extra safety on Zoom calls.

The pants filter uses OpenCV and MediaPipe's Pose detection to add a real-time pants filter to video input. The result is piped to a virtual camera output using pyvirtualcam.
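A rough sketch of that pipeline (not the project's actual code; landmarks 23 and 24 are MediaPipe's hip indices, and the blur-below-the-waist logic is deliberately simplified):

import cv2
import mediapipe as mp
import pyvirtualcam

cap = cv2.VideoCapture(0)
pose = mp.solutions.pose.Pose()
ok, frame = cap.read()
h, w = frame.shape[:2]
with pyvirtualcam.Camera(width=w, height=h, fps=30) as cam:
    while ok:
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            # Landmarks 23 and 24 are the left/right hips; blur everything below the waistline
            hips = [result.pose_landmarks.landmark[i] for i in (23, 24)]
            waist_y = int(min(l.y for l in hips) * h)
            if 0 < waist_y < h - 1:
                frame[waist_y:] = cv2.GaussianBlur(frame[waist_y:], (51, 51), 0)
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()
        ok, frame = cap.read()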

To use the resulting output, you must have a virtual camera device. The easiest way to do this on any OS is to download and install OBS, open it up, then click "Start Virtual Camera" on the bottom right. You can now close OBS for good. With the pants filter running, select "OBS Virtual Camera" as your video source in Zoom/Teams/etc.

Hit the p key on the preview window to toggle your pants among ten different styles, included in the pants folder as a standard PNG template.

Hit ESC on the preview window to exit the pants filter, or CTRL+C to close the Python process.

Optional flags:

  • --help - Display the below options
  • --input - Choose a camera or video file path (defaults to device 0)
  • --pants - Draw pants by default, instead of only blurring
  • --width - Proportional width of pants beyond mid-leg width (defaults to 0.4)
  • --flip - Flip along the y-axis for selfie view
  • --landmarks - Draw detected body landmarks from MediaPipe
  • --record - Write the output to a timestamped AVI recording in the current folder

Example usage:

  • python pantser.py -h - Show all argument options
  • python pantser.py --input 2 --pants 1 --width .6 --flip 1 --landmarks 1 --record 1 - Camera device 2; slightly wider hips; flip the image; draw landmarks; generate a recording
  • python pantser.py -i "/Downloads/shakira.mp4" -w .9 - Use video file as input; extra hips

Download Details: 
Author: everythingishacked
Source Code: https://github.com/everythingishacked/Pants 
#python #opencv 
 

Pants: ML Video Filter to Add Pants or Blur Out Your Lower Half on Zoom Calls

JavaCV: Java interface to OpenCV, FFmpeg, and More

Introduction

JavaCV uses wrappers from the JavaCPP Presets of commonly used libraries by researchers in the field of computer vision (OpenCV, FFmpeg, libdc1394, FlyCapture, Spinnaker, OpenKinect, librealsense, CL PS3 Eye Driver, videoInput, ARToolKitPlus, flandmark, Leptonica, and Tesseract) and provides utility classes to make their functionality easier to use on the Java platform, including Android.

JavaCV also comes with hardware accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods to execute code in parallel on multiple cores (Parallel), user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), detection and matching of feature points (ObjectFinder), a set of classes that implement direct image alignment of projector-camera systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer, and ReflectanceInitializer), a blob analysis package (Blobs), as well as miscellaneous functionality in the JavaCV class. Some of these classes also have an OpenCL and OpenGL counterpart, their names ending with CL or starting with GL, i.e.: JavaCVCL, GLCanvasFrame, etc.

To learn how to use the API, since the documentation is currently lacking, please refer to the Sample Usage section below as well as the sample programs, including two for Android (FacePreview.java and RecordActivity.java), also found in the samples directory. You may also find it useful to refer to the source code of ProCamCalib and ProCamTracker, as well as examples ported from the OpenCV2 Cookbook and the associated wiki pages.

Please keep me informed of any updates or fixes you make to the code so that I may integrate them into the next release. Thank you! And feel free to ask questions on the mailing list or the discussion forum if you encounter any problems with the software! I am sure it is far from perfect...

Downloads

Archives containing JAR files are available as releases. The binary archive contains builds for Android, iOS, Linux, Mac OS X, and Windows. The JAR files for specific child modules or platforms can also be obtained individually from the Maven Central Repository.

To install the JAR files manually, follow the instructions in the Manual Installation section below.

We can also have everything downloaded and installed automatically with:

  • Maven (inside the pom.xml file)
  <dependency>
    <groupId>org.bytedeco</groupId>
    <artifactId>javacv-platform</artifactId>
    <version>1.5.7</version>
  </dependency>
  • Gradle (inside the build.gradle file)
  dependencies {
    implementation group: 'org.bytedeco', name: 'javacv-platform', version: '1.5.7'
  }
  • Leiningen (inside the project.clj file)
  :dependencies [
    [org.bytedeco/javacv-platform "1.5.7"]
  ]
  • sbt (inside the build.sbt file)
  libraryDependencies += "org.bytedeco" % "javacv-platform" % "1.5.7"

This downloads binaries for all platforms, but to get binaries for only one platform we can set the javacpp.platform system property (via the -D command line option) to something like android-arm, linux-x86_64, macosx-x86_64, windows-x86_64, etc. Please refer to the README.md file of the JavaCPP Presets for details. Another option available to Gradle users is Gradle JavaCPP, and similarly for Scala users there is SBT-JavaCV.

Required Software

To use JavaCV, you will first need to download and install the following software:

Further, although not always required, some functionality of JavaCV also relies on:

Finally, please make sure everything has the same bitness: 32-bit and 64-bit modules do not mix under any circumstances.

Manual Installation

Simply put all the desired JAR files (opencv*.jar, ffmpeg*.jar, etc.), in addition to javacpp.jar and javacv.jar, somewhere in your class path. Here are some more specific instructions for common cases:

NetBeans (Java SE 7 or newer):

  1. In the Projects window, right-click the Libraries node of your project, and select "Add JAR/Folder...".
  2. Locate the JAR files, select them, and click OK.

Eclipse (Java SE 7 or newer):

  1. Navigate to Project > Properties > Java Build Path > Libraries and click "Add External JARs...".
  2. Locate the JAR files, select them, and click OK.

Visual Studio Code (Java SE 7 or newer):

  1. Navigate to Java Projects > Referenced Libraries, and click +.
  2. Locate the JAR files, select them, and click OK.

IntelliJ IDEA (Android 7.0 or newer):

  1. Follow the instructions on this page: http://developer.android.com/training/basics/firstapp/
  2. Copy all the JAR files into the app/libs subdirectory.
  3. Navigate to File > Project Structure > app > Dependencies, click +, and select "2 File dependency".
  4. Select all the JAR files from the libs subdirectory.

After that, the wrapper classes for OpenCV and FFmpeg, for example, can automatically access all of their C/C++ APIs:

Sample Usage

The class definitions are basically ports to Java of the original header files in C/C++, and I deliberately decided to keep as much of the original syntax as possible. For example, here is a method that tries to load an image file, smooth it, and save it back to disk:

import org.bytedeco.opencv.opencv_core.*;
import org.bytedeco.opencv.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_core.*;
import static org.bytedeco.opencv.global.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_imgcodecs.*;

public class Smoother {
    public static void smooth(String filename) {
        Mat image = imread(filename);
        if (image != null) {
            GaussianBlur(image, image, new Size(3, 3), 0);
            imwrite(filename, image);
        }
    }
}

JavaCV also comes with helper classes and methods on top of OpenCV and FFmpeg to facilitate their integration to the Java platform. Here is a small demo program demonstrating the most frequently useful parts:

import java.io.File;
import java.net.URL;
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.*;
import org.bytedeco.opencv.opencv_core.*;
import org.bytedeco.opencv.opencv_imgproc.*;
import org.bytedeco.opencv.opencv_calib3d.*;
import org.bytedeco.opencv.opencv_objdetect.*;
import static org.bytedeco.opencv.global.opencv_core.*;
import static org.bytedeco.opencv.global.opencv_imgproc.*;
import static org.bytedeco.opencv.global.opencv_calib3d.*;
import static org.bytedeco.opencv.global.opencv_objdetect.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            URL url = new URL("https://raw.github.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt.xml");
            File file = Loader.cacheResource(url);
            classifierName = file.getAbsolutePath();
        }

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CascadeClassifier classifier = new CascadeClassifier(classifierName);
        if (classifier == null) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
        // DC1394FrameGrabber, FlyCapture2FrameGrabber, OpenKinectFrameGrabber, OpenKinect2FrameGrabber,
        // RealSenseFrameGrabber, RealSense2FrameGrabber, PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = FrameGrabber.createDefault(0);
        grabber.start();

        // CanvasFrame, FrameGrabber, and FrameRecorder use Frame objects to communicate image data.
        // We need a FrameConverter to interface with other APIs (Android, Java 2D, JavaFX, Tesseract, OpenCV, etc).
        OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();

        // FAQ about IplImage and Mat objects from OpenCV:
        // - For custom raw processing of data, createBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData, and under Android we can
        //   also use that Buffer with Bitmap.copyPixelsFromBuffer() and copyPixelsToBuffer().
        // - To get a BufferedImage from an IplImage, or vice versa, we can chain calls to
        //   Java2DFrameConverter and OpenCVFrameConverter, one after the other.
        // - Java2DFrameConverter also has static copy() methods that we can use to transfer
        //   data more directly between BufferedImage and IplImage or Mat via Frame objects.
        Mat grabbedImage = converter.convert(grabber.grab());
        int height = grabbedImage.rows();
        int width = grabbedImage.cols();

        // Objects allocated with `new`, clone(), or a create*() factory method are automatically released
        // by the garbage collector, but may still be explicitly released by calling deallocate().
        // You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc. on objects allocated this way.
        Mat grayImage = new Mat(height, width, CV_8UC1);
        Mat rotatedImage = grabbedImage.clone();

        // The OpenCVFrameRecorder class simply uses the VideoWriter of opencv_videoio,
        // but FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = FrameRecorder.createDefault("output.avi", width, height);
        recorder.start();

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        // We should also specify the relative monitor/camera response for proper gamma correction.
        CanvasFrame frame = new CanvasFrame("Some Title", CanvasFrame.getDefaultGamma()/grabber.getGamma());

        // Let's create some random 3D rotation...
        Mat randomR    = new Mat(3, 3, CV_64FC1),
            randomAxis = new Mat(3, 1, CV_64FC1);
        // We can easily and efficiently access the elements of matrices and images
        // through an Indexer object with the set of get() and put() methods.
        DoubleIndexer Ridx = randomR.createIndexer(),
                   axisIdx = randomAxis.createIndexer();
        axisIdx.put(0, (Math.random() - 0.5) / 4,
                       (Math.random() - 0.5) / 4,
                       (Math.random() - 0.5) / 4);
        Rodrigues(randomAxis, randomR);
        double f = (width + height) / 2.0;  Ridx.put(0, 2, Ridx.get(0, 2) * f);
                                            Ridx.put(1, 2, Ridx.get(1, 2) * f);
        Ridx.put(2, 0, Ridx.get(2, 0) / f); Ridx.put(2, 1, Ridx.get(2, 1) / f);
        System.out.println(Ridx);

        // We can allocate native arrays using constructors taking an integer as argument.
        Point hatPoints = new Point(3);

        while (frame.isVisible() && (grabbedImage = converter.convert(grabber.grab())) != null) {
            // Let's try to detect some faces! but we need a grayscale image...
            cvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            RectVector faces = new RectVector();
            classifier.detectMultiScale(grayImage, faces);
            long total = faces.size();
            for (long i = 0; i < total; i++) {
                Rect r = faces.get(i);
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                rectangle(grabbedImage, new Point(x, y), new Point(x + w, y + h), Scalar.RED, 1, CV_AA, 0);

                // To access or pass as argument the elements of a native array, call position() before.
                hatPoints.position(0).x(x - w / 10     ).y(y - h / 10);
                hatPoints.position(1).x(x + w * 11 / 10).y(y - h / 10);
                hatPoints.position(2).x(x + w / 2      ).y(y - h / 2 );
                fillConvexPoly(grabbedImage, hatPoints.position(0), 3, Scalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            threshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            MatVector contours = new MatVector();
            findContours(grayImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            long n = contours.size();
            for (long i = 0; i < n; i++) {
                Mat contour = contours.get(i);
                Mat points = new Mat();
                approxPolyDP(contour, points, arcLength(contour, true) * 0.02, true);
                drawContours(grabbedImage, new MatVector(points), -1, Scalar.BLUE);
            }

            warpPerspective(grabbedImage, rotatedImage, randomR, rotatedImage.size());

            Frame rotatedFrame = converter.convert(rotatedImage);
            frame.showImage(rotatedFrame);
            recorder.record(rotatedFrame);
        }
        frame.dispose();
        recorder.stop();
        grabber.stop();
    }
}

Furthermore, after creating a pom.xml file with the following content:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.bytedeco.javacv</groupId>
    <artifactId>demo</artifactId>
    <version>1.5.7</version>
    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>1.5.7</version>
        </dependency>

        <!-- Additional dependencies required to use CUDA and cuDNN -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>opencv-platform-gpu</artifactId>
            <version>4.5.5-1.5.7</version>
        </dependency>

        <!-- Optional GPL builds with (almost) everything enabled -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>ffmpeg-platform-gpl</artifactId>
            <version>5.0-1.5.7</version>
        </dependency>
    </dependencies>
    <build>
        <sourceDirectory>.</sourceDirectory>
    </build>
</project>

And by placing the source code above in Demo.java, or similarly for other classes found in the samples, we can use the following command to have everything first installed automatically and then executed by Maven:

 $ mvn compile exec:java -Dexec.mainClass=Demo

Note: In case of errors, please make sure that the artifactId in the pom.xml file reads javacv-platform, not javacv only, for example. The artifact javacv-platform adds all the necessary binary dependencies.

Build Instructions

If the binary files available above are not enough for your needs, you might need to rebuild them from the source code. To this end, the project files were created for:

Once installed, simply call the usual mvn install command for JavaCPP, its Presets, and JavaCV. By default, no other dependencies than a C++ compiler for JavaCPP are required. Please refer to the comments inside the pom.xml files for further details.

Instead of building the native libraries manually, we can run mvn install for JavaCV only and rely on the snapshot artifacts from the CI builds:

Download Details:
Author: bytedeco
Source Code: https://github.com/bytedeco/javacv
License: View license

#computervision  #java #opencv 

JavaCV: Java interface to OpenCV, FFmpeg, and More

How to Use the Python Style-Transfer-Quality Library and TensorFlow

Do you want to learn how to merge images using AI, Python, and a style transfer library? Then you need to check out Python Neural Style Transfer. Neural style transfer is a process that uses neural networks to apply the artistic style of one image to another. This means you can take famous artworks and apply their styles to your own images. Better yet, you can do it in just ten-ish minutes using Python and open-source tools like TensorFlow and Matplotlib.

In this video you'll go through: 
1. Downloading a model from Tensorflow Model Hub
2. Preprocessing images for neural style transfer
3. Applying and visualizing style transfer (see the sketch after this list)
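A minimal sketch of that flow, assuming the Magenta arbitrary-image-stylization model from TensorFlow Hub (the file names are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    # Decode to float32 in [0, 1] and add a batch dimension
    img = tf.image.decode_image(tf.io.read_file(path), channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    return img[tf.newaxis, :]

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
content, style = load_image("content.jpg"), load_image("style.jpg")  # placeholder paths
stylized = model(tf.constant(content), tf.constant(style))[0]        # batch of stylized images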

#opencv #python #tensorflow #AI 

How to Use the Python Style-Transfer-Quality Library and TensorFlow

How to Merge Images with Neural Styles using AI and Python

This is a cool and fun Python project that blends and merges an image with a new style, turning the image into a new concept. This tutorial covers the full process of installing and running this cool feature.

#opencv #python #AI 

How to Merge Images with Neural Styles using AI and Python

How to Create The Game Of Tetris using OpenCV & Python

In this post, we’ll create the game of Tetris as shown in the video above.

Tetris

Most readers are probably familiar with Tetris – a popular and addictive video game created by Russian software engineer Alexey Pajitnov in 1984.

Let’s look at the parts of the game and the rules.

Board

The game consists of a board that is 10 cells across and 20 cells high as shown below.

Tetris boardFigure 1 : The Tetris board is a 10 x 20 grid of cells.

Tetris Pieces (a.k.a tetrominoes)

In Tetris, blocks fall from the top of the board vertically down in chunks of 4. The chunks are called tetrominoes but in this post we will simply call them “tetris pieces.”

In Figure 1, we can see a number of Tetris pieces on the bottom of the board (each colored differently), and one blue colored line piece falling down.

There are seven different kinds of pieces. We denote them using the letters “O”, “I”, “S”, “Z”, “L”, “J”, and “T” in our code.

Tetris pieces

Figure 2 : Tetris Pieces

Keyboard Controls

The tetris pieces fall from the top to the bottom of the board.

You can move the pieces by pressing keyboard control keys. In this post, we are using the following keys for controlling the motion of the pieces

  1. Pressing A moves the piece left
  2. D moves the piece right
  3. J rotates piece left
  4. L rotates piece right
  5. I holds the current piece for future use
  6. S moves the piece down by 1 cell. This is also called “soft drop.”
  7. W drops the piece vertically down to the lowest possible cell. It is also called “hard drop.”

Rules of the game

If you make all the cells in one row full by moving and placing the pieces intelligently as they fall, the full line clears and you get points based on how many lines you clear.

If a single line is cleared by an action, you receive 40 points. If two lines are cleared in one shot, you receive 100 points, and if three lines are cleared you get 300 points.

Figure 3: Tetris line clear example. On the left, four lines are completely full because of the blue block in the rightmost column. This state changes to the one shown on the right, giving the user a TETRIS, or 1200 points.

Clearing 4 lines in a single shot gets you 1200 points! This is called a TETRIS and is the highest score you can get in one shot. This is shown in Figure 3.

Your objective is to score as many points as possible before the pile of tetris pieces grows too high.

Creating Tetris using OpenCV and numpy

Let’s see how we can use OpenCV’s drawing functions and keyboard handler along with numpy to create the game of Tetris.

First, we will import some standard libraries.

import cv2
import numpy as np
from random import choice

Now, we will make a board, initialize some other variables, and define the parameter SPEED to be the speed at which the tetris pieces fall.

SPEED = 1 # Controls the speed of the tetris pieces

# Make a board

board = np.uint8(np.zeros([20, 10, 3]))

# Initialize some variables

quit = False
place = False
drop = False
switch = False
held_piece = ""
flag = 0
score = 0

Tetris pieces can have one of seven different shapes.

Seven Kinds of Tetris Pieces

# All the tetris pieces
next_piece = choice(["O", "I", "S", "Z", "L", "J", "T"])

A new Tetris piece always appears in a specific location on the screen.

Next, we will write a function that

  1. Creates a tetris piece
  2. Assigns a color to the piece.

Below we have a function that gets the spawn location and color of a given tetris piece.

def get_info(piece):
    if piece == "I":
        coords = np.array([[0, 3], [0, 4], [0, 5], [0, 6]])
        color = [255, 155, 15]
    elif piece == "T":
        coords = np.array([[1, 3], [1, 4], [1, 5], [0, 4]])
        color = [138, 41, 175]
    elif piece == "L":
        coords = np.array([[1, 3], [1, 4], [1, 5], [0, 5]])
        color = [2, 91, 227]
    elif piece == "J":
        coords = np.array([[1, 3], [1, 4], [1, 5], [0, 3]])
        color = [198, 65, 33]
    elif piece == "S":
        coords = np.array([[1, 5], [1, 4], [0, 3], [0, 4]])
        color = [55, 15, 215]
    elif piece == "Z":
        coords = np.array([[1, 3], [1, 4], [0, 4], [0, 5]])
        color = [1, 177, 89]
    else:
        coords = np.array([[0, 4], [0, 5], [1, 4], [1, 5]])
        color = [2, 159, 227]
    
    return coords, color

Display board

Let’s now write a function for displaying the board and capturing keyboard events.

def display(board, coords, color, next_info, held_info, score, SPEED):
    # Generates the display
    
    border = np.uint8(127 - np.zeros([20, 1, 3]))
    border_ = np.uint8(127 - np.zeros([1, 34, 3]))
    
    dummy = board.copy()
    dummy[coords[:,0], coords[:,1]] = color
    
    right = np.uint8(np.zeros([20, 10, 3]))
    right[next_info[0][:,0] + 2, next_info[0][:,1]] = next_info[1]
    left = np.uint8(np.zeros([20, 10, 3]))
    left[held_info[0][:,0] + 2, held_info[0][:,1]] = held_info[1]
    
    dummy = np.concatenate((border, left, border, dummy, border, right, border), 1)
    dummy = np.concatenate((border_, dummy, border_), 0)
    dummy = dummy.repeat(20, 0).repeat(20, 1)
    dummy = cv2.putText(dummy, str(score), (520, 200), cv2.FONT_HERSHEY_DUPLEX, 1, [0, 0, 255], 2)
    
    # Instructions for the player
    
    dummy = cv2.putText(dummy, "A - move left", (45, 200), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "D - move right", (45, 225), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "S - move down", (45, 250), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "W - hard drop", (45, 275), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "J - rotate left", (45, 300), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "L - rotate right", (45, 325), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    dummy = cv2.putText(dummy, "I - hold", (45, 350), cv2.FONT_HERSHEY_DUPLEX, 0.6, [0, 0, 255])
    
    cv2.imshow("Tetris", dummy)
    key = cv2.waitKey(int(1000/SPEED))
    
    return key

Main Loop

This is the main part of the code. We have a while loop where at every iteration we place a new piece in the game.

In Tetris, you can press a certain key to hold a piece. A piece that is held in this way is available to be used in the future by swapping it with the current piece.

In the code below, we first check if the user wants to swap the current piece with the held piece using the switch variable.

if __name__ == "__main__":
    while not quit:
        # Check if user wants to swap held and current pieces
        if switch:
           # swap held_piece and current_piece
            held_piece, current_piece = current_piece, held_piece
            switch = False

If the switch variable is set to false, we assign the current_piece to the next_piece and randomly choose a new next_piece.

else:
    # Generates the next piece and updates the current piece
    current_piece = next_piece
    next_piece = choice(["I", "T", "L", "J", "Z", "S", "O"])

if flag > 0:
    flag -= 1

Next, we determine the color and position of the current_piece, next_piece, and the held_piece.

# Determines the color and position of the current, next, and held pieces
if held_piece == "":
    held_info = np.array([[0, 0]]), [0, 0, 0]
else:
   held_info = get_info(held_piece)

next_info = get_info(next_piece)

coords, color = get_info(current_piece)
if current_piece == "I":
    top_left = [-2, 3]

This if statement checks if the game needs to be terminated (i.e., the tetris pieces have stacked too high). We do this by checking whether the next piece's spawn location overlaps with an existing piece.

if not np.all(board[coords[:,0], coords[:,1]] == 0):
    break

Next, we add another while loop inside of the main one. Each iteration of this new loop corresponds to the piece moving down by one block.

First, we show the board using our display() function and receive the keyboard input.

We also make a copy of the original position.

while True:
    # Shows the board and gets the key press
    key = display(board, coords, color, next_info, held_info, score, SPEED)
    # Create a copy of the position
    dummy = coords.copy()

The key variable above stores the ASCII code for the pressed keyboard entry. Depending on which key was pressed, we take different actions.

The a and d keys control the left and right movement of the piece.

if key == ord("a"):
    # Moves the piece left if it isn't against the left wall
    if np.min(coords[:,1]) > 0:
        coords[:,1] -= 1
    if current_piece == "I":
        top_left[1] -= 1
elif key == ord("d"):
    # Moves the piece right if it isn't against the right wall
    if np.max(coords[:,1]) < 9:
        coords[:,1] += 1
        if current_piece == "I":
            top_left[1] += 1

The keys j and l are used to rotate the pieces.

To code rotation, we have three kinds of pieces to deal with – the square piece, the line piece, and all others.

For the square piece, rotating is simple; you don’t do anything!

For anything that’s not a square piece, we can inscribe the piece in a square, which we can rotate.

The line piece is inscribed in a 4×4 square and not a 3×3 square, and therefore we need to treat it differently.

elif key == ord("j") or key == ord("l"):
    # Rotation mechanism
    # arr is the array of nearby points which get rotated and pov is the indexes of the blocks within arr
    
    if current_piece != "I" and current_piece != "O":
        if coords[1,1] > 0 and coords[1,1] < 9:
            arr = coords[1] - 1 + np.array([[[x, y] for y in range(3)] for x in range(3)])
            pov = coords - coords[1] + 1
            
    elif current_piece == "I":
        # The straight piece has a 4x4 array, so it needs separate code
        
        arr = top_left + np.array([[[x, y] for y in range(4)] for x in range(4)])
        pov = np.array([np.where(np.logical_and(arr[:,:,0] == pos[0], arr[:,:,1] == pos[1])) for pos in coords])
        pov = np.array([k[0] for k in np.swapaxes(pov, 1, 2)])
    
    # Rotates the array and repositions the piece to where it is now
    
    if current_piece != "O":
        if key == ord("j"):
            arr = np.rot90(arr, -1)
        else:
            arr = np.rot90(arr)
        coords = arr[pov[:,0], pov[:,1]]

Lastly, we will handle the w, i, DELETE, and ESC keys.

Pressing w implements a hard drop. Pressing i holds the piece.

The DELETE and ESC keys end the program.

elif key == ord("w"):
    # Hard drop set to true
    drop = True
elif key == ord("i"):
    # Goes out of the loop and tells the program to switch held and current pieces
    if flag == 0:
        if held_piece == "":
            held_piece = current_piece
        else:
            switch = True
        flag = 2
        break
elif key == 8 or key == 27:
    quit = True
    break

We next need to check for collisions with other pieces and prevent the piece from moving into or rotating into another piece.

If such a collision occurs, we change the new position back to the original one using our copy of the coords stored in the dummy variable.

# Checks if the piece is overlapping with other pieces or if it's outside the board, and if so, changes the position to the position before anything happened
            
if np.max(coords[:,0]) < 20 and np.min(coords[:,0]) >= 0:
    if not (current_piece == "I" and (np.max(coords[:,1]) >= 10 or np.min(coords[:,1]) < 0)):
        if not np.all(board[coords[:,0], coords[:,1]] == 0):
            coords = dummy.copy()
    else:
        coords = dummy.copy()
else:
    coords = dummy.copy()

Finally, we code the “hard drop.” We use a while loop to check if the piece can move one step down, and stop moving down if it collides with an existing piece or reaches the bottom of the board.

if drop:
    # Every iteration of the loop moves the piece down by 1 and if the piece is resting on the ground or another piece, then it stops and places it
    
    while not place:
        if np.max(coords[:,0]) != 19:
            # Checks if the piece is resting on something
            for pos in coords:
                if not np.array_equal(board[pos[0] + 1, pos[1]], [0, 0, 0]):
                    place = True
                    break
        else:
            # If the position of the piece is at the ground level, then it places
            place = True
        
        if place:
            break
        
        # Keeps going down and checking when the piece needs to be placed
        
        coords[:,0] += 1
        score += 1
        if current_piece == "I":
            top_left[0] += 1
            
    drop = False

If we don’t hard drop, then we just need to check if the piece needs to be placed (i.e. stop moving). A piece is placed when the piece either reaches the bottom of the board or hits another piece.

If none of the above cases apply, we move the piece down by one.

else:
    # Checks if the piece needs to be placed
    if np.max(coords[:,0]) != 19:
        for pos in coords:
            if not np.array_equal(board[pos[0] + 1, pos[1]], [0, 0, 0]):
                place = True
                break
    else:
        place = True
    
if place:
    # Places the piece where it is on the board
    for pos in coords:
        board[tuple(pos)] = color
        
    # Resets place to False
    place = False
    break

# Moves down by 1

coords[:,0] += 1
if key == ord("s"):
    score += 1
if current_piece == "I":
    top_left[0] += 1

Finally, for each iteration of the outer while loop (i.e., each time a piece is placed), we check if any lines were cleared and update the score.

# Clears lines and also counts how many lines have been cleared and updates the score
        
lines = 0
        
for line in range(20):
    if np.all([np.any(pos != 0) for pos in board[line]]):
        lines += 1
        board[1:line+1] = board[:line]
                
if lines == 1:
    score += 40
elif lines == 2:
    score += 100
elif lines == 3:
    score += 300
elif lines == 4:
    score += 1200

Link: https://learnopencv.com/tetris-with-opencv-python/

#python #opencv 

How to Create The Game Of Tetris using OpenCV & Python

How to Convert any Image into A 3D Image Videos using Python

This is a nice and fun Python project that converts any image into a 3D video. This video is part of the Python fun projects playlist.

The repository is at this link: https://github.com/feitgemel/3d-photo-inpainting

#opencv #python 

How to Convert any Image into A 3D Image Videos using Python

Build A Neural Network to Classify Fruits and Vegetables TensorFlow

This is one of my neural network projects. In this tutorial we will cover the full process of building a neural network model to classify objects in images. Follow this tutorial and you can learn how to build your very own object classification model with TensorFlow.

In this deep learning tutorial I will walk you through the process and the code needed to set up your own neural network and its layers, and give you the basic tools to build your own model.

#opencv #tensorflow #python 

Build A Neural Network to Classify Fruits and Vegetables TensorFlow