Rupert Beatty

Magic: Scanner for Decks Of Cards with Bar Codes Printed on Card Edges

The Nettle Magic Project

This deck of cards has a bar code printed on the edge of each card. Scanning these bar codes reveals where every card is (or isn't, if cards are missing).

Think card magic.

A deck of cards with digital marks printed on the edge of each card.

This wouldn't be a very good magic trick if you could see the marks. We need invisible marks.

One of these decks is unmarked; the other is marked with a special ink that is only visible under specific IR conditions.

Two decks of cards - each viewed from the same end. Both decks appear normal.

This device (a Raspberry Pi Zero W with a NoIR camera) can see these marks. The shiny circle is a special IR filter.

A scanning server runs on this small device.

A small computer module, about the size of a thumb, with a small camera attached. The lens of the camera is covered with what looks like a small round mirror.

This is Abra, the iOS client application running on my iPad. It shows what the server's camera sees along with the decoded deck. As you can see, the IR marks are quite visible to the camera.

A screenshot of an app containing an array of playing cards in suit and numerical order, with a black-and-white image of a deck of playing cards with edge-marks clearly visible.

Your iDevices can also act as servers, but they can't see those infrared marks, even with special filters. However, they can see black ink marks and marks made using a different type of invisible ink: ultraviolet-fluorescing ink.

A deck of cards with marks on the edges of cards that are glowing brightly under the light of a UV pen light. Next to the deck is an iPad showing the deck from its camera's perspective.

For hard-core developers, I've included the testbed, which has a bunch of visualization tools for understanding how things work.

A screenshot of an app that shows a deck of cards in a viewport with marks outlined digitally, and various statistics listed below.

The testbed runs only on the Mac. The server, however, is a generic Linux console app that includes a text-based UI mode.

A text-based console app with an image of a deck of cards printed using alphanumeric characters. Statistics appear below this text-based viewport.

Performance is critical.

The statistical model requires a full 30 Hz of data. Also, the device may be strapped to a performer's body during a show, so efficiency means longer battery life and less heat.

It can scan/decode a 1080p image to an ordered deck in as little as 4 ms. On a Raspberry Pi.

Get started

Full documentation is available here.

Download Details:

Author: Nettlep
Source Code: https://github.com/nettlep/magic 
License: BSD-3-Clause license

#swift #magic #computer #vision 

Royce Reinger

Vision: Datasets, Transforms and Models Specific to Computer Vision

Torchvision

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.

Installation

We recommend Anaconda as the Python package management system. Please refer to pytorch.org for details of the PyTorch (torch) installation. The following table lists the corresponding torchvision versions and supported Python versions.

torch            torchvision      python
main / nightly   main / nightly   >=3.7, <=3.10
1.13.0           0.14.0           >=3.7, <=3.10
1.12.0           0.13.0           >=3.7, <=3.10
1.11.0           0.12.0           >=3.7, <=3.10
1.10.2           0.11.3           >=3.6, <=3.9
1.10.1           0.11.2           >=3.6, <=3.9
1.10.0           0.11.1           >=3.6, <=3.9
1.9.1            0.10.1           >=3.6, <=3.9
1.9.0            0.10.0           >=3.6, <=3.9
1.8.2            0.9.2            >=3.6, <=3.9
1.8.1            0.9.1            >=3.6, <=3.9
1.8.0            0.9.0            >=3.6, <=3.9
1.7.1            0.8.2            >=3.6, <=3.9
1.7.0            0.8.1            >=3.6, <=3.8
1.7.0            0.8.0            >=3.6, <=3.8
1.6.0            0.7.0            >=3.6, <=3.8
1.5.1            0.6.1            >=3.5, <=3.8
1.5.0            0.6.0            >=3.5, <=3.8
1.4.0            0.5.0            ==2.7, >=3.5, <=3.8
1.3.1            0.4.2            ==2.7, >=3.5, <=3.7
1.3.0            0.4.1            ==2.7, >=3.5, <=3.7
1.2.0            0.4.0            ==2.7, >=3.5, <=3.7
1.1.0            0.3.0            ==2.7, >=3.5, <=3.7
<=1.0.1          0.2.2            ==2.7, >=3.5, <=3.7

Anaconda:

conda install torchvision -c pytorch

pip:

pip install torchvision

From source:

python setup.py install
# or, for OSX
# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

We don't officially support building from source using pip, but if you do, you'll need to use the --no-build-isolation flag. If building TorchVision from source fails, install the nightly version of PyTorch following the guide linked on the contributing page and retry the install.

By default, GPU support is built if CUDA is found and torch.cuda.is_available() is true. It's possible to force building GPU support by setting the FORCE_CUDA=1 environment variable, which is useful when building a Docker image.
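
To quickly verify which versions are installed and whether CUDA support is active, a minimal check (assuming torch and torchvision are already installed) looks like this:

import torch
import torchvision

# Print the installed versions and whether a CUDA-capable GPU is usable.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())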

Image Backend

Torchvision currently supports the following image backends:

  • Pillow (default)
  • Pillow-SIMD - a much faster drop-in replacement for Pillow with SIMD. If installed, it will be used as the default.
  • accimage - if installed, it can be activated by calling torchvision.set_image_backend('accimage') (see the sketch below)
  • libpng - can be installed via conda (conda install libpng) or any of the package managers for Debian-based and RHEL-based Linux distributions.
  • libjpeg - can be installed via conda (conda install jpeg) or any of the package managers for Debian-based and RHEL-based Linux distributions. libjpeg-turbo can be used as well.

Note: libpng and libjpeg must be present at compilation time for the corresponding features to be available. Make sure they are available in the standard library locations; otherwise, add the include and library paths to the TORCHVISION_INCLUDE and TORCHVISION_LIBRARY environment variables, respectively.
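
As a rough sketch, the image backend can also be switched at runtime, falling back to the default (Pillow or Pillow-SIMD) when accimage is not installed:

import torchvision

# Prefer accimage if it is installed; otherwise keep the default backend.
try:
    import accimage  # noqa: F401
    torchvision.set_image_backend('accimage')
except ImportError:
    pass

print(torchvision.get_image_backend())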

Video Backend

Torchvision currently supports the following video backends:

  • pyav (default) - Pythonic binding for ffmpeg libraries.
  • video_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any conflicting version of ffmpeg installed. Currently, this is only supported on Linux.
conda install -c conda-forge ffmpeg
python setup.py install
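
Once torchvision has been built from source with ffmpeg available, the video backend can be selected at runtime; a minimal sketch:

import torchvision

# Switch from the default pyav backend to the native video_reader backend.
torchvision.set_video_backend('video_reader')
print(torchvision.get_video_backend())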

Using the models on C++

TorchVision provides an example project for how to use the models on C++ using JIT Script.

Installation From source:

mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support if needed
cmake ..
make
make install

Once installed, the library can be accessed in cmake (after properly configuring CMAKE_PREFIX_PATH) via the TorchVision::TorchVision target:

find_package(TorchVision REQUIRED)
target_link_libraries(my-target PUBLIC TorchVision::TorchVision)

The TorchVision package will also automatically look for the Torch package and add it as a dependency to my-target, so make sure that it is also available to cmake via the CMAKE_PREFIX_PATH.

For an example setup, take a look at examples/cpp/hello_world.

Python linking is disabled by default when compiling TorchVision with CMake; this allows you to run models without any Python dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link to Python. This can be done by passing -DUSE_PYTHON=on to CMake.

TorchVision Operators

In order to get the torchvision operators registered with torch (e.g. for the JIT), all you need to do is ensure that you #include <torchvision/vision.h> in your project.

Documentation

You can find the API documentation on the pytorch website: https://pytorch.org/vision/stable/index.html

Contributing

See the CONTRIBUTING file for how to help out.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

Pre-trained Model License

The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See SWAG LICENSE for additional details.

Download Details:

Author: Pytorch
Source Code: https://github.com/pytorch/vision 
License: BSD-3-Clause license

#machinelearning #computer #vision #dataset 

Royce Reinger

Caffe: A Fast Open Framework for Deep Learning

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.

Custom distributions

Community

Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

Download Details:

Author: BVLC
Source Code: https://github.com/BVLC/caffe 
License: View license

#machinelearning #deeplearning #vision 

Rupert Beatty

iOS-11-by-Examples: Examples Of New iOS 11 APIs

iOS 11 by Examples 

Code examples for new APIs of iOS 11.

Note: The project requires Xcode 9, Swift 4 and iOS 11.

Core ML

Image classification demo using the Core ML framework. Shows a description of an object in a selected photo.

Thanks @hollance for his useful CoreMLHelpers.

Vision

  • Face detection. Detects all faces in a selected photo.

  • Face landmarks. An image analysis that finds facial features (such as the eyes and mouth) in an image.

  • Object tracking. Tracks any object using the camera.

ARKit

Augmented reality experiences in your app or game.

Drag and Drop

Easy way to move content.

drag-and-drop-example.gif

Core NFC

Reading of NFC tag payloads. Don't forget to enable NFC Tag Reading for the App ID on the Apple Developer site.

Note: select CoreNFC-Example scheme and run.

MapKit

A new map type, new annotation views, and clustering!

mapkit-example.gif

IdentityLookup

SMS and MMS filtering using the IdentityLookup framework. Don't forget to turn on the extension in Messages > Unknown & Spam > SMS filtering.

DeviceCheck

Identifying devices that have already taken advantage of a promotional offer that you provide, or flagging a device that you have determined to be fraudulent.

Note: select DeviceChecking scheme and run.

SpriteKit by mkowalski87

Attributed text for SKLabelNode and SKTransformNode.

sprite-kit-example.gif

Blogs/Newsletter

List of online sources which have mentioned iOS 11 by Examples:

Download Details:

Author: Artemnovichkov
Source Code: https://github.com/artemnovichkov/iOS-11-by-Examples 
License: MIT license

#swift #vision #xcode #ios 

Lawrence Lesch

JSFeat: JavaScript Computer Vision Library

jsfeat

JavaScript Computer Vision library

The project aims to explore the possibilities of JS/HTML5 using modern, state-of-the-art computer vision algorithms.

Features

  • Custom data structures
  • Basic image processing methods (grayscale, derivatives, box-blur, resample, etc.)
  • grayscale (Demo)
  • box blur (Demo)
  • gaussian blur (Demo)
  • equalize histogram (Demo)
  • canny edges (Demo)
  • sobel deriv (Demo)
  • scharr deriv (Demo)
  • find more at Examples and Documentation page
  • Linear Algebra module
  • LU (Gaussian elimination) solver
  • Cholesky solver
  • SVD decomposition, solver and pseudo-inverse
  • Eigen Vectors and Values
  • Multiview module (Demo)
  • Affine2D motion kernel
  • Homography2D motion kernel
  • RANSAC motion estimator
  • LMEDS motion estimator
  • Matrix Math module for various matrix operations such as transpose, multiply, etc.
  • Features 2D
  • Fast Corners feature detector (Demo)
  • YAPE06 feature detector (Demo)
  • YAPE feature detector (Demo)
  • ORB feature descriptor (Demo)
  • Lucas-Kanade optical flow (Demo - click to add points)
  • HAAR object detector (Demo)
  • BBF object detector (Demo)

Examples and Documentation

Download Details:

Author: inspirit
Source Code: https://github.com/inspirit/jsfeat 
License: MIT license

#javascript #computer #vision #library 

Royce Reinger

Deep Learning toolkit for Computer Vision

(Jan 2020) Luminoth is not maintained anymore. We recommend switching to Facebook's Detectron2, which implements more modern algorithms supporting additional use cases.

Luminoth is an open source toolkit for computer vision. Currently, we support object detection, but we are aiming for much more. It is built in Python, using TensorFlow and Sonnet.

Read the full documentation here.

Example of Object Detection with Faster R-CNN

DISCLAIMER: Luminoth is still an alpha-quality release, which means the internal and external interfaces (such as the command line) are very likely to change as the codebase matures.

Installation

Luminoth currently supports Python 2.7 and 3.4–3.6.

Pre-requisites

To use Luminoth, TensorFlow must be installed beforehand. If you want GPU support, you should install the GPU version of TensorFlow with pip install tensorflow-gpu; otherwise, you can use the CPU version with pip install tensorflow.

Installing Luminoth

Just install from PyPI:

pip install luminoth

Optionally, Luminoth can also install TensorFlow for you if you install it with pip install luminoth[tf] or pip install luminoth[tf-gpu], depending on the version of TensorFlow you wish to use.

Google Cloud

If you wish to train using Google Cloud ML Engine, the optional dependencies must be installed:

pip install luminoth[gcloud]

Installing from source

First, clone the repo on your machine and then install with pip:

git clone https://github.com/tryolabs/luminoth.git
cd luminoth
pip install -e .

Check that the installation worked

Simply run lumi --help.

Supported models

Currently, we support the following models:

We are planning on adding support for more models in the near future, such as RetinaNet and Mask R-CNN.

We also provide pre-trained checkpoints for the above models trained on popular datasets such as COCO and Pascal.

Usage

There is one main command line interface, which you can use with the lumi command. Whenever you are unsure how to do something, just type:

lumi --help or lumi <subcommand> --help

and a list of available options with descriptions will show up.
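
Besides the CLI, Luminoth exposes a small Python API. The following is a minimal sketch based on the documented high-level helpers (Detector, read_image, vis_objects); it assumes a checkpoint has already been downloaded and that an image named image.jpg exists:

from luminoth import Detector, read_image, vis_objects

# Load an image, run the default detector checkpoint, and save a visualization.
image = read_image('image.jpg')
detector = Detector()
objects = detector.predict(image)
print(objects)
vis_objects(image, objects).save('image-detected.jpg')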

Working with datasets

See Adapting a dataset.

Training

See Training your own model to learn how to train locally or in Google Cloud.

Visualizing results

We strive to get useful and understandable summary and graph visualizations. We consider them to be essential not only for monitoring (duh!), but for getting a broader understanding of what's going under the hood. The same way it is important for code to be understandable and easy to follow, the computation graph should be as well.

By default summary and graph logs are saved to jobs/ under the current directory. You can use TensorBoard by running:

tensorboard --logdir path/to/jobs

Why the name?

The Dark Visor is a Visor upgrade in Metroid Prime 2: Echoes. Designed by the Luminoth during the war, it was used by the Champion of Aether, A-Kul, to penetrate Dark Aether's haze in battle against the Ing.

-- Dark Visor - Wikitroid

Author: Tryolabs
Source Code: https://github.com/tryolabs/luminoth 
License: BSD-3-Clause License

#python #machine-learning #vision 

Royce Reinger

Finding Duplicate Images Made Easy!

Image Deduplicator (imagededup)

imagededup is a python package that simplifies the task of finding exact and near duplicates in an image collection.

This package provides functionality to make use of hashing algorithms that are particularly good at finding exact duplicates as well as convolutional neural networks which are also adept at finding near duplicates. An evaluation framework is also provided to judge the quality of deduplication for a given dataset.

Detailed documentation of the functionality provided by the package can be found at: https://idealo.github.io/imagededup/

imagededup is compatible with Python 3.6+ and runs on Linux, macOS, and Windows. It is distributed under the Apache 2.0 license.

📖 Contents

⚙️ Installation

There are two ways to install imagededup:

  • Install imagededup from PyPI (recommended):
pip install imagededup

⚠️ Note: TensorFlow >=2.1 and TensorFlow 1.15 include GPU support by default. Before that, the CPU and GPU packages were separate. If you have GPUs, you should install the TensorFlow version with GPU support, especially when you use the CNN method to find duplicates; it's much faster. See the TensorFlow guide for more details on how to install it for older versions of TensorFlow.

  • Install imagededup from the GitHub source:
git clone https://github.com/idealo/imagededup.git
cd imagededup
pip install "cython>=0.29"
python setup.py install

🚀 Quick Start

In order to find duplicates in an image directory using perceptual hashing, the following workflow can be used:

  • Import perceptual hashing method
from imagededup.methods import PHash
phasher = PHash()
  • Generate encodings for all images in an image directory
encodings = phasher.encode_images(image_dir='path/to/image/directory')
  • Find duplicates using the generated encodings
duplicates = phasher.find_duplicates(encoding_map=encodings)
  • Plot duplicates obtained for a given file (e.g. 'ukbench00120.jpg') using the duplicates dictionary
from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory',
                duplicate_map=duplicates,
                filename='ukbench00120.jpg')

The output looks as below:

The complete code for the workflow is:

from imagededup.methods import PHash
phasher = PHash()

# Generate encodings for all images in an image directory
encodings = phasher.encode_images(image_dir='path/to/image/directory')

# Find duplicates using the generated encodings
duplicates = phasher.find_duplicates(encoding_map=encodings)

# plot duplicates obtained for a given file using the duplicates dictionary
from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory',
                duplicate_map=duplicates,
                filename='ukbench00120.jpg')

For more examples, refer this part of the repository.

For more detailed usage of the package functionality, refer: https://idealo.github.io/imagededup/

⏳ Benchmarks

Detailed benchmarks on speed and classification metrics for the different methods are provided in the documentation. Generally speaking, the following conclusions can be drawn:

  • CNN works best for near duplicates and datasets containing transformations (see the sketch below).
  • All deduplication methods fare well on datasets containing exact duplicates, but Difference hashing is the fastest.
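
For the near-duplicate case above, the CNN method follows the same encode/find workflow shown in the Quick Start; a minimal sketch (the min_similarity_threshold value here is illustrative):

from imagededup.methods import CNN

cnn_encoder = CNN()

# Generate CNN embeddings for all images in the directory.
encodings = cnn_encoder.encode_images(image_dir='path/to/image/directory')

# Find near duplicates; a higher threshold makes matching stricter.
duplicates = cnn_encoder.find_duplicates(encoding_map=encodings,
                                         min_similarity_threshold=0.9)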

🤝 Contribute

We welcome all kinds of contributions. See the Contribution guide for more details.

📝 Citation

Please cite Imagededup in your publications if this is useful for your research. Here is an example BibTeX entry:

@misc{idealods2019imagededup,
  title={Imagededup},
  author={Tanuj Jain and Christopher Lennan and Zubin John and Dat Tran},
  year={2019},
  howpublished={\url{https://github.com/idealo/imagededup}},
}

🏗 Maintainers

Author: Idealo
Source Code: https://github.com/idealo/imagededup 
License: Apache-2.0 License

#python #hash #vision #tensorflow


GluonCV provides implementations of the state-of-the-art (SOTA)

Gluon CV Toolkit

GluonCV provides implementations of the state-of-the-art (SOTA) deep learning models in computer vision.

It is designed for engineers, researchers, and students to rapidly prototype products and research ideas based on these models. This toolkit offers the following main features:

  1. Training scripts to reproduce SOTA results reported in research papers
  2. Supports both PyTorch and MXNet
  3. A large number of pre-trained models
  4. Carefully designed APIs that greatly reduce the implementation complexity
  5. Community support

Demo


 

Check the HD video at Youtube or Bilibili.

Supported Applications

  • Image Classification (recognize an object in an image): 50+ models, including ResNet, MobileNet, DenseNet, VGG, ...
  • Object Detection (detect multiple objects with their bounding boxes in an image): Faster RCNN, SSD, Yolo-v3
  • Semantic Segmentation (associate each pixel of an image with a categorical label): FCN, PSP, ICNet, DeepLab-v3, DeepLab-v3+, DANet, FastSCNN
  • Instance Segmentation (detect objects and associate each pixel inside the object area with an instance label): Mask RCNN
  • Pose Estimation (detect human pose from images): Simple Pose
  • Video Action Recognition (recognize human actions in a video): MXNet: TSN, C3D, I3D, I3D_slow, P3D, R3D, R2+1D, Non-local, SlowFast; PyTorch: TSN, I3D, I3D_slow, R2+1D, Non-local, CSN, SlowFast, TPN
  • Depth Prediction (predict a depth map from images): Monodepth2
  • GAN (generate visually deceptive images): WGAN, CycleGAN, StyleGAN
  • Person Re-ID (re-identify pedestrians across scenes): Market1501 baseline

Installation

GluonCV is built on top of MXNet and PyTorch. Depending on the individual model implementation (check the model zoo for the complete list), you will need to install at least one of the two deep learning frameworks. Of course, you can always install both for the best coverage.

Please also check installation guide for a comprehensive guide to help you choose the right installation command for your environment.

Installation (MXNet)

GluonCV supports Python 3.6 or later. The easiest way to install is via pip.

Stable Release

The following commands install the stable version of GluonCV and MXNet:

pip install gluoncv --upgrade
# native
pip install -U --pre mxnet -f https://dist.mxnet.io/python/mkl
# cuda 10.2
pip install -U --pre mxnet -f https://dist.mxnet.io/python/cu102mkl

The latest stable version of GluonCV is 0.8, and we recommend MXNet 1.6.0/1.7.0.

Nightly Release

You may get access to the latest features and bug fixes with the following commands, which install the nightly builds of GluonCV and MXNet:

pip install gluoncv --pre --upgrade
# native
pip install -U --pre mxnet -f https://dist.mxnet.io/python/mkl
# cuda 10.2
pip install -U --pre mxnet -f https://dist.mxnet.io/python/cu102mkl

There are multiple versions of MXNet pre-built package available. Please refer to mxnet packages if you need more details about MXNet versions.

Installation (PyTorch)

GluonCV supports Python 3.6 or later. The easiest way to install is via pip.

Stable Release

The following commands install the stable version of GluonCV and PyTorch:

pip install gluoncv --upgrade
# native
pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
# cuda 10.2
pip install torch==1.6.0 torchvision==0.7.0

There are multiple versions of PyTorch pre-built package available. Please refer to PyTorch if you need other versions.

The latest stable version of GluonCV is 0.8, and we recommend PyTorch 1.6.0.

Nightly Release

You may get access to the latest features and bug fixes with the following commands, which install the nightly build of GluonCV:

pip install gluoncv --pre --upgrade
# native
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
# cuda 10.2
pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
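
Once installed, a quick sanity check is to load a pre-trained model from the model zoo. A minimal sketch for the MXNet backend (the model name and dummy input are illustrative):

import mxnet as mx
from gluoncv import model_zoo

# Load a pre-trained ImageNet classifier and run it on a dummy input.
net = model_zoo.get_model('resnet50_v1b', pretrained=True)
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
pred = net(x)
print(pred.shape)  # (1, 1000) class scores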

Docs 📖

GluonCV documentation is available at our website.

Examples

All tutorials are available at our website!

Image Classification

Object Detection

Semantic Segmentation

Instance Segmentation

Video Action Recognition

Depth Prediction

Generative Adversarial Network

Person Re-identification

Resources

Check out how to use GluonCV for your own research or projects.

Citation

If you feel our code or models help your research, kindly cite our papers:

@article{gluoncvnlp2020,
  author  = {Jian Guo and He He and Tong He and Leonard Lausen and Mu Li and Haibin Lin and Xingjian Shi and Chenguang Wang and Junyuan Xie and Sheng Zha and Aston Zhang and Hang Zhang and Zhi Zhang and Zhongyue Zhang and Shuai Zheng and Yi Zhu},
  title   = {GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {23},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v21/19-429.html}
}

@article{he2018bag,
  title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
  author={He, Tong and Zhang, Zhi and Zhang, Hang and Zhang, Zhongyue and Xie, Junyuan and Li, Mu},
  journal={arXiv preprint arXiv:1812.01187},
  year={2018}
}

@article{zhang2019bag,
  title={Bag of Freebies for Training Object Detection Neural Networks},
  author={Zhang, Zhi and He, Tong and Zhang, Hang and Zhang, Zhongyue and Xie, Junyuan and Li, Mu},
  journal={arXiv preprint arXiv:1902.04103},
  year={2019}
}

@article{zhang2020resnest,
  title={ResNeSt: Split-Attention Networks},
  author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
  journal={arXiv preprint arXiv:2004.08955},
  year={2020}
}

Author: DMLC
Source Code: https://github.com/dmlc/gluon-cv 
License: Apache-2.0 License

#machine-learning #vision #python 


Our CEO, Sanjay Ghinaiya Has Pioneered Way to Digitization: GoodFirms

Recently, our CEO, Mr. Sanjay Ghinaiya, was interviewed by GoodFirms and talked about the success of World Web Technology Pvt. Ltd. He also shared his #businessjourney, his research on what makes our #marketing work, his strategic #vision of what we are preparing to become, and how he has pioneered the way to a new era of #Digitization.

Check here ➨ https://www.worldwebtechnology.com/blog/World-Web-Technology%E2%80%99s-CEO-Sanjay-Ghinaiya-Has-Pioneered-the-Way-to-a-New-Era-of-Digitization-GoodFirms/

#worldwebtechnology 

Dominic Feeney

Computer Vision Using TensorFlow Keras - Analytics India Magazine

Computer Vision attempts to perform the tasks that a human brain does with the aid of human eyes. Computer Vision is a branch of Deep Learning that deals with images and videos. Computer Vision tasks can be roughly classified into two categories:

  1. Discriminative tasks
  2. Generative tasks

Discriminative tasks, in general, are about predicting the probability of occurrence (e.g. the class of an image) given a probability distribution (e.g. the features of an image). Generative tasks, in general, are about generating the probability distribution (e.g. generating an image) given the probability of occurrence (e.g. the class of an image) and/or other conditions.

Discriminative Computer Vision finds applications in image classification, object detection, object recognition, shape detection, pose estimation, image segmentation, etc. Generative Computer Vision finds applications in photo enhancement, image synthesis, augmentation, deepfake videos, etc.

This article aims to give a strong foundation to Computer Vision by exploring image classification tasks using Convolutional Neural Networks built with TensorFlow Keras. Equal importance has been given to the coding part and to the key concepts of theory and math behind each operation. Let's start our Computer Vision journey!
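
As a preview of where the article is headed, here is a minimal Keras sketch of an image classifier for Fashion-MNIST (layer sizes and the epoch count are illustrative choices, not the article's exact model):

from tensorflow import keras

# Load Fashion-MNIST, add a channel dimension, and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional classifier.
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))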

Readers are expected to have a basic understanding of deep learning. This article, “Getting Started With Deep Learning Using TensorFlow Keras”, helps one grasp the fundamentals of deep learning.

#developers corner #computer vision #fashion mnist #image #image classification #keras #tensorflow #vision

Paula Hall

Customizing Pandas-Profiling Summaries

Exploiting the visions typesystem for fun and profit

If you’ve previously used pandas-profiling, you might have observed that column summaries are unique to the data types of each feature in your data. However, until recently it wasn’t possible to customize those summaries, so, if you wanted to automatically compute the average surface area of a sequence of shapely geometries, or the set of domain names in a sequence of email addresses, you were out of luck — until now.

The recently completed migration of pandas-profiling to the visions type system brings fully customizable type detection logic and summary algorithms, and is the first step towards end-to-end report customization, including customized renderings. Over the rest of this blog post, I'm going to show you how to get started with visions and easily customize your data summaries using pandas-profiling.
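
For context, generating a stock report takes only a few lines; the customization described in this post hooks into the summary step of this pipeline (the toy DataFrame below is illustrative):

import pandas as pd
from pandas_profiling import ProfileReport

# A toy frame; in practice this would be your own dataset.
df = pd.DataFrame({
    "email": ["alice@example.com", "bob@example.org"],
    "visits": [3, 7],
})

profile = ProfileReport(df, title="Example report")
profile.to_file("report.html")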

#eda #dylan-profiler #data-science #pandas-profiling #vision

Crypto Like

What is APY Vision (VISION) | What is VISION token

What Is APY.Vision (VISION)?

APY.Vision is an analytics platform that provides clarity for liquidity providers contributing capital on Automated Market Making (AMM) protocols. Innovations in blockchain technology and Decentralized Finance (DeFi) have opened the gates to allow anyone, with any amount of spare capital, to contribute liquidity to markets and earn a fee from doing so.

We are a tool that tracks impermanent losses of a user’s pooled tokens and keeps track of the user’s financial analytics. In addition, we provide historical pool performance and actionable insights for liquidity providers.

VISION is the membership token that is used for accessing the PRO edition of the tool. We provide our PRO members with additional analytics. Furthermore, token holders can vote on new features to determine the roadmap of the product. In the future, when we expand to other DeFi verticals such as decentralized options and derivatives, VISION holders can gain access to those analytics modules.

We believe the future is DeFi, and we want to build the best tools and provide the best analytics to this new breed of investors.

How Many VISION tokens Are There in Circulation?

VISION launched the membership token on Nov 15, 2020. The max supply of the token is 5,000,000; 15% of the tokens are reserved for the foundation, 4% of the supply is earmarked for marketing and promotions, and 1% is reserved for giving back to the ecosystem.

Where Can I Buy APY.Vision Membership Tokens (VISION)?

You can acquire the membership tokens on our bonding curve or on Uniswap under the VISION/ETH pair.


How APY Vision gives you 20/20 vision

Having been LPs ourselves, we experienced firsthand the lack of visibility into an LP's holdings, profits, and losses. We decided to solve this problem for ourselves by creating a tool that tracks impermanent gains and losses of your pooled tokens, giving you all the analytics you need at your fingertips, with actionable insights to ensure you're in the best pools.

We truly believe AMMs are here to stay and we want to enable anyone, anywhere, who wants to be an LP to have the best information and knowledge they need to become successful in this fast moving, high stakes game of market making.

Beat the rest to be the best

We believe in a democratized world — after all, that’s why the ethos of blockchain appeals to us first and foremost. With that being said, the huge amount of work we’re doing needs support to continue to provide value to all our users. Our aim is that APY Vision will always be a free tool. For the more advanced LPs however, who require additional insights into the pools they are providing liquidity for, we provide a pro offering that unlocks additional features to give you a leg up over everyone else.

Our pro offering will enable:

  • Real-time price quotes (free members get refreshed quotes every hour)
  • Remembering previous addresses
  • Grouping wallet addresses into one single account view
  • Expedited query speeds (your queries will be prioritized)
  • Viewing historical gain/losses (free members can only see current liquidity pool positions) *
  • Tracking Total APY and returns with farming rewards included (a common use case for LPs that farm with staking contracts) *
  • Pool Insights advanced search (min 2000 VISION tokens)
  • Dark mode option
  • Daily summary emails *
  • Additional AMMs *
  • Vote for new features (min 2000 VISION tokens) *
  • Dedicated #gold channel on Discord (min 2000 VISION tokens) *

*Features will be released in subsequent releases

(At launch, we will be supporting a few of these features but we are working hard on rolling all the pro features out!)

Become a pro, hold a token bro

We’ve been inspired by the innovative products being born in the DeFi space and have modeled our pro membership on these projects. To become a pro member and unlock pro features, hold our membership tokens in your wallet.

Normally, a subscription service costs the same regardless of your level of usage.

However, with blockchain technology, we can be a bit more creative and innovative to ensure fair access for all.

To activate our pro features, you only need to hold 100 VISION membership tokens per $10,000 of USD tracked in your wallet(s). This ensures that people who are not big portfolio holders can benefit by holding just a small amount of VISION tokens in their wallet. As you provide more liquidity, you can add more VISION tokens to your wallet to activate the pro features — it’s that simple!

Tokens — not that big of a deal around here

First and foremost, we’d like to stress that the VISION tokens are not a security token. The token is designed to not hold value and does not have any inherent value. It is merely a way to unlock subscription access to our pro features. It is not meant to be speculated on. We are not an ICO or claim to return you any gains by acquiring the VISION token. This is simply a membership token and not an asset.

We will be launching our membership token on a bonding curve. A bonding curve contract is one where each subsequent token acquired costs more than the last. The initial cost of a VISION token is 0.0005 ETH, which means it will cost 0.05 ETH to track $10,000 USD worth in a portfolio (for life).
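
To make those numbers concrete, here is a small illustrative calculation (the 100-tokens-per-$10,000 ratio and the 0.0005 ETH starting price come from this post; the helper names are ours):

# Membership requirement: 100 VISION per $10,000 USD tracked.
def vision_needed(portfolio_usd):
    return portfolio_usd / 10_000 * 100

# Cost at the initial bonding-curve price of 0.0005 ETH per VISION.
def initial_eth_cost(tokens, price_eth_per_token=0.0005):
    return tokens * price_eth_per_token

tokens = vision_needed(10_000)
print(tokens, initial_eth_cost(tokens))  # 100.0 tokens, 0.05 ETH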

While we are working on delivering all the pro features, we want to enable our community to start supporting the project by being an early adopter. Thus, the cost of 0.0005 ETH per VISION token will stay that way until 250000 VISION tokens have been distributed.

Early bird gets the worm — initial phase

To ensure that there is product-market fit for APY Vision, there is an option to exchange VISION tokens back to ETH in the bonding curve contract at the beginning, until the 250,000th VISION token. In this phase, users can exchange VISION back to ETH at 100% of the price they originally paid (0.0005 ETH per VISION).

This ensures that if the project doesn’t gain any traction, early users can get their ETH back. That’s because we’re that committed to providing value to our community.

Also important to note is that in this phase, the foundation cannot sell tokens to the curve (in addition to the vesting terms below).

Normal Exchange Phase

After the 250,000th VISION token has been exchanged, the token will be sold on the curve at the current price. A few days after the initial phase, we will be adding a Uniswap pool so that existing token holders can sell on the Uniswap pool and new users can choose to either buy or sell on the bonding curve.


You get a token, everyone gets a token

There is a maximum cap of 5,000,000 VISION tokens.

The breakdown:

  • 7.5% (to the initial founding team, subject to a 36-month vesting period with 1/36 of the amount vesting each month)
  • 7.5% going to a fund for contributors (subject to vesting)
  • 4% marketing / promotions / giveaways
  • 1% public goods projects
  • 80% bonding curve token contract

The master plan

At the heart of it, we’re nerds. We want to provide awesome tooling and analytics, especially since the tooling piece is sorely missing for Liquidity Providers today. That will always guide what we do.

The next phase of the Liquidity Network will be to enable monitoring and alerts to ensure Liquidity Providers can take action if there are any sudden pool movements.

Once we perfect the analytics and monitoring pieces, we want to enable a way for Liquidity Providers to automatically enter/exit liquidity pools based on alerts and parameters they set up. This will be done via smart contract wallets that only the users have access to and it will be non-custodial (because we don’t want to touch your funds with a nine foot pole, even if you paid us).

We will also be licensing our API for enterprise use. To access the API on a commercial basis, companies will need to pay for a monthly/yearly plan (in VISION tokens) and the tokens collected will be burned.

Finally, because Liquidity Provider tokens are currently held in a wallet (and not doing much), we will be looking at ways in which we can leverage them. Imagine being able to collateralize your LP tokens and borrow/lend against it to magnify your gains. Rest assured our valuable community members (you) will be able to vote on the final product.

Update — the bonding curve contract is LIVE

You can view the bonding curve contact here:

Contract: https://etherscan.io/address/0xf406f7a9046793267bc276908778b29563323996#code

Token Exchange Website:

https://curve.apy.vision/#/

Please do not acquire more than what you need. This is a membership token and it is inherently worthless. It costs 100 VISION tokens to track $10,000 USD worth. If you are unsatisfied you can return the VISION token for 100% of the ETH when less than 250000 VISION tokens have been sold. The contract has not been audited, so please use at your own risk.

FAQ (or the questions you’re too scared to ask)

Is the token bonding curve contract audited?

No, the contract has not been audited — please use it at your own risk. We will not be held responsible or liable for any losses that occur as a result of the contract. We did, however, base our contract off well-known and audited contracts and tweaked the parameters to our use.

Where is the contract address?

The contract address will be released in a subsequent blog post along with step by step instructions for acquiring the VISION tokens.

If I don’t like the service during the initial phase, can I cancel at any time?

You’ll really hurt our feelings but yes! You can simply exchange the VISION tokens back to ETH in the initial phase (where there are less than 250,000 VISION tokens sold). In that case, you get 100% back of the initial exchange rate. After the initial phase, you can sell it back on Uniswap after we create the pool.


Looking for more information…

☞ Website
☞ Explorer
☞ Source Code
☞ Social Channel
☞ Message Board
☞ Coinmarketcap

Thanks for visiting and reading this article! I highly appreciate it! Please share if you liked it!

#bitcoin #crypto #apy vision #vision

Hollie Ratke

Automatically Pixelate Faces on iOS using Native Swift Code for Face Detection

I recently came across an excellent article from Signal where they introduced a new feature that gives users the ability to automatically blur faces—incredibly useful in a time when protestors and demonstrators need to communicate while protecting their identities. In the article, Signal also hinted at the technologies they're using, which are strictly platform-level libraries. For iOS, I would guess they have used Vision, an API made by Apple to perform a variety of image and video processing tasks. In this article, I'll use Apple's native library to create an iOS application that will pixelate faces on any given image.

Overview:

  1. Why the built-in/on-device solution
  2. Create Face Detection helpers
  3. Build the iOS application
  4. Results
  5. Conclusion

This is a look at the final result:

(Image: the final result)

I have included code in this article where it’s most instructive. Full code and data can be found on my GitHub page. Let’s get started.

Why the built-in/on-device solution

  • On-device: The most powerful argument for on-device solutions is latency—the whole process is performed on the phone and doesn't necessitate communication with an external/remote API. There is also the argument of privacy: since everything happens on-device, no data is transferred from the phone to a remote server. A cloud-based API can be risky in terms of an entity being in the middle of the communication, or the service provider being able to literally store images for reasons other than the advertised intent.
  • Built-in: There are many ways to use/create on-device models that can be, in some cases, better than Apple's built-in solution. Google's ML Kit provides an on-device solution on iOS for Face Detection (in my experience, similar to Apple's in terms of accuracy), which is free and has more features than Apple's solution. But for our use case, we just need to detect faces and draw bounding boxes. You can also build your own model using Turi Create's object detection API or any other framework of your choice. Either way, you still need a huge amount of diverse and annotated data to come even close to Apple's or Google's accuracy and performance.

Apple has been active in providing iOS developers with powerful APIs centered on computer vision and even other AI disciplines (e.g. NLP). They have continuously improved them by trying to represent the complex spectrum of use cases, from gender differences to racial diversity. I remember the first version of the Face Detection API being very bad at detecting darker-skinned faces. They have since improved it, but there is no perfect system so far, and detection is not 100% accurate.

#vision #ios-app-development #heartbeat #swift #computer-vision

Wilford Pagac

Top 7 Tips To Become A UI/UX Pro | Hacker Noon

Recently, I’ve been receiving similar questions from a lot of people:

  • How can I get more into UI/UX?
  • How do you know what is good design and what is bad design?
  • What does it take to become a designer?

“How do I start?”

This question brings me back to time when we started our design studio.

The first thing you should know is:

“You don’t have to be born with it.”

We’re not some unicorn creatures that were meant to be designers and were just born artistic like that. Design is learned. Design is about solving problems. It’s a process of constantly finding problems and creating solutions for them.

There are many areas of design: UI, UX, product design, graphic design, interaction design, information architecture, and the list goes on. Start by figuring out which specialty interests you most. For now, let's focus on the most common type, a mix of interface and experience: the UI/UX designer.

1. Familiarize yourself with UI principles.

Before practicing design, the first thing you need to do is learn some design principles. From this, you’ll be able to enter the design world and start thinking “creatively”. You will learn the psychological aspects of design: why it can look good and why it can fail.

Here are some basic principles you should know about.

1. Color

Color vocabulary, fundamentals and the psychology of colors.

Principles of design: Color

2. Balance

Symmetry and asymmetry.

Principles of design: Balance

3. Contrast

Using contrast to organize information, build hierarchy and create focus.

Principles of design: contrast

4. Typography

Choosing fonts and creating readable text on the web.

10 Principles Of Readability And Web Typography

5. Consistency

The most important principle, creating intuitive and usable designs starts here.

Design principle: Consistency

Here are some great do's and don'ts for designing a good UI.

#ui-design #ui #design #web-design #productivity #vision #technology #good-company

Oleta Becker

How to Train Your First Deep Learning Model

If you are trying to learn about Deep Learning today, there are tons of online courses, books and material for that. Then, something like this appears in the very first lesson:

(Image: part of the backpropagation equations)

Deep Learning is at its heart a data-analysis technique, so the underlying concepts are definitely math-intensive. However, these complicated equations and formulas are really stressful to look at if we are just trying to learn something new! (Especially if we do not have PhDs in math or computer science, or the last time we did integration was 10 years ago in school.)

This post will be the first part in a series where I introduce basic concepts of Deep Learning (DL), based on the fast.ai course. Fast.ai teaches DL using a top-down approach, which means showing students what they can do and providing hands-on experience from the start, then moving on to explain the underlying concepts. This is largely different from the typical online courses we see, where knowledge is built from the ground up, starting from the underlying concepts.
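
To make the top-down idea concrete, here is roughly what the first hands-on model in the course looks like, written as a minimal sketch with the fastai v2 API (the pets dataset, architecture, and filename regex are illustrative, not an exact transcript of the lesson):

from fastai.vision.all import *

# Download the Oxford-IIIT Pets images and build dataloaders from the filenames.
path = untar_data(URLs.PETS)
files = get_image_files(path/'images')
dls = ImageDataLoaders.from_name_re(
    path, files, pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224))

# Fine-tune a pre-trained ResNet for one epoch.
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)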

#deep-learning #machine-learning #fastai #vision
