Noah Rowe

Hyper-parameter optimization using EDASpy

Tuning machine learning hyper-parameters is a tedious task that we tend to postpone until the very end of a project. Hyper-parameters are everywhere, and manual tuning is nearly impossible.

Imagine we have a single hyper-parameter and we want to optimize it. We would have to execute the program, script, or algorithm we are tuning N times, where N is the number of possible values of the parameter. With two parameters, we would have to execute N times for each value of the second parameter, giving N**2 runs. And so on.
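This combinatorial growth is easy to see with a short snippet (the parameter values below are made up for illustration):

```python
from itertools import product

# Hypothetical grids: 3 values for one parameter, 4 for another
param1_values = [0.01, 0.1, 1.0]
param2_values = [2, 4, 8, 16]

# Exhaustive search must run the tuned program once per combination
runs = list(product(param1_values, param2_values))
print(len(runs))  # 3 * 4 = 12 executions; each new parameter multiplies the count
```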

There are many approaches to hyper-parameter tuning. Here I will explain one from the family of evolutionary algorithms: the Estimation of Distribution Algorithms (EDAs), which are similar to Genetic Algorithms. The core idea is iterative: in each iteration the algorithm samples a group of solutions (a generation with multiple individuals), selects the best ones according to the cost function we want to optimize, and then generates new solutions from a normal distribution built from that selection. Figure 1 shows a flow diagram. In the selection step, a percentage of the generation is kept, controlled by the ALPHA parameter.
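The sample-select-reestimate loop described above can be sketched in a few lines of NumPy. This is a toy illustration, not EDAspy's actual implementation; the function and parameter names here are my own:

```python
import numpy as np

def toy_eda(cost, mu, sigma, size_gen=40, alpha=0.5, n_iter=50, seed=0):
    """Toy continuous EDA with independent normal marginals."""
    rng = np.random.default_rng(seed)
    best_x, best_cost = None, np.inf
    for _ in range(n_iter):
        # Sample a generation of individuals from the current normal model
        gen = rng.normal(mu, sigma, size=(size_gen, len(mu)))
        costs = np.array([cost(ind) for ind in gen])
        # Keep the best alpha fraction of the generation
        n_sel = max(2, int(alpha * size_gen))
        elite = gen[np.argsort(costs)[:n_sel]]
        # Re-estimate the normal distribution from the selected individuals
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        if costs.min() < best_cost:
            best_cost, best_x = costs.min(), gen[np.argmin(costs)]
    return best_x, best_cost

# Minimize (x - 3)^2 + (y + 1)^2; optimum at (3, -1)
best_x, best_cost = toy_eda(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            mu=np.array([0.0, 0.0]), sigma=np.array([10.0, 10.0]))
```

With each iteration the normal distribution narrows around the elite individuals, which is exactly the shrinking-search behavior the flow diagram in Figure 1 depicts.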

This type of algorithm can also be used for other tasks such as Feature Selection (FS), or for optimizing some variables while keeping the values of others fixed. Future stories will cover these topics.

Figure 1. Flow diagram of an Estimation of Distribution Algorithm (EDA).

Let's see an easy example solved with the Python package EDAspy. To install the package, just run:

pip install EDAspy

We have the cost function below. The three hyper-parameters we want to optimize are dictionary['param1'], dictionary['param2'] and dictionary['param3']. The cost function also incorporates some weights. The algorithm must find the optimal values of the hyper-parameters in order to minimize the cost function.

import numpy as np

weights = [20, 10, -4]

def cost_function(dictionary):
    function = weights[0] * dictionary['param1'] ** 2 \
               + weights[1] * (np.pi / dictionary['param2']) - 2 \
               - weights[2] * dictionary['param3']
    if function < 0:
        # Penalize negative values so the optimizer stays in the valid region
        return 9999999
    return function

As input, the algorithm needs an initial range in which to start evaluating solutions. Optionally, a maximum and a minimum can be set for each hyper-parameter. Everything is defined in a pandas DataFrame with one row per statistic and one column per hyper-parameter name. If max and min are not needed, simply omit those rows.

from EDAspy.optimization.univariate import EDA_continuous as EDAc
import pandas as pd
import numpy as np

weights = [20, 10, -4]

def cost_function(dictionary):
    function = weights[0] * dictionary['param1'] ** 2 \
               + weights[1] * (np.pi / dictionary['param2']) - 2 \
               - weights[2] * dictionary['param3']
    if function < 0:
        return 9999999
    return function

# One column per hyper-parameter, one row per statistic
vector = pd.DataFrame(columns=['param1', 'param2', 'param3'])
vector['data'] = ['mu', 'std', 'min', 'max']
vector = vector.set_index('data')
vector.loc['mu'] = [5, 8, 1]   # initial means of the sampling distributions
vector.loc['std'] = 20         # initial standard deviation
vector.loc['min'] = 0          # optional lower bound
vector.loc['max'] = 100        # optional upper bound

EDA = EDAc(SIZE_GEN=40, MAX_ITER=200, DEAD_ITER=20, ALPHA=0.7, vector=vector, aim='minimize', cost_function=cost_function)
bestcost, params, history = EDA.run()
print(bestcost)
print(params)
print(history)

#machine-learning #optimization #algorithms


Why Use WordPress? What Can You Do With WordPress?

Can you use WordPress for anything other than blogging? To your surprise, yes. WordPress is more than just a blogging tool, and it has helped thousands of websites and web applications thrive. WordPress powers around 40% of online projects, and today in our blog, we will visit some amazing uses of WordPress other than blogging.
What Is The Use Of WordPress?

WordPress is the most popular website platform in the world. It is the first choice of businesses that want to set up a feature-rich and dynamic Content Management System. So, if you ask what WordPress is used for, the answer is: everything. It is a super-flexible, feature-rich and secure platform that offers everything you need to build unique websites and applications. Let's get to know them:

1. Multiple Websites Under A Single Installation
WordPress Multisite allows you to develop multiple sites from a single WordPress installation. You can download WordPress and start building the websites you want to launch under a single server. Literally speaking, you can handle hundreds of sites from one single dashboard, which deserves applause.
It is a highly efficient platform that allows you to easily run several websites under the same login credentials. One of the best things about WordPress is the themes it has to offer. You can simply download and plug them in for various sites, saving space without losing speed.

2. WordPress Social Network
WordPress can be used for high-end projects such as a social media network. If you don't have the money and patience to hire a coder and invest months in building a feature-rich social media site, go for WordPress. It is one of the most amazing uses of WordPress: its stunning CMS is unbeatable, it lets you build sites in the vein of Facebook or Reddit, and it makes the process a lot easier.
To set up a social media network, you would have to download a WordPress Plugin called BuddyPress. It would allow you to connect a community page with ease and would provide all the necessary features of a community or social media. It has direct messaging, activity stream, user groups, extended profiles, and so much more. You just have to download and configure it.
If BuddyPress doesn’t meet all your needs, don’t give up on your dreams. You can try out WP Symposium or PeepSo. There are also several themes you can use to build a social network.

3. Create A Forum For Your Brand’s Community
Communities are very important for your business. They help you stay in constant connection with your users and consumers, and allow you to turn them into a loyal customer base. While there are many good technologies that can be used for building a community page, good old WordPress is still the best.
It is the best community development technology. If you want to build your online community, you need to consider all the amazing features you get with WordPress. Plugins such as bbPress, an open-source, template-driven PHP/MySQL forum software, are very simple and don't hamper the experience of the website.
Other tools such as wpForo and Asgaros Forum are equally good for creating a community blog. They are lightweight tools that are easy to manage and integrate with your WordPress site easily. However, there is one tiny problem: you need some technical knowledge to build a WordPress community blog page.

4. Shortcodes
Since we gave you a problem in the previous section, we will also give you a perfect solution for it. You might not know how to code, but you have shortcodes. Shortcodes help you execute functions without having to code. They are an easy way to build an amazing website, add new features, and customize plugins. Rather than memorizing multiple lines of code, you can have zero technical knowledge and still build a feature-rich website or application.
There are also plugins like Shortcoder, Shortcodes Ultimate, and the Basics available on WordPress that can be used, and you would not even have to remember the shortcodes.

5. Build Online Stores
If you still wonder why to use WordPress, use it to build an online store and start selling your goods online. It is an affordable technology that helps you build a feature-rich eCommerce store.
WooCommerce is an extension of WordPress and is one of the most used eCommerce solutions. WooCommerce holds a 28% share of the global market and is one of the best ways to set up an online store. It allows you to build user-friendly and professional online stores and has thousands of free and paid extensions. Moreover, as an open-source platform, it doesn't require you to pay for a license.
Apart from WooCommerce, there are Easy Digital Downloads, iThemes Exchange, Shopify eCommerce plugin, and so much more available.

6. Security Features
WordPress takes security very seriously. It offers tons of external solutions that help you in safeguarding your WordPress site. While there is no way to ensure 100% security, it provides regular updates with security patches and provides several plugins to help with backups, two-factor authorization, and more.
By choosing hosting providers like WP Engine, you can improve the security of your website. It helps with threat detection, managed patching and updates, internal security audits for customers, and much more.


#use of wordpress #use wordpress for business website #use wordpress for website #what is use of wordpress #why use wordpress #why use wordpress to build a website


Mya Lynch

Complete Guide to Adam Optimization

In the 1940s, mathematical programming was synonymous with optimization. An optimization problem consists of an objective function that is to be maximized or minimized by choosing input values from an allowed set of values [1].

Nowadays, optimization is a very familiar term in AI, specifically in Deep Learning problems. And one of the most recommended optimization algorithms for Deep Learning problems is Adam.

Disclaimer: a basic understanding of neural network optimization, such as Gradient Descent and Stochastic Gradient Descent, is preferred before reading.

In this post, I will highlight the following points:

  1. Definition of Adam Optimization
  2. The Road to Adam
  3. The Adam Algorithm for Stochastic Optimization
  4. Visual Comparison Between Adam and Other Optimizers
  5. Implementation
  6. Advantages and Disadvantages of Adam
  7. Conclusion and Further Reading
  8. References

1. Definition of Adam Optimization

The Adam algorithm was first introduced in the paper Adam: A Method for Stochastic Optimization [2] by Diederik P. Kingma and Jimmy Ba. Adam is defined as "a method for efficient stochastic optimization that only requires first-order gradients with little memory requirement" [2]. Okay, let's break this definition down into two parts.
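"Only requires first-order gradients with little memory requirement" means the method keeps just two running vectors per parameter, the first and second moments of the gradient. The update rule from the paper can be sketched in NumPy as follows; the learning rate and the test function here are my own choices for illustration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moments
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update scaled by the per-coordinate gradient magnitude
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```

Note that the only state carried between steps is `m` and `v`, the same shape as the parameters, which is the "little memory requirement" part of the definition.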

First, stochastic optimization is the process of optimizing an objective function in the presence of randomness. To understand this better, think of Stochastic Gradient Descent (SGD). SGD is a great optimizer when we have a lot of data and parameters, because at each step it calculates an estimate of the gradient from a random subset of the data (a mini-batch), unlike Gradient Descent, which considers the entire dataset at each step.
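To make the mini-batch idea concrete, here is a small sketch on synthetic linear-regression data (all data and names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic data: 1000 samples, 3 features, known true weights plus noise
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
for step in range(500):
    idx = rng.choice(len(X), size=32, replace=False)  # random mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)        # gradient estimate from the batch
    w -= 0.01 * grad
```

Each step touches only 32 of the 1000 rows, yet the noisy gradient estimates still drive `w` toward the true weights; that randomness is what makes the optimization "stochastic".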


#machine-learning #deep-learning #optimization #adam-optimizer #optimization-algorithms

Rylan Becker

Optimize Your Algorithms Tail Call Optimization

While writing code and algorithms you should consider tail call optimization (TCO).

What is tail call optimization?

Tail call optimization is the practice of optimizing recursive functions to avoid building up a tall call stack. You should also know that some programming languages perform tail call optimization natively.

For example, Python and Java decided not to use TCO, while JavaScript has allowed TCO since ES2015 (ES6).

Even if you know whether your favorite language supports TCO natively, I would definitely recommend assuming that your compiler/interpreter will not do the work for you.

How to do a tail call optimization?

There are two famous methods to do a tail call optimization and avoid tall call stacks.

1. Going bottom-up

As you know, recursion builds up the call stack, so if we avoid it in our algorithms we save on memory usage. This strategy is called bottom-up: we start from the beginning, while a recursive algorithm starts from the end, building a stack and then working backwards.

Let’s take an example with the following code (top-down — recursive code):

function product1ToN(n) {
  // Each call waits on the next one, building an O(n) call stack
  return (n > 1) ? (n * product1ToN(n-1)) : 1;
}

As you can see, this code has a problem: it builds up a call stack of size O(n), which makes our total memory cost O(n). This makes us vulnerable to a stack overflow error, where the call stack gets too big and runs out of space.

In order to optimize our example we need to go bottom-up and remove the recursion:

function product1ToN(n) {
  let result = 1;
  for (let num = 1; num <= n; num++) {
    result *= num;
  }
  return result;
}

This time we are not stacking up calls in the call stack, so we use O(1) space (with O(n) time complexity).

#memoization #programming #algorithms #optimization #optimize

Makenzie Pagac

A faster Hyper Parameter Tuning using Nature-Inspired Algorithms in Python

Nature-inspired algorithms are really powerful and are commonly used for solving NP-hard problems (e.g. the traveling salesman problem) and other computationally expensive tasks. They are also called optimization algorithms. Nature-inspired algorithms try to find the best solution for the problem; however, it is not guaranteed that the best solution will be found.

Hyperparameter tuning is usually done using grid search or random search. The problem with grid search is that it is really expensive, since it tries all possible parameter combinations. Random search tries only a certain number of random parameter combinations. It is unlikely to find the best combination of parameters, but it is much faster than grid search.
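The cost difference is easy to quantify. In this sketch (the grid values are made up for illustration), grid search must evaluate every combination, while random search evaluates only a fixed budget of them:

```python
import random
from itertools import product

# Hypothetical hyper-parameter grid
param_grid = {
    'learning_rate': [0.001, 0.01, 0.1, 0.5],
    'max_depth': [3, 5, 7, 9, 11],
    'n_estimators': [50, 100, 200, 400, 800],
}

all_combos = list(product(*param_grid.values()))
print(len(all_combos))                      # grid search cost: 4 * 5 * 5 = 100 model fits

random.seed(0)
sampled = random.sample(all_combos, k=10)   # random search fits only 10 of them
```

With three parameters the grid already requires 100 model fits; random search caps the budget at 10 but may miss the best cell entirely.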

The echolocation of bats is the fundamental part of the Bat Algorithm (Xin-She Yang 2010).

And here come the nature-inspired algorithms. They are faster than grid search and come with the possibility of finding the best solution, i.e. the best combination of hyper-parameters. However, the result will vary from algorithm to algorithm (there are many nature-inspired algorithms, for example the Bat Algorithm or the Firefly Algorithm), and these algorithms have their own parameters that control how they search for the solution. If you know how these algorithms work, you might want to set these parameters to improve the search process.

#nature-inspired #scikit-learn #data-science #hyper-parameter-tuning #python