
## How To Create A Perfect Decision Tree?


A Decision Tree has many analogies in real life and, as it turns out, it has influenced a wide area of Machine Learning, covering both Classification and Regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.

So the outline of what I’ll be covering in this blog is as follows.

• What is a Decision Tree?
• Advantages and Disadvantages of a Decision Tree
• Creating a Decision Tree

## What is a Decision Tree?

A decision tree is a map of the possible outcomes of a series of related choices. It allows an individual or organization to weigh possible actions against one another based on their costs, probabilities, and benefits.

As the name goes, it uses a tree-like model of decisions. They can be used either to drive informal discussion or to map out an algorithm that predicts the best choice mathematically.

A decision tree typically starts with a single node, which branches into possible outcomes. Each of those outcomes leads to additional nodes, which branch off into other possibilities. This gives it a tree-like shape.

There are three different types of nodes: chance nodes, decision nodes, and end nodes. A chance node, represented by a circle, shows the probabilities of certain results. A decision node, represented by a square, shows a decision to be made, and an end node shows the final outcome of a decision path.

## Advantages & Disadvantages of Decision Trees

### Advantages

• Decision trees generate understandable rules.
• Decision trees perform classification without requiring much computation.
• Decision trees are capable of handling both continuous and categorical variables.
• Decision trees provide a clear indication of which fields are most important for prediction or classification.

### Disadvantages

• Decision trees are less appropriate for estimation tasks where the goal is to predict the value of a continuous attribute.
• Decision trees are prone to errors in classification problems with many classes and a relatively small number of training examples.
• Decision trees can be computationally expensive to train. At each node, each candidate splitting field must be sorted before its best split can be found. Some algorithms use combinations of fields and must search for optimal combining weights. Pruning algorithms can also be expensive, since many candidate sub-trees must be formed and compared.

## Creating a Decision Tree

Let us consider a scenario where a new planet is discovered by a group of astronomers. Now the question is whether it could be ‘the next earth?’ The answer to this question will revolutionize the way people live. Well, literally!

There are a number of deciding factors that need to be researched thoroughly before making an intelligent decision. These factors include whether water is present on the planet, what the temperature is, whether the surface is prone to continuous storms, and whether flora and fauna can survive the climate.

Let us create a decision tree to find out whether we have discovered a new habitat.

To build the tree, we ask a series of questions:

• Does the temperature fall into the habitable range of 0 to 100 °C?
• Is water present?
• Do flora and fauna flourish?
• Does the planet have a stormy surface?

Thus, we have a decision tree.

## Classification Rules:

Classification rules are the cases in which all the scenarios are taken into consideration and a class variable is assigned to each.

### Class Variable:

Each leaf node is assigned a class variable. The class variable is the final output that leads to our decision.

Let us derive the classification rules from the Decision Tree created:

1. If the temperature is not between 273 and 373 K -> Survival Difficult

2. If the temperature is between 273 and 373 K, and water is not present -> Survival Difficult

3. If the temperature is between 273 and 373 K, water is present, but flora and fauna are not present -> Survival Difficult

4. If the temperature is between 273 and 373 K, water is present, flora and fauna are present, and the surface is not stormy -> Survival Probable

5. If the temperature is between 273 and 373 K, water is present, flora and fauna are present, and the surface is stormy -> Survival Difficult
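The five rules above can be collapsed into a single predicate. Below is a minimal Python sketch (the article provides no code; the function and argument names are purely illustrative):

```python
def habitability(temp_k, water, flora_fauna, stormy):
    """Apply the five classification rules; temp_k is in Kelvin."""
    if not (273 <= temp_k <= 373):  # Rule 1: outside the habitable range
        return "Survival Difficult"
    if not water:                   # Rule 2: no water
        return "Survival Difficult"
    if not flora_fauna:             # Rule 3: no flora and fauna
        return "Survival Difficult"
    if stormy:                      # Rule 5: stormy surface
        return "Survival Difficult"
    return "Survival Probable"      # Rule 4: all conditions favourable
```

Only when all four tests pass does the path end at the 'Survival Probable' leaf.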

## Decision Tree

A decision tree has the following constituents:

• Root Node: the factor of ‘temperature’ is considered the root in this case.
• Internal Node: a node with one incoming edge and two or more outgoing edges.
• Leaf Node: a terminal node with no outgoing edge.

Once the decision tree is constructed, we start from the root node, check its test condition, and follow the matching outgoing edge to the next node, where the process repeats. The decision tree is complete when every test condition leads to a leaf node. The leaf nodes carry the class labels, which vote for or against the decision.

Now, you might ask: why did we start with the ‘temperature’ attribute at the root? Had we chosen any other attribute, the resulting decision tree would have been different.

Correct. For a particular set of attributes, numerous different trees can be created. We need to choose the optimal tree, which is done by following an algorithmic approach. We will now see ‘the greedy approach’ to creating a perfect decision tree.

## The Greedy Approach

“Greedy Approach is based on the concept of Heuristic Problem Solving by making an optimal local choice at each node. By making these local optimal choices, we reach the approximate optimal solution globally.”

The algorithm can be summarized as:

1. At each stage (node), pick out the best feature as the test condition.

2. Now split the node into the possible outcomes (internal nodes).

3. Repeat the above steps till all the test conditions have been exhausted into leaf nodes.
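The three steps describe the classic recursive, ID3-style tree construction. The sketch below is a hedged Python illustration, not any particular library’s implementation; it measures “best” with entropy-based information gain (which the article explains next), and all function names are made up for this example:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Pick the attribute whose split yields the largest entropy drop."""
    base = entropy(labels)
    def gain(a):
        info = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            info += len(sub) / len(labels) * entropy(sub)
        return base - info
    return max(attributes, key=gain)

def build(rows, labels, attributes):
    """Recursively grow the tree: test, split, repeat until leaves."""
    if len(set(labels)) == 1:            # pure node -> leaf
        return labels[0]
    if not attributes:                   # no tests left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attributes)
    tree = {a: {}}
    for v in set(r[a] for r in rows):    # one internal branch per outcome
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        tree[a][v] = build([rows[i] for i in idx], [labels[i] for i in idx],
                           [x for x in attributes if x != a])
    return tree
```

Each recursive call repeats steps 1 and 2 on a smaller subset of rows, and the recursion bottoms out exactly when a test condition is exhausted into a leaf.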

When you start to implement the algorithm, the first question is: ‘How to pick the starting test condition?’

The answer to this question lies in the values of ‘Entropy’ and ‘Information Gain’. Let us see what they are and how they impact the creation of our decision tree.

Entropy: In a decision tree, entropy measures the homogeneity of the data at a node. If the data is completely homogeneous, the entropy is 0; if the data is divided evenly between the classes (50/50), the entropy is 1.

Information Gain: Information gain is the decrease in entropy achieved when a node is split.

The attribute with the highest information gain is selected for splitting. Based on the computed values of entropy and information gain, we choose the best attribute at each step.
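To make these definitions concrete, here is a small Python check using the 14-row buys_computer sample that the figures below are based on (9 ‘yes’, 5 ‘no’; the per-bracket counts are reconstructed from the values 0.940, 0.694, and 0.246 quoted later in the article):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# Age brackets and buying decisions: <=30 has 2 yes / 3 no,
# 31..40 has 4 yes / 0 no, >40 has 3 yes / 2 no.
ages = ['<=30'] * 5 + ['31..40'] * 4 + ['>40'] * 5
buys = (['no', 'no', 'yes', 'yes', 'no'] +    # age <= 30
        ['yes', 'yes', 'yes', 'yes'] +        # age 31..40
        ['yes', 'yes', 'no', 'yes', 'no'])    # age > 40

base = entropy(buys)        # ≈ 0.940: the impure starting set

# Info(D) after splitting on age: weighted entropy of each bracket.
info_age = 0.0
for bracket in set(ages):
    sub = [b for a, b in zip(ages, buys) if a == bracket]
    info_age += len(sub) / len(buys) * entropy(sub)

gain_age = base - info_age  # ≈ 0.247 (0.246 if you round 0.940 - 0.694)
```

Running this reproduces the entropy of 0.940 and the gain for ‘Age’ that the article derives below.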

Let us consider the following data:

Numerous decision trees can be formulated from this set of attributes.

### Tree Creation Trial-1 :

Here we take up the attribute ‘Student’ as the initial test condition.

### Tree Creation Trial-2 :

Then again, why choose ‘Student’? We could equally take ‘Income’ as the initial test condition.

## Creating the Perfect Decision Tree With Greedy Approach

Let us follow the ‘Greedy Approach’ and construct the optimal decision tree.

There are two classes involved: ‘Yes’, i.e. the person buys a computer, and ‘No’, i.e. he does not. To calculate entropy and information gain, we compute the probability of each of these two classes.

» Positive: for ‘buys_computer = yes’, the probability comes out to be:

» Negative: for ‘buys_computer = no’, the probability comes out to be:

Entropy in D: We now calculate the entropy by substituting these probability values into the formula stated above.

We have already characterized the boundary values of entropy:

Entropy = 0: the data is completely homogeneous (pure)

Entropy = 1: the data is divided 50/50 between the classes (impure)

Our value of entropy is 0.940, which means our set is highly impure, close to the maximum.

Let’s delve deep, to find out the suitable attribute and calculate the Information Gain.

What is the information gain if we split on ‘Age’?

This data represents how many people in each age bracket buy the product and how many do not.

For example, among people aged 30 or less, 2 buy (Yes) and 3 do not buy (No) the product. Info(D) is calculated for each of these three categories of people and is shown in the last column.

Info(D) for the ‘Age’ attribute is the weighted sum over these three age ranges.

The difference between the total information value (0.940) and the information computed for the ‘Age’ attribute (0.694) gives the information gain: 0.940 − 0.694 = 0.246.

This is the deciding factor for whether we should split on ‘Age’ or on some other attribute. Similarly, we calculate the information gain for the rest of the attributes:

Information Gain (Age) = 0.246

Information Gain (Income) = 0.029

Information Gain (Student) = 0.151

Information Gain (credit_rating) = 0.048

On comparing these gain values for all the attributes, we find that the information gain for ‘Age’ is the highest. Thus, splitting on ‘Age’ is a good decision.

Similarly, at each subsequent split, we compare the information gains to decide which attribute should be chosen for the split.

Thus, the optimal tree created looks like this:

The classification rules for this tree can be jotted down as:

If a person’s age is less than 30 and he is not a student, he will not buy the product.

Age(<30) ^ student(no) = NO

If a person’s age is less than 30 and he is a student, he will buy the product.
Age(<30) ^ student(yes) = YES

If a person’s age is between 31 and 40, he is most likely to buy.

Age(31…40) = YES

If a person’s age is greater than 40 and has an excellent credit rating, he will not buy.

Age(>40) ^ credit_rating(excellent) = NO

If a person’s age is greater than 40, with a fair credit rating, he will probably buy.
Age(>40) ^ credit_rating(fair) = YES
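Read together, the five rules form a single decision function. A minimal Python sketch (ages are taken as integers, and the boundary handling at 30 and 40 follows the rule notation above):

```python
def buys_computer(age, student, credit_rating):
    """Predict 'YES' or 'NO' using the five classification rules."""
    if age < 30:
        # Rules 1 and 2: below 30, the decision hinges on being a student.
        return "YES" if student == "yes" else "NO"
    if age <= 40:
        # Rule 3: ages 31..40 always predict a purchase.
        return "YES"
    # Rules 4 and 5: above 40, the decision hinges on credit rating.
    return "NO" if credit_rating == "excellent" else "YES"
```

Each branch of the function corresponds to one root-to-leaf path in the optimal tree.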

Thus, we achieve the perfect Decision Tree!!

Now that you have gone through our Decision Tree blog, you can check out Edureka’s Data Science Certification Training. Got a question for us? Please mention it in the comments section and we will get back to you.

Original article source at: https://www.edureka.co/


## AbstractTrees

A package for dealing with generalized tree-like data structures.

## Examples

``````
julia> t = [[1,2], [3,4]];  # AbstractArray and AbstractDict are trees

julia> children(t)
2-element Vector{Vector{Int64}}:
[1, 2]
[3, 4]

julia> getdescendant(t, (2,1))
3

julia> collect(PreOrderDFS(t))  # iterate from root to leaves
7-element Vector{Any}:
[[1, 2], [3, 4]]
[1, 2]
1
2
[3, 4]
3
4

julia> collect(PostOrderDFS(t))  # iterate from leaves to root
7-element Vector{Any}:
1
2
[1, 2]
3
4
[3, 4]
[[1, 2], [3, 4]]

julia> collect(Leaves(t))  # iterate over leaves
4-element Vector{Int64}:
1
2
3
4

julia> struct FloatTree  # make your own trees
           x::Float64
           children::Vector{FloatTree}
       end;

julia> AbstractTrees.children(t::FloatTree) = t.children;

julia> AbstractTrees.nodevalue(t::FloatTree) = t.x;

julia> print_tree(FloatTree(NaN, [FloatTree(Inf, []), FloatTree(-Inf, [])]))
NaN
├─ Inf
└─ -Inf
``````

## Download Details:

Author: JuliaCollections
Source Code: https://github.com/JuliaCollections/AbstractTrees.jl
License: View license


## Octrees

Use locational codes for random access of cells of an octree. In particular, this technique is useful for quickly finding the nearest neighbors of an arbitrary point in the octree.

## Installation

``````
]add https://github.com/alainchau/Octrees.jl
``````

## Example

``````
julia> using Octrees

# Generate artificial data and project onto the xy-plane
julia> X = randn(3,100); X[3,:] .= 0

# Create octree
julia> octree = Octree(X);
Creating Octree with minimum side length δ = 0.02745

julia> using Plots
julia> include("src/misc/plotstuff.jl")
plot3d! (generic function with 2 methods)

julia> plot!(octree)

# Find nearest neighbors
julia> nn = knn(octree, [0,0,0], 1) |> collect;
julia> scatter!(X[1,nn], X[2,nn], markersize=3, color=:red)

# Draw circle to verify nearest neighbors
julia> ts = range(0.,2π,length=100)
julia> xs, ys = cos.(ts), sin.(ts)
julia> plot!(xs, ys, marker=0, fillcolor=:red, fillalpha=0.5, seriestype=:shape)
``````

Relevant literature: http://ronaldperry.org/treeTraversalJGTWithCode.pdf

## Download Details:

Author: Alainchau
Source Code: https://github.com/alainchau/Octrees.jl
License: MIT license


## RegionTrees.jl: Quadtrees, Octrees, and their N-Dimensional Cousins

This Julia package is a lightweight framework for defining N-Dimensional region trees. In 2D, these are called region quadtrees, and in 3D they are typically referred to as octrees. A region tree is a simple data structure used to describe some kind of spatial data with varying resolution. Each element in the tree can be a leaf, representing an N-dimensional rectangle of space, or a node which is divided exactly in half along each axis into 2^N children. In addition, each element in a `RegionTrees.jl` tree can carry an arbitrary data payload. This makes it easy to use `RegionTrees` to approximate functions or describe other interesting spatial data.

## Features

• Lightweight code with few dependencies (only `StaticArrays.jl` and `Iterators.jl` are required)
• Optimized for speed and for few memory allocations
• Liberal use of `@generated` functions lets us unroll most loops and prevent allocating temporary arrays
• Built-in support for general adaptive sampling techniques

## Usage

See examples/demo/demo.ipynb for a tour through the API. You can also check out:

[1] Frisken et al. "Adaptively Sampled Distance Fields: A General Representation of Shape for Computer Graphics". SIGGRAPH 2000.

## Gallery

An adaptively sampled distance field, from `examples/adaptive_distances.ipynb`:

An adaptively sampled model-predictive control problem, from `examples/adaptive_mpc.ipynb`:

An adaptive distance field in 3D, from AdaptiveDistanceFields.jl:

## Download Details:

Author: Rdeits
Source Code: https://github.com/rdeits/RegionTrees.jl
License: View license


## OctTrees

A Julia library for Quadtree and Octree functionality.

Original author: skariel

Updated to support at least Julia v0.7.

## Examples

``````
# from `runtests.jl`

q = QuadTree(100)

OctTrees.insert!(q, Point(0.1, 0.1))
OctTrees.insert!(q, Point(0.9, 0.9))

q = OctTree(100)

OctTrees.insert!(q, Point(0.1, 0.1, 0.1))
OctTrees.insert!(q, Point(0.9, 0.9, 0.9))

``````

See also:

• RegionTrees.jl
• Octrees.jl

## Download Details:

Author: JuliaGeometry
Source Code: https://github.com/JuliaGeometry/OctTrees.jl
License: View license


## TimeTrees

A tiny package that implements the TimeTree type for representing fully-resolved phylogenetic time trees in Julia. A constructor which generates `TimeTree`s from Newick strings is provided, as are methods for manipulating existing trees. In addition, a plot method is implemented which generates an ASCII depiction of a tree.

Here's an example of a TimeTree being generated from its Newick representation and an ASCII visualization displayed in an interactive Julia session:

``````
julia> using TimeTrees
julia> t = TimeTree(newick)
julia> plot(t)
/----------------------* 1
/-------------------------------+----------------------* 2
|                         /----------------------------* 3
/-------------+                        /+        /-------------------* 4
|             |                        |\--------+                 /-* 5
|             \------------------------+         \-----------------+-* 6
|                                      |                   /---------* 7
|                                      |               /---+---------* 8
|                                      \---------------+/------------* 9
+                                                      \+           /* 10
|                                                       \-----------+* 11
|                                                                   \* 12
|                                     /------------------------------* 13
|      /------------------------------+   /--------------+-----------* 14
|      |                              |   |              \-----------* 15
|      |                              \---+          /--------------+* 16
\------+                                  \----------+              \* 17
|                                             \---------------* 18
|                                                           /-* 19
\-----------------------------------------------------------+-* 20
``````

(Assuming that the variable `newick` holds a string containing the Newick representation.)

Trees with non-contemporaneous leaf ages are also supported:

``````
julia> plot(t, labelLeaves = false)
/-------*
/-------------------+-----------------------------------------------*
|   /-----------------------------------------*
+   |                          /--------------------*
|   |      /-------------------+    /---------*
\---+      |                   \----+---------------+---*
|      |                                        \---------*
|      |                       /-------------*
\------+-----------------------+        *
|                       \--------+    /-+----*
|                                |    | \------*
|                                \----+-----------+-*
+                                                 \------*
|               /-----------------------------------------*
|     /---------+     /--------------*
|     |         \-----+--------------------------------*
\-----+                       /--------------------------*
\------------+----------+----------*
|                            /-------*
\----------------------------+---------*
``````

## Installation

TimeTrees is not yet a registered Julia package, so you'll need to install it directly from the Github repository:

``````
Pkg.clone("http://github.com/tgvaughan/TimeTrees.jl")
``````

## Documentation

Documentation is available through Julia's built-in help system. To get started, enter the following once the package is installed:

``````
using TimeTrees
?TimeTrees
``````

For license information, see the LICENSE.md file in this directory.

## Download Details:

Author: tgvaughan
Source Code: https://github.com/tgvaughan/TimeTrees.jl
License: View license


## Introduction

The objective of `PhyloTrees.jl` is to provide fast and simple tools for working with rooted phylogenetic trees in Julia.

## Installation

The current release can be installed from the Julia REPL with:

``````
pkg> add PhyloTrees
``````

The development version (master branch) can be installed with:

``````
pkg> add PhyloTrees#master
``````

## Usage

There are several ways to add nodes and branches to our `Tree`, see below for examples

``````
> # Initialize the tree
> exampletree = Tree()

Phylogenetic tree with 0 nodes and 0 branches

> # Add a node to the tree
> addnode!(exampletree)

Phylogenetic tree with 1 nodes and 0 branches
``````

Branches have `Float64` lengths

``````
> # Add a node, connect it to node 1 with a branch 5.0 units in length
> branch!(exampletree, 1, 5.0)

Phylogenetic tree with 2 nodes and 1 branches

> # Add 2 nodes
> addnodes!(exampletree, 2)

Phylogenetic tree with 4 nodes and 1 branches

> # Add a branch from node 2 to node 3 10.0 units in length
> addbranch!(exampletree, 2, 3, 10.0)

Phylogenetic tree with 4 nodes and 2 branches
``````

We can quickly look at the nodes present in our `Tree`:

``````
> collect(exampletree.nodes)

[unattached node]
[branch 1]-->[internal node]-->[branch 2]
[branch 2]-->[leaf node]
[root node]-->[branch 1]
``````

### Other capabilities

Distance between nodes can be calculated using the `distance` function. A node visit ordering for postorder traversal of a tree can be found with `postorder`.

A plot recipe is provided for `Tree`s. The following `Tree` has been generated and plotted using code in READMETREE.jl.

There are many other functions available that are helpful when dealing with trees, including: `changesource!`, `changetarget!`, `indegree`, `outdegree`, `isroot`, `isleaf`, `isinternal`, `findroots`, `findleaves`, `findinternal`, `findnonroots`, `findnonleaves`, `findexternal`, `areconnected`, `nodepath`, `branchpath`, `parentnode`, `childnodes`, `descendantnodes`, `descendantcount`, `leafnodes`, `leafcount`, `ancestorcount`, `ancestornodes`, and `nodetype`. These work nicely with Julia's elegant function vectorization. An example of this in action can be seen in our plot recipe code.

## Download Details:

Author: jangevaare
Source Code: https://github.com/jangevaare/PhyloTrees.jl
License: View license


## Description

IntervalTrees provides the type `IntervalTree{K, V}`. It implements an associative container mapping `(K, K)` pairs to values of type `V`. `K` may be any ordered type, but only pairs `(a, b)` where `a ≤ b` can be stored. In other words, these are associative containers that map intervals to values.

## Installation

You can install the IntervalTrees package from the Julia REPL. Press `]` to enter pkg mode, then enter the following command:

``````
add IntervalTrees
``````

If you are interested in the cutting edge of the development, please check out the master branch to try new features before release.

## Testing

IntervalTrees is tested against Julia `0.7-1.X` on Linux, OS X, and Windows.

## Contributing

We appreciate contributions from users including reporting bugs, fixing issues, improving performance and adding new features.

Take a look at the contributing files for detailed contributor and maintainer guidelines, and the code of conduct.

## Questions?

If you have a question about contributing or using BioJulia software, come on over and chat to us on the Julia Slack workspace, or you can try the Bio category of the Julia discourse site.

## Download Details:

Author: BioJulia
Source Code: https://github.com/BioJulia/IntervalTrees.jl
License: MIT license


## KDTrees

Kd trees for Julia.

Note: This package is deprecated in favor of `NearestNeighbors.jl` which can be found at: https://github.com/KristofferC/NearestNeighbors.jl.

This package contains an optimized kd tree to perform k nearest neighbour searches and range searches.

The readme includes some usage examples, different benchmarks and a comparison for kNN to scipy's cKDTree.

## Usage

### Creating the tree

The tree is created with:

``````
KDTree(data [, leafsize=10, reorder=true])
``````

The `data` argument for the tree should be a matrix of floats of dimension `(n_dim, n_points)`. The argument `leafsize` determines at what number of points the tree should stop splitting. The default value, `leafsize = 10`, is a decent choice; however, the optimal leafsize depends on the cost of the distance function, which in turn depends on the dimension of the data.

The `reorder` argument is a bool which determines if the input data should be reordered to optimize for memory access. Points that are likely to be accessed close in time are also put close in memory. The default is to enable this.

### K-Nearest-Neighbours

The function `knn(tree, point, k)` finds the k nearest neighbours to a given point. The function returns a tuple of two lists containing the indices and the distances from the given point, respectively, sorted from smallest to largest distance.

``````
using KDTrees
tree = KDTree(randn(3, 1000))
knn(tree, [0.0, 0.0, 0.0], 5)
``````

gives both the indices and distances:

``````
([300,119,206,180,845],[0.052019,0.200885,0.220441,0.22447,0.235882])
``````

### Range searches

#### Tree - point range search

The function `inball(tree, point, radius [, sort=false])` finds all points closer than `radius` to `point`. The function returns a list of the indices of the points in range. If `sort` is set to true, the indices will be sorted before being returned.

``````
using KDTrees
tree = KDTree(randn(3, 1000))
inball(tree, [0.0, 0.0, 0.0], 0.4, true)
``````

gives the indices:

``````
9-element Array{Int64,1}:
184
199
307
586
646
680
849
906
926
``````

#### Tree-tree range search

KDTrees.jl also supports dual tree range searches where the query points are put in their own separate tree and both trees are traversed at the same time while extracting the pairs of points that are in a given range.

Dual tree range searches are performed with the function `inball(tree1, tree2, radius [, sort=false])`, which returns a list of lists such that the i:th list contains the indices of the points in tree2 that are in range of point i in tree1. If `sort = true`, the lists are sorted before being returned. Currently, trees where the data has been reordered for memory locality are not supported. This function has not received the same amount of optimization as the others, so it might be faster to simply loop through the points one by one.

``````
using KDTrees
tree = KDTree(rand(1, 12), reorder = false)
tree2 = KDTree(rand(1, 16), reorder = false)
inball(tree, tree2, 0.1)
``````

gives the result

``````
12-element Array{Array{Int64,1},1}:
[16,11,15,5,9,14]
[6]
[5,7]
[6]
[5,7]
[10,3,2]
[5,7]
[4,1]
[16,12,11,15,9,14]
[4,1]
[7,6]
[5,7]
``````

## Benchmarks

The benchmarks were made on a computer with a 4-core Intel i5-2500K @ 4.2 GHz, running Julia v0.4.0-dev+3034, with `reorder = true` when building the trees.

Clicking on a plot takes you to the Plotly site for the plot where the exact data can be seen.

### Short comparison vs scipy's cKDTree

One of the most popular packages for scientific computing in Python is the scipy package. It can therefore be interesting to see how KDTrees.jl compares against scipy's cKDTree.

A KNN search for a 100 000 point tree was performed for the five closest neighbours. The code and the resulting search speed are shown, first for cKDTree and then for KDTrees.jl

cKDTree:

``````
>>> import numpy as np
>>> from scipy.spatial import cKDTree
>>> import timeit

>>> t = timeit.Timer("tree.query(queries, k=5)",
"""
import numpy as np
from scipy.spatial import cKDTree
data = np.random.rand(10**5, 3)
tree = cKDTree(data)
queries = np.random.rand(10**5, 3)
""")
>>> t = min(t.repeat(3, 10)) / 10

>>> print("knn / sec: ", 10**5 / t)
('knn / sec: ', 251394)
``````

KDTrees.jl:

``````
julia> tree = KDTree(rand(3,10^5));
julia> t = @elapsed for i = 1:10^5
knn(tree, rand(3), 5)
end;
julia> print("knn / sec: ", 10^5 / t)
knn / sec: 700675
``````

### Contribution

Contributions are more than welcome. If you have an idea that would make the tree have better performance or be more general please create a PR. Make sure you run the benchmarks before and after your changes.

Author: JuliaGeometry
Source Code: https://github.com/JuliaGeometry/KDTrees.jl
License: View license


## dTree: A Library for Visualizing Data Trees with Multiple Parents

dTree

A library for visualizing data trees with multiple parents built on top of D3.

Using dTree? Send me a message with a link to your website to be listed below.

## Installation

There are several ways to use dTree. One way is to simply include the compiled file `dTree.js` that then exposes a `dTree` variable. dTree is available on both NPM and Bower as d3-dtree.

``````
npm install d3-dtree
bower install d3-dtree
yarn add d3-dtree
``````

Lastly dTree is also available through several CDNs such as jsDelivr:

``````
https://cdn.jsdelivr.net/npm/d3-dtree@2.4.1/dist/dTree.min.js
``````

## Requirements

To use the library, the following dependencies must be loaded:

## Usage

To create a graph from data use the following command:

``````
tree = dTree.init(data, options);
``````

The data object should have the following structure:

``````
[{
  name: "Father",                         // The name of the node
  class: "node",                          // The CSS class of the node
  textClass: "nodeText",                  // The CSS class of the text in the node
  depthOffset: 1,                         // Generational height offset
  marriages: [{                           // Marriages is a list of nodes
    spouse: {                             // Each marriage has one spouse
      name: "Mother",
    },
    children: [{                          // List of children nodes
      name: "Child",
    }]
  }],
  extra: {}                               // Custom data passed to renderers
}]
``````

The following CSS sets some good defaults:

``````
.linage {
  fill: none;
  stroke: black;
}
.marriage {
  fill: none;
  stroke: black;
}
.node {
  background-color: lightblue;
  border-style: solid;
  border-width: 1px;
}
.nodeText {
  font: 10px sans-serif;
}
.marriageNode {
  background-color: black;
  border-radius: 50%;
}
``````

The options object has the following default values:

``````
{
  target: '#graph',
  debug: false,
  width: 600,
  height: 600,
  hideMarriageNodes: true,
  marriageNodeSize: 10,
  callbacks: {
    /*
      Callbacks should only be overwritten on a need-to basis.
      See the section about callbacks below.
    */
  },
  margin: {
    top: 0,
    right: 0,
    bottom: 0,
    left: 0
  },
  nodeWidth: 100,
  styles: {
    node: 'node',
    linage: 'linage',
    marriage: 'marriage',
    text: 'nodeText'
  }
}
``````

### Zooming

The returned object, `tree = dTree.init(data, options)`, contains functions to control the viewport.

• `tree.resetZoom(duration = 500)` - Reset zoom and position to the initial state
• `tree.zoomTo(x, y, zoom = 1, duration = 500)` - Zoom to a specific position
• `tree.zoomToNode(nodeId, zoom = 2, duration = 500)` - Zoom to a specific node
• `tree.zoomToFit(duration = 500)` - Zoom to fit the entire tree into the viewport

### Callbacks

Below follows a short description of the available callback functions that may be passed to dTree. See dtree.js for the default implementations. Information about e.g. the mouse cursor position can be retrieved by interacting with the `this` object, i.e. `d3.mouse(this)`.

#### nodeClick

``````
function(name, extra, id)
``````

The nodeClick function is called by dTree when the node or text is clicked by the user. It shouldn't return any value.

#### nodeRightClick

``````
function(name, extra, id)
``````

The nodeRightClick function is called by dTree when the node or text is right-clicked by the user. It shouldn't return any value.

#### nodeRenderer

``````
function(name, x, y, height, width, extra, id, nodeClass, textClass, textRenderer)
``````

The nodeRenderer is called once for each node and is expected to return a string containing the node. By default the node is rendered using a div containing the text returned from the default textRenderer. See the JSFiddle above for an example of how to set the callback.

#### nodeHeightSeperation

``````
function(nodeWidth, nodeMaxHeight)
``````

The nodeHeightSeperation callback is called during the initial layout calculation. It shall return a single number representing the distance between the levels in the graph.

#### nodeSize

``````
function(nodes, width, textRenderer)
``````

This nodeSize function takes all nodes and a preferred width set by the user. It is then expected to return an array containing the width and height for all nodes (they all share the same width and height during layout though nodes may be rendered as smaller by the nodeRenderer).

#### nodeSorter

``````
function(aName, aExtra, bName, bExtra)
``````

The nodeSorter takes two nodes' names and extra data; it is then expected to return -1, 0, or 1 depending on whether A is less than, equal to, or greater than B. This is used for sorting the nodes in the tree during layout.

#### textRenderer

``````
function(name, extra, textClass)
``````

The textRenderer function returns the formatted text to the nodeRenderer. This way the user may choose to override only the text that is shown while keeping the default nodeRenderer.

#### marriageClick

``````
function(extra, id)
``````

Same as `nodeClick` but for the marriage nodes (connector).

#### marriageRightClick

``````
function(extra, id)
``````

Same as `nodeRightClick` but for the marriage nodes (connector).

#### marriageRenderer

``````function(x, y, height, width, extra, id, nodeClass)
``````

Same as `nodeRenderer` but for the marriage nodes (connector).

#### marriageSize

``````function(nodes, size)
``````

Same as `nodeSize` but for the marriage nodes (connector).

## Development

dTree has the following development environment:

• node v11.x (use Docker image `node:11`)
• gulp 3.x
• Yarn instead of npm.

To setup and build the library from scratch follow these steps:

1. `yarn install`
2. `yarn run build`

A demo is available by running:

``````yarn run demo
``````

It hosts a demo on localhost:3000/ by serving test/demo and using the latest compiled local version of the library.

## The Online Viewer

There exists an online viewer for dTree graphs called Treehouse, similar to https://bl.ocks.org/ for D3. Treehouse allows anybody to host a dTree graph without having to create a website or interact directly with the library. It fetches data from GitHub gists and displays it in a nice format. All graphs are unlisted, so nobody else can view them without your Gist ID. Check out the demo graph for dTree:

https://treehouse.gartner.io/ErikGartner/58e58be650453b6d49d7

The same demo is also available on JSFiddle.

## Contributing

Contributions are very welcome! Check out the CONTRIBUTING document for style information. A good place to start is to make a pull request that solves an open issue. Feel free to ask questions about an issue, since most have sparse descriptions.

Author: ErikGartner
Source Code: https://github.com/ErikGartner/dTree
License: MIT license

1629011520

## JavaScript Algorithms and Data Structures: Trees - Breadth-First Search

Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a 'search key') and explores the neighbor nodes first, before moving to the next level neighbors.

## Pseudocode

``````BFS(root)
  Pre: root is the root node of the BST
  Post: the nodes in the BST have been visited in breadth-first order
  q ← queue
  while root ≠ ø
    yield root.value
    if root.left ≠ ø
      q.enqueue(root.left)
    end if
    if root.right ≠ ø
      q.enqueue(root.right)
    end if
    if !q.isEmpty()
      root ← q.dequeue()
    else
      root ← ø
    end if
  end while
end BFS
``````
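The pseudocode above translates directly to JavaScript. This is a minimal sketch, assuming tree nodes shaped as `{ value, left, right }` (a naming choice for illustration):

```javascript
// Breadth-first traversal of a binary tree, mirroring the pseudocode:
// visit the current node, enqueue its children, then dequeue the next node.
function bfs(root) {
  const visited = [];
  const queue = [];
  let node = root;
  while (node !== null && node !== undefined) {
    visited.push(node.value);
    if (node.left) queue.push(node.left);
    if (node.right) queue.push(node.right);
    // Take the next node level by level; stop when the queue is empty.
    node = queue.length > 0 ? queue.shift() : null;
  }
  return visited;
}
```

For a balanced BST built from [1..7], the traversal yields the levels in order: root, then both children, then the four grandchildren.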

## References

The Original Article can be found on https://github.com

#javascript #algorithms #datastructures #trees

1629007800

## JavaScript Algorithms and Data Structures: Trees - Depth-First Search

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking.
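As a minimal sketch, a recursive pre-order DFS might look like this in JavaScript (the `{ value, left, right }` node shape is assumed for illustration):

```javascript
// Pre-order depth-first traversal: visit the node, then explore the
// left branch as far as possible before backtracking to the right.
function dfs(node, visited = []) {
  if (!node) return visited;      // reached a leaf's child: backtrack
  visited.push(node.value);       // visit the node before its children
  dfs(node.left, visited);        // explore the left branch fully
  dfs(node.right, visited);       // then backtrack and explore the right
  return visited;
}
```

The call stack does the backtracking implicitly; an explicit stack gives the same behavior iteratively.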

## References

The Original Article can be found on https://github.com

#javascript #algorithms #datastructures #trees

1624951320

## Late-Night LeetCode Ep1 | Trees | Medium

Trying my new approach by picking unseen problems and solving them on-camera at late-night!

Problems Solved:

LeetCode Collection: https://github.com/manosriram/LeetCode/

FOLLOW ME HERE:
Website: https://manosriram.com
Github: https://github.com/manosriram
LinkedIn: https://www.linkedin.com/in/manosriram
My Blog: https://blog.manosriram.com
Twitter: https://twitter.com/_manosriram

📻 Join the Discord Server: https://discord.gg/TaE9CTAmV9

Share your ideas for my next video: https://request.manosriram.com
Find pair-programming partners: https://www.mydevfriend.com

#late-night leetcode #trees #medium

1624260165

## What is Safetrees (TREES) | What is Safetrees token | What is TREES token

In this article, we’ll discuss information about the Safetrees project and TREES token

The SAFETREES reward mechanism will benefit TREES token holders indefinitely. The technology this community builds creates environmental and economic value for local tree growers, environmental organizations, and green philanthropists worldwide.

The TREES token is a utility asset used for offsetting carbon footprints. It is designed to reward token holders and also serves as the currency of the community's Tree Adoption Marketplace. The SAFETREES project designed a free and easy-to-use solution that helps tree growers monitor, authenticate, and validate tree growth status through a mobile application.

Tree growers can tokenize their real, growing trees as TREES-NFTs and trade them on the Tree Adoption Marketplace. Anyone can adopt a TREES-NFT using the TREES token or buy carbon offsets that support the project's environment-centered initiatives.

#### Sustainable value creation

The TREES token is community-centric and managed by volunteers, since team ownership has been burned. The launch token allocation was seeded entirely as liquidity. 2% of every transaction is distributed to holders. This adds value to the token because the smart contract was designed to cut the circulating supply sharply over time, making it deflationary. The frictionless auto-liquidity mechanism sustainably and rapidly increases the token's liquidity. Watch your wallet balance climb from the minute you begin holding the TREES token.

#### Our Purpose

SAFETREES ECO-MISSION

The goal is for 10 billion newly planted and mature trees worldwide to be geo-tagged by 2025, minted, and stored on the blockchain. Those trees are minted as NFTs and can be traded for TREES tokens on the SAFETREES tree adoption market.

Our mission is to make sure that tree improvement, long-term maintenance and end-use of the trees are monitored, traceable and authenticated.

We also want to make sure that our technology creates environmental and economic value and can serve as a reliable source of income for millions of smallholder tree growers worldwide.

#### Research and Innovation

UTILITY DEVELOPMENT

The SAFETREES project is creating a free and easy-to-use solution that helps tree growers monitor and validate tree growth status through a mobile application.

A Tree Adoption Marketplace is being developed where real tree assets are minted and traded with the TREES token to buy carbon offsets that support environment-centered projects. This techno-economic framework allows users to geo-tag a tree and be directly compensated with a digital asset (the TREES token) based on the geo-tagged tree's capacity to absorb CO2.

This solution creates transparency and reliability for the benefit of tree growers, donors, environmental organizations, token holders, and the environment in general.

#### Our Pipeline

IN PROGRESS… TO BE RELEASED VERY SOON

With cross-chain mobile application and blockchain technology, we provide a traceability solution for trees and ownership authentication as well as economic benefits to our App users.

The mechanism employs a pay-to-grow-tend-track model, which allows app users to measure real tree attributes and receive an equivalent amount of TREES tokens for the CO2 the tree has absorbed, as a carbon offset. TREES tokens can be bought and held as an asset, or burned to offset an individual's carbon footprint.

Navigate to our App prototype by clicking the button to get a preview of the tree monitoring app. Final features are being developed to obtain accurate and reliable data to be published soon.

#### Our Top Priorities

Tree planting and tending activities are essential for forests to thrive. Our top priorities are the overall outcomes of these activities for the environment and for the people involved in sustainable change. We empower them with simple, accurate, transparent, and reliable technology that helps restore their forests and monitor their impact. The SAFETREES project contributes to the achievement of all of the UN's Sustainable Development Goals (SDGs):

CLIMATE ACTION

Take urgent action to combat climate change and its impacts.

LIFE ON LAND

Use existing bio-based resources in a sustainable manner.

NO POVERTY

Incentives to tree growers can lift local communities out of poverty.

GENDER EQUALITY

Empower women via tech-transfer and small-enterprises like nurseries.

DECENT WORK

Sponsoring forestry projects to promote work in local communities.

#### Road Map

Q1 2021

• Idea generation and partnership creation
• One-pager (lite-white paper) proposal for teams and partners
• Token creation, token metrics and launch liquidity planning

Q2 2021

• Establishment of technical (dev) and marketing team
• Website design and social media releases
• Public launch DEX listing announcements
• Release of Lite Paper v1.0
• UI design and prototyping of SAFETREES platforms

Q3 and Q4 2021

• Quality control for SAFETREES application (mobile and web)
• Release of SAFETREES platform (mobile) IOS and Android
• Release of the Auction platform
• Outreach and knowledge transfer to tree grower’s community

2022 and BEYOND

• Livestock traceability and authentication
• Agricultural Commodities traceability and authentication
• Food Process digitalization – monitoring and traceability
• Food product traceability and authentication

#### Frequently Asked Questions

What makes the TREES token different from other coins?

SAFETREES created the TREES token as a valuable deflationary asset for the purpose of incentivizing individuals, through the pay-to-grow-tend-track model, for their efforts in restoring balance in the environment. Another unique feature of the TREES token is that it serves as an asset for carbon footprint offsets. In addition, TREES token holders earn passive rewards through static reflection built into the smart contract algorithm, so token holders are guaranteed to watch their wallets grow indefinitely.

How does SAFETREES incentivize tree growers with the TREES token?

The tree grower uses the SAFETREES App to capture the trees he or she wants to monitor (the concept is like Pokémon Go, but instead of capturing Pokémon, individuals capture trees). The app has a built-in algorithm that calculates and verifies the long-term ecological impact of each tree, notably how much CO2 was captured over time, computes the ecological and financial benefits (i.e. the TREES token equivalent), and then converts them into digitalized TREES tokens as earned carbon credits through the SAFETREES adoption platform. Platform users can buy or adopt the minted trees using the TREES token.

How can I make sure that I’m not investing in a scam project?

Do due diligence on the project: read the white paper and look at whether the project has a utility purpose, a mission, and real-world applications, or whether it exists just to make money for the team. The safety of SAFETREES investors' funds is our top priority, so we designed the TREES token to be 100% safe and technically unruggable. The initial liquidity is locked away forever. For the liquidity auto-added by our smart contract, we burn the LP tokens regularly so that it is technically impossible to remove liquidity at any time.

How do I stake and what is the APY for auto-staking TREES token?

Staking and earning rewards is done simply by holding (or hodling) the TREES token in your wallet (e.g. a BSC-enabled Metamask or Trust Wallet). The amount of TREES in your wallet will automatically rise as soon as you begin holding it, thanks to the auto-staking coded into the SAFETREES smart contract. There is no fixed APY for staking TREES by holding it in your wallet; your reward depends entirely on the overall transactions on the blockchain. The more transactions there are in the market, the more you receive from the TREES transaction fees distributed to holders.

Who is the team behind SAFETREES?

The real team is our community, since the SAFETREES project is 100% community-driven! Although some team members currently want to remain anonymous, we are no strangers to creating projects on the blockchain. The SAFETREES team is a group of developers, environmentalists, and crypto enthusiasts who have been involved in crypto projects in the past and bring a wealth of knowledge to the project to make it the biggest success possible. The co-founders have strong academic backgrounds in computer science and agricultural engineering, with PhD and MSc degrees. They are also involved in developing projects in Africa and Southeast Asia.

#### How and Where to Buy TREES token ?

The TREES token is now live on the Binance mainnet. The token address for TREES is 0xd3b77ac07c963b8cead47000a5208434d9a8734d. Be careful not to purchase any other token with a different smart contract address (as this can easily be faked). We strongly advise you to be vigilant and stay safe throughout the launch. Don't let the excitement get the best of you.

Just be sure you have enough BNB in your wallet to cover the transaction fees.

You will have to first buy one of the major cryptocurrencies, usually either Bitcoin (BTC), Ethereum (ETH), Tether (USDT), Binance (BNB)…

We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.

Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer to buy one of the major cryptocurrencies.

Step by Step Guide : What is Binance | How to Create an account on Binance (Updated 2021)

Next step

You need a wallet to connect to the Pancakeswap decentralized exchange; here we use the Metamask wallet.

If you don’t have a Metamask wallet, read this article and follow the steps

Transfer \$BNB to your new Metamask wallet from your existing wallet

Next step

Connect Metamask Wallet to Pancakeswap Decentralized Exchange and Buy, Swap TREES token

Contract: 0xd3b77ac07c963b8cead47000a5208434d9a8734d

The top exchange for trading in TREES token is currently Pancakeswap v2

Find more information TREES

🔺DISCLAIMER: The Information in the post isn’t financial advice, is intended FOR GENERAL INFORMATION PURPOSES ONLY. Trading Cryptocurrency is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.

I hope this post will help you. Don’t forget to leave a like, comment and sharing it with others. Thank you!

#blockchain #bitcoin #trees #safetrees

1623066480

## JavaScript Data Structures and Algorithms (Search Algorithms, Part 1)

Hello! Today I would like to review what I’ve learned about search algorithms and the importance of each one in computer programming. I’ve never been great at writing preambles, so without further ado, let’s get started.

It’s safe to say we’ve all used some kind of search engine when browsing the web. Whether it’s Google, Bing, DuckDuckGo, etc., we have experienced the benefits of their search algorithms, individually and as a society. But how do search engines conduct these searches for optimized results from around the web? And how do they determine how results get sorted? The answer is more complicated than fundamental algorithms, though those still exist at the core of these search engine behemoths.

## Linear Search

As I mentioned, Google's search algorithms are more sophisticated than a simple lookup() method, though such methods are still used at the most rudimentary level. One thing I can say confidently is that this first algorithm is not how Google executes searches. **Linear search** algorithms scan a structure such as an array from one end to the other to find a match to the desired value. For example, if I had declared an array as [0, 1, 2, 3, 4, 5, 6] and ran a linear search for the value 6, the search would start at index [0] and then visit every index in the array, comparing each index's value with our target value, until a match is found.

Linear search is arguably the most rudimentary of search algorithms. But like every other algorithm, trade-offs exist when implementing it as a solution. This algorithm would not be a great fit for an array with 1,000,000 indices, given its time complexity. A linear search has a worst-case time complexity of O(n), or linear time: the running time grows in direct proportion to the size of the data structure. An array with one index can be searched in constant time, but finding a value at index 300,000 of a 1,000,000-index array takes 300,001 comparisons, which is still O(n) in the worst case. Fortunately, we needn't conduct linear searches through large amounts of data; there are, indeed, more optimal solutions available.
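The scan described above is a few lines of JavaScript (a minimal sketch; returning -1 for a miss is a common convention, mirroring Array.prototype.indexOf):

```javascript
// Linear search: visit every index in order, comparing each value
// with the target. Worst case O(n) comparisons.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // found: return the index
  }
  return -1; // not found
}
```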

## Binary Search

Binary search takes its name from the way it repeatedly splits a sorted search space in two, which is the same principle that underlies the binary search tree. Binary search requires a sorted array, and a sorted array also lets us build a binary search tree efficiently: the root node is defined as the median value of the sorted array. Taking the sorted array [1, 2, 3, 4, 5, 6, 7], we can build a binary tree from it, which is more comprehensible for understanding and writing search algorithms. Does this look familiar to you?

This median key's value is compared with the search value to see whether it is greater or less than the target. Based on this comparison, a binary search algorithm then searches only the values to the left of the median if the target is less, or to the right if it is greater. By now I hope you recognize that the lookup() method in a binary search tree works in exactly this manner: the median value of a sorted array corresponds to the root node of a binary search tree, and lookup() for both requires that the data be sorted. Unlike a linear scan, binary search visits only one value per level, halving the remaining search space at each step. Given the sorted values [1, 2, 3, 5, 10, 15, 25, 30, 40, 45, 50], we would build a binary search tree with 15 as the root, keys smaller than a node going to its left and larger keys to its right. Searching for the node with value 45, the algorithm goes right from 15, right again from 40 (the median of the right half), and finds 45. An array representation of the nodes visited would look like this: [15, 40, 45].
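The halving logic on a plain sorted array can be sketched as follows (a minimal illustration; as with linearSearch, -1 signals a miss):

```javascript
// Binary search on a sorted array: compare the target against the
// median of the current range, then discard the half that cannot
// contain it. Worst case O(log n) comparisons.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2); // median of the current range
    if (arr[mid] === target) return mid;   // found the target
    if (arr[mid] < target) lo = mid + 1;   // search the right half
    else hi = mid - 1;                     // search the left half
  }
  return -1; // not found
}
```

Searching the array from the text for 45 inspects 15, then 40, then 45, matching the three-node traversal described above.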

lookup(), as implemented in a binary search tree.

That's all for now! In the next post, I will go over two more search algorithms, breadth-first search and depth-first search, and how they are implemented in both trees and graphs. Be sure to check out https://visualgo.net for powerful visualizations of data structures and their algorithms, along with step-by-step walkthroughs of each one. It's probably the most useful learning aid I've come across.

#data-structures #trees #algorithms #javascript