
Wordtokenizers.jl: High Performance Tokenizers for Natural Language

WordTokenizers

Some basic tokenizers for Natural Language Processing.

Installation:

As per standard Julia package installation:

pkg> add WordTokenizers

Usage

The normal way to use this package is to call tokenize(str) to split up a string into words or split_sentences(str) to split up a string into sentences. Maybe even tokenize.(split_sentences(str)) to do both.

tokenize and split_sentences are configurable functions that call one of the tokenizers or sentence splitters defined below. They have sensible defaults set, but you can override the method used by calling set_tokenizer(func) or set_sentence_splitter(func), passing in your preferred function func from the list below (or from elsewhere). Configuring them this way will trigger a method-overwritten warning and recompilation of any methods that use them.

This means that if you are using a package that uses WordTokenizers.jl to do tokenization/sentence splitting via the default methods, changing the tokenizer/splitter will change the behavior of that package. This is a feature of CorpusLoaders.jl. If, as a package author, you don't want to allow the user to change the tokenizer in this way, you should call the tokenizer you want explicitly, rather than using the tokenize method.

Example Setting Tokenizer (TinySegmenter.jl)

You might like, for example, to use TinySegmenter.jl's tokenizer for Japanese text. We do not include TinySegmenter in this package, because making use of it within WordTokenizers.jl is trivial: just import TinySegmenter; set_tokenizer(TinySegmenter.tokenize).

Full example:

julia> using WordTokenizers

julia> text = "私の名前は中野です";

julia> tokenize(text) |> print # Default tokenizer
["私の名前は中野です"]

julia> import TinySegmenter

julia> set_tokenizer(TinySegmenter.tokenize)

julia> tokenize(text) |> print # TinySegmenter's tokenizer
SubString{String}["私", "の", "名前", "は", "中野", "です"]

(Word) Tokenizers

The word tokenizers basically assume sentence splitting has already been done. (A short comparison example follows the list below.)

  • Poorman's tokenizer: (poormans_tokenize) Deletes all punctuation and splits on spaces. (In some ways worse than just using split.)
  • Punctuation space tokenizer: (punctuation_space_tokenize) Marginally improved version of the poorman's tokenizer; only deletes punctuation occurring outside words.
  • Penn Tokenizer: (penn_tokenize) Robert MacIntyre's original tokenizer used for the Penn Treebank. Splits contractions.
  • Improved Penn Tokenizer: (improved_penn_tokenize) NLTK's improved Penn Treebank tokenizer. Very similar to the original, with some improvements on punctuation and contractions. This matches NLTK's nltk.tokenize.TreeBankWordTokenizer.tokenize.
  • NLTK Word tokenizer: (nltk_word_tokenize) NLTK's even more improved version of the Penn tokenizer. This version has better Unicode handling and some other changes. This matches the most commonly used nltk.word_tokenize, minus the sentence-tokenizing step.

(To me it seems like a weird historical thing that NLTK has 2 successive variations on improving the Penn tokenizer, but for now, I am matching it and having both. See [NLTK#2005].)

  • Reversible Tokenizer: (rev_tokenize and rev_detokenize) This tokenizer splits on punctuation, spaces and special symbols. The generated tokens can be de-tokenized with the rev_detokenize function back into the state before tokenization.
  • TokTok Tokenizer: (toktok_tokenize) A simple, general tokenizer, where the input has one sentence per line; thus only the final period is tokenized. This is an enhanced version of the original toktok tokenizer. It has been tested on, and gives reasonably good results for, English, Persian, Russian, Czech, French, German, Vietnamese, Tajik, and a few others. (default tokenizer)
  • Tweet Tokenizer: (tweet_tokenizer) NLTK's casual tokenizer, designed for tweets. Apart from being Twitter-specific, this tokenizer has good handling for emoticons and other web aspects like support for HTML entities. This closely matches NLTK's nltk.tokenize.TweetTokenizer.
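To get a quick feel for how these differ, here is a small comparison calling two of the tokenizers directly. The printed output is illustrative (the array type prefix is omitted, and exact tokens may vary slightly between package versions):

julia> using WordTokenizers

julia> s = "Don't tokenize this; it's complicated.";

julia> poormans_tokenize(s) |> print    # punctuation deleted, then split on spaces
["Dont", "tokenize", "this", "its", "complicated"]

julia> penn_tokenize(s) |> print        # contractions split, punctuation kept as tokens
["Do", "n't", "tokenize", "this", ";", "it", "'s", "complicated", "."]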

Sentence Splitters

We currently only have one sentence splitter.

  • Rule-Based Sentence Splitter: (rulebased_split_sentences) Uses the rule that periods, question marks, and exclamation marks followed by whitespace end sentences, together with a large list of exceptions.

split_sentences is exported as an alias for the most useful sentence splitter currently implemented (which at the moment is the only sentence splitter: rulebased_split_sentences). (default sentence splitter)
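As a sketch of swapping the sentence splitter (mirroring the set_tokenizer example above): the newline-based splitter here is a made-up stand-in purely for illustration, the method-overwritten warning is omitted, and the displayed output is illustrative.

julia> using WordTokenizers

julia> newline_split(str) = split(str, '\n')  # hypothetical custom splitter
newline_split (generic function with 1 method)

julia> set_sentence_splitter(newline_split)

julia> split_sentences("First line.\nSecond line.")
2-element Array{SubString{String},1}:
 "First line."
 "Second line."

julia> set_sentence_splitter(rulebased_split_sentences)  # restore the default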

Example

julia> tokenize("The package's tokenizers range from simple (e.g. poorman's), to complex (e.g. Penn).") |> print
SubString{String}["The", "package", "'s", "tokenizers", "range", "from", "simple", "(", "e.g.", "poorman", "'s", ")",",", "to", "complex", "(", "e.g.", "Penn", ")", "."]
julia> text = "The leatherback sea turtle is the largest, measuring six or seven feet (2 m) in length at maturity, and three to five feet (1 to 1.5 m) in width, weighing up to 2000 pounds (about 900 kg). Most other species are smaller, being two to four feet in length (0.5 to 1 m) and proportionally less wide. The Flatback turtle is found solely on the northerncoast of Australia.";

julia> split_sentences(text)
3-element Array{SubString{String},1}:
 "The leatherback sea turtle is the largest, measuring six or seven feet (2 m) in length at maturity, and three to five feet (1 to 1.5 m) in width, weighing up to 2000 pounds (about900 kg). "
 "Most other species are smaller, being two to four feet in length (0.5 to 1 m) and proportionally less wide. "
 "The Flatback turtle is found solely on the northern coast of Australia."

julia> tokenize.(split_sentences(text))
3-element Array{Array{SubString{String},1},1}:
 SubString{String}["The", "leatherback", "sea", "turtle", "is", "the", "largest", ",", "measuring", "six"  …  "up", "to", "2000", "pounds", "(", "about", "900", "kg", ")", "."]
 SubString{String}["Most", "other", "species", "are", "smaller", ",", "being", "two", "to", "four"  …  "0.5", "to", "1", "m", ")", "and", "proportionally", "less", "wide", "."]
 SubString{String}["The", "Flatback", "turtle", "is", "found", "solely", "on", "the", "northern", "coast", "of", "Australia", "."]

Experimental API

I am trying out an experimental API where these are added as dispatches to Base.split.

So split(foo, Words) is the same as tokenize(foo), and split(foo, Sentences) is the same as split_sentences(foo).
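A rough sketch of what that looks like in the REPL; the API is experimental and may change, and the output shown is illustrative:

julia> using WordTokenizers

julia> split("I cannot stand cabbage. It is vile.", Sentences) |> print
SubString{String}["I cannot stand cabbage. ", "It is vile."]

julia> split("It is vile.", Words) |> print
SubString{String}["It", "is", "vile", "."]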

Using TokenBuffer API for Custom Tokenizers

We offer a TokenBuffer API and supporting utility lexers for high-speed tokenization.

Writing your own TokenBuffer tokenizers

TokenBuffer turns a string into a readable stream, used for building tokenizers. Utility lexers such as spaces and number read characters from the stream and into an array of tokens.

Lexers return true or false to indicate whether they matched in the input stream. They can, therefore, be combined easily, e.g.

spacesornumber(ts) = spaces(ts) || number(ts)

either skips whitespace or parses a number token, if possible.

The simplest useful tokenizer splits on spaces.

using WordTokenizers: TokenBuffer, isdone, spaces, character

function tokenise(input)
    ts = TokenBuffer(input)
    while !isdone(ts)
        spaces(ts) || character(ts)
    end
    return ts.tokens
end

tokenise("foo bar baz") # ["foo", "bar", "baz"]

Many prewritten components for building custom tokenizers can be found in src/words/fast.jl and src/words/tweet_tokenizer.jl. These components can be mixed and matched to create more complex tokenizers.

Here is a more complex example.

julia> using WordTokenizers: TokenBuffer, isdone, character, spaces # Present in fast.jl

julia> using WordTokenizers: nltk_url1, nltk_url2, nltk_phonenumbers # Present in tweet_tokenizer.jl

julia> function tokenize(input)
           urls(ts) = nltk_url1(ts) || nltk_url2(ts)

           ts = TokenBuffer(input)
           while !isdone(ts)
               spaces(ts) && continue
               urls(ts) ||
               nltk_phonenumbers(ts) ||
               character(ts)
           end
           return ts.tokens
       end
tokenize (generic function with 1 method)

julia> tokenize("A url https://github.com/JuliaText/WordTokenizers.jl/ and phonenumber +0 (987) - 2344321")
6-element Array{String,1}:
 "A"
 "url"
 "https://github.com/JuliaText/WordTokenizers.jl/" # URL detected.
 "and"
 "phonenumber"
 "+0 (987) - 2344321" # Phone number detected.

Tips for writing custom tokenizers and your own TokenBuffer Lexer

  1. The order in which the lexers are written needs to be taken care of in some cases.

For example: 987-654-3210 matches as a phone number as well as numbers, but number will only match up to 987 and split after it.

julia> using WordTokenizers: TokenBuffer, isdone, character, spaces, nltk_phonenumbers, number

julia> order1(ts) = number(ts) || nltk_phonenumbers(ts)
order1 (generic function with 1 method)

julia> order2(ts) = nltk_phonenumbers(ts) || number(ts)
order2 (generic function with 1 method)

julia> function tokenize1(input)
           ts = TokenBuffer(input)
           while !isdone(ts)
               order1(ts) ||
               character(ts)
           end
           return ts.tokens
       end
tokenize1 (generic function with 1 method)

julia> function tokenize2(input)
           ts = TokenBuffer(input)
           while !isdone(ts)
               order2(ts) ||
               character(ts)
           end
           return ts.tokens
       end
tokenize2 (generic function with 1 method)

julia> tokenize1("987-654-3210") # number(ts) || nltk_phonenumbers(ts)
5-element Array{String,1}:
 "987"
 "-"
 "654"
 "-"
 "3210"

julia> tokenize2("987-654-3210") # nltk_phonenumbers(ts) || number(ts)
1-element Array{String,1}:
 "987-654-3210"

  2. BoundsErrors and other errors from edge cases are the most common problems and need to be taken care of while writing TokenBuffer lexers.

  3. For some TokenBuffer ts, use flush!(ts) over push!(ts.tokens, input[i:j]) to make sure that characters already in the buffer (i.e. ts.buffer) also get flushed out as separate tokens.

julia> using WordTokenizers: TokenBuffer, flush!, spaces, character, isdone

julia> function tokenize(input)
           ts = TokenBuffer(input)

           while !isdone(ts)
               spaces(ts) && continue
               my_pattern(ts) ||
               character(ts)
           end
           return ts.tokens
       end

julia> function my_pattern(ts) # Matches the pattern for 2 continuous `_`
           ts.idx + 1 <= length(ts.input) || return false

           if ts[ts.idx] == '_' && ts[ts.idx + 1] == '_'
               flush!(ts, "__") # Using flush!
               ts.idx += 2
               return true
           end

           return false
       end
my_pattern (generic function with 1 method)

julia> tokenize("hi__hello")
3-element Array{String,1}:
 "hi"
 "__"
 "hello"

julia> function my_pattern(ts) # Matches the pattern for 2 continuous `_`
           ts.idx + 1 <= length(ts.input) || return false

           if ts[ts.idx] == '_' && ts[ts.idx + 1] == '_'
               push!(ts.tokens, "__") # Without using flush!
               ts.idx += 2
               return true
           end

           return false
       end
my_pattern (generic function with 1 method)

julia> tokenize("hi__hello")
2-element Array{String,1}:
 "__"
 "hihello"

Statistical Tokenizer

The SentencePiece Unigram Encoder is basically a re-implementation of the SentencePiece processor in Julia. It can use the vocab file generated by the SentencePiece library, containing both the vocab and the log probabilities.

For more detail about the implementation, refer to the blog post here.

Note:

  • SentencePiece escapes the whitespace with a meta symbol "▁" (U+2581).
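To illustrate that escaping, here is a rough conceptual sketch of undoing it by hand; the package's own sentence_from_tokens (shown further below) is the supported way to do this.

julia> tokens = ["▁i", "▁love", "▁the", "▁julia", "▁language"];

julia> strip(replace(join(tokens), "▁" => " "))
"i love the julia language"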

Pretrained

WordTokenizers provides pretrained vocab files for ALBERT (both version 1 and version 2):

julia> subtypes(PretrainedTokenizer)
2-element Array{Any,1}:
 ALBERT_V1
 ALBERT_V2

julia> tokenizerfiles(ALBERT_V1)
4-element Array{String,1}:
 "albert_base_v1_30k-clean.vocab"   
 "albert_large_v1_30k-clean.vocab"  
 "albert_xlarge_v1_30k-clean.vocab" 
 "albert_xxlarge_v1_30k-clean.vocab"

DataDeps will handle all the downloading for us. You can also create an issue or PR for other pretrained models, or load a vocab file directly by providing its path to the load function.

julia> spm = load(Albert_Version1) #loading Default Albert-base vocab in Sentencepiece
WordTokenizers.SentencePieceModel(Dict("▁shots"=>(-11.2373, 7281),"▁ordered"=>(-9.84973, 1906),"dev"=>(-12.0915, 14439),"▁silv"=>(-12.6564, 21065),"▁doubtful"=>(-12.7799, 22569),"▁without"=>(-8.34227, 367),"▁pol"=>(-10.7694, 4828),"chem"=>(-12.3713, 17661),"▁1947,"=>(-11.7544, 11199),"▁disrespect"=>(-13.13, 26682)…), 2)

julia> tk = tokenizer(spm, "i love the julia language") #or tk = spm("i love the julia language")
5-element Array{String,1}:
 "▁i"       
 "▁love"
 "▁the"    
 "▁julia"   
 "▁language"

julia> subword = tokenizer(spm, "unfriendly")
2-element Array{String,1}:
 "▁un"
 "friendly"

julia> para = spm("Julia is a high-level, high-performance dynamic language for technical computing")
17-element Array{String,1}:
 "▁"          
 "J"          
 "ulia"       
 "▁is"        
 "▁a"         
 "▁high"      
 "-"          
 "level"      
 ","          
 "▁high"      
 "-"          
 "performance"
 "▁dynamic"   
 "▁language"  
 "▁for"       
 "▁technical" 
 "▁computing" 

Indices are usually used for deep learning models. The indices of the special tokens in ALBERT are given below:

1 ⇒ [PAD]
2 ⇒ [UNK]
3 ⇒ [CLS]
4 ⇒ [SEP]
5 ⇒ [MASK]

julia> ids_from_tokens(spm, tk)
5-element Array{Int64,1}:
   32
  340
   15
 5424
  817
#we can also get sentences back from tokens
julia> sentence_from_tokens(tk)
 "i love the julia language"

julia> sentence_from_tokens(subword)
 "unfriendly"

julia> sentence_from_tokens(para)
 "Julia is a high-level, high-performance dynamic language for technical computing"

Contributing

Contributions, in the form of bug-reports, pull-requests, additional documentation are encouraged. They can be made to the GitHub repository.

We follow the ColPrac guide for collaborative practices. New contributors should make sure to read that guide.

All contributions and communications should abide by the Julia Community Standards.

Software contributions should follow the prevailing style within the code-base. If your pull request (or issue) is not getting responses within a few days, do not hesitate to "bump" it by posting a comment such as "Any update on the status of this?". Sometimes GitHub notifications get lost.

Support

Feel free to ask for help on the Julia Discourse forum, or in the #natural-language channel on julia-slack. (Which you can join here). You can also raise issues in this repository to request improvements to the documentation.

Author: JuliaText
Source Code: https://github.com/JuliaText/WordTokenizers.jl 
License: View license

What is Tokenomics | The Real Value of a Token

In this post, you'll learn what tokenomics is and why it matters.

So much about the crypto ecosystem is novel and disruptive, including its vocabulary, which features entirely new words, invented to describe entirely new concepts. Tokenomics is a great example. It is what is known as a portmanteau, a word that blends the meaning of two other words - tokens and economics. It fills the empty space in the dictionary to describe how the mechanics of cryptocurrency function - supply, distribution and incentive structure - relate to value.

What is Cryptocurrency?

Although there are many different types of cryptocurrencies, they all have one thing in common: they operate on blockchain technology, making them decentralized. Decentralization of financial operations through cryptocurrencies has several efficiencies over the traditional financial system, including:

  • Cuts out almost all the overhead costs associated with banks
  • Less expensive transactions that can be sent and received internationally
  • Inflation or finite supply that’s written into code — no need to trust the Federal Reserve
  • Financial derivatives like trading strategies and loans can be coded directly onto certain cryptocurrency blockchains, replacing the need for financial intermediaries.

The largest cryptocurrency is Bitcoin and it’s used as a “digital gold.” Essentially, Bitcoin is a commodity used as a store of value. Ethereum is the 2nd-largest cryptocurrency with a market cap of $170 billion. Developers can develop smart contracts on Ethereum’s blockchain to create decentralized alternatives to traditional banking functions, like lending and trading.

How Does Cryptocurrency Work?

Cryptocurrencies are digital assets that are powered on the blockchain. Blockchain technology stores a ledger of every transaction of the cryptocurrency on every node powering the blockchain. Nodes are computers that are connected to Bitcoin’s network to  mine Bitcoin. If one of these miners tries to enter false transactions, it will be nullified by the correct ledger.

The correct ledger is determined by the majority of miners’ records. In theory, you could hack a blockchain by controlling 51% of the cryptocurrency’s network in a process called a  51% attack. However, this process is economically infeasible and would require an extremely choreographed hack with billions, if not trillions, of dollars worth of computer hardware.

To transact with a cryptocurrency, you need to have a set of public and private keys. These keys are like passwords generated by your cryptocurrency wallet. Your public key is connected to your wallet’s address and allows people to send you cryptocurrency. Your private key is used to approve transactions being sent from your wallet –– only you have access to your private keys.

Contrary to popular belief, many cryptocurrencies don’t have a finite supply. Bitcoin’s total supply is capped at 21 million coins, but many altcoins have a set inflation rate with no cap on total supply, like Ethereum.

What is tokenomics?

The choice of the term tokenomics' constituent parts - token and economics - may seem a bit confusing if your assumption is that cryptocurrencies are simply new forms of internet money. In reality, crypto can apply to any form of value transfer. 

This is why the word token is used, because units of cryptocurrency value can function as money, but also give the holder specific utility. Just as a games arcade or laundromat may require that you use a specific token to operate their machines, many  blockchain based services will be powered by their own token, which unlocks specific privileges or rewards:

  • DEFI - users are rewarded with tokens for activity (borrowing/lending), or tokens are created as synthetic versions of other existing cryptocurrencies
  • DAOs - token holders get voting rights within Decentralised Autonomous Organisations, new digital communities governed by Smart Contracts
  • Gaming/Metaverse - where game activity and in-game items are represented by tokens and can have exchangeable value

If we add this understanding of cryptocurrency as tokens to the traditional definition of economics - measuring the production, distribution and consumption of goods and services - we can break down what tokenomics within cryptocurrency measures into:

  1. how tokens are produced via their supply schedule, using a specific set of supply metrics
  2. how tokens are distributed among holders
  3. the incentives that encourage usage and ownership of tokens

We can start to unpack these aspects of tokenomics by looking at the supply schedule for the first ever cryptocurrency, Bitcoin.

1. Supply schedule

Bitcoin went live in January 2009,  based on a set  of rules - the Bitcoin Protocol - that included a clearly defined supply schedule:

  • New bitcoins are created through Mining. Miners compete to process a new block of transactions by committing computing power to solve a mathematical puzzle. They do this by running a set algorithm hoping to find the answer - this is known as Proof of Work.
  • A new block is mined roughly every ten minutes. The system is self-regulating, through a difficulty adjustment of the mining algorithm every two weeks, to maintain the steady rate of block creation.
  • The mining reward began at 50 BTC in 2009, but halves every 210,000 blocks - roughly four years. There have been three so-called halvings - the last in May 2020 - with the block reward now set at 6.25 BTC.
     
  • This fixed supply schedule will continue until a maximum of 21 million bitcoin are created. (A quick sanity check of this cap follows the list below.)
  • There is no other way that bitcoin can be created
  • Along with the block reward, successful Miners also receive fees that each transaction pays to be sent over the network
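As that sanity check, summing the block rewards across halving eras converges on the 21 million cap. This is a minimal Julia sketch of the arithmetic only, ignoring satoshi-level rounding:

# 210,000 blocks per era; the reward starts at 50 BTC and halves each era.
blocks_per_era = 210_000
initial_reward = 50.0
total_supply = sum(blocks_per_era * initial_reward / 2^i for i in 0:32)  # rewards are negligible beyond ~33 eras
println(total_supply)  # ≈ 2.1e7, i.e. effectively the 21 million maximum supply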

The importance of Bitcoin's fixed supply schedule to perceived value cannot be overstated.  It enables us to know Bitcoin's inflation rate over time - its programmed scarcity.

It also tells us that as of January 2022, 90% of Bitcoin supply has been mined and that the maximum supply will be reached in around 2140, at which point the only reward Miners will receive will be the transaction fees.

The supply schedule is a critical piece of the tokenomics puzzle. If a coin has a maximum supply this tells you that over time inflation will decline to zero, at the point the last coins are mined (see the graph above). This quality is described as disinflationary - as supply increases but at a decreasing marginal rate - and is a valuable characteristic for something to function as a store of value. 

If there is no maximum supply, this means that tokens will keep being created indefinitely, potentially diluting value. This is true of the existing fiat monetary system, and it is one of its biggest criticisms, along with the uncertainty that surrounds changes in supply.

To know whether the supply of fiat money is expanding or contracting - with the obvious knock-on effects on its purchasing power and the wider economy - you have to wait anxiously for the outcome of periodic closed-door Federal Reserve or ECB meetings. Contrast that with the certainty that Bitcoin's fixed supply schedule provides, which even allows scarcity-based models to predict its value.

Supply Metrics

As the first example of a cryptocurrency, Bitcoin effectively introduced the concept of tokenomics, along with a set of metrics that break down the supply schedule of any cryptocurrency into key components that give valuable insight into potential, or comparative, value.

These common yardsticks are published on popular crypto price comparison sites like Coinmarketcap or Coingecko as a complement to the headline price and volume data. 

  • Maximum Supply - A hard cap on the total number of coins that will ever exist. In the case of Bitcoin 21 million.
  • Disinflationary -  Coins with a maximum supply are described as disinflationary or deflationary, because the marginal supply increase decreases over time.
  • Inflationary - Coins without a maximum supply are described as inflationary because the supply will constantly grow - inflate - over time, which may decrease the purchasing power of existing coins.
  • Total Supply - The total number of coins in existence right now.
  • Circulating Supply - The best guess of the total number of coins circulating in the public’s hands right now. In the case of Bitcoin, Total Supply and Circulating Supply are the same thing because its distribution was broadcast from day one.
  • Market Capitalisation - The Circulating Supply multiplied by current price; this is the main metric for measuring the overall value and importance of a cryptocurrency, just as it is for public companies which multiply share price by number of tradable shares.
  • Generally abbreviated to Marketcap, it is often used as a proxy measure for value, and though it is helpful in a comparative sense, its reliance on price means that it reflects what the last person was prepared to pay, which is a very different thing to estimating fundamental value.
  • Fully Diluted Market Capitalisation - The maximum supply multiplied by current price; this projects an overall value of the fully supplied coin, but based on current price.

Read more: Using CoinMarketCap Like A Pro | A Guide to Coinmarketcap (CMC)

2. Supply Distribution

Whereas the Supply Schedule tells you what the currently Circulating Supply is and the rate at which coins are being created, Supply Distribution takes into consideration how coins are spread among addresses, which  can have a big influence on value, and is another important part of tokenomics.

Given cryptocurrencies like Bitcoin are open source, this information is freely available to anyone with an internet connection and some data analysis skills. 

The raw supply distribution for Bitcoin doesn't look particularly healthy, with less than 1% of addresses owning 86% of coins, which would suggest that it is vulnerable to the actions of a small number of controlling addresses.

But this picture is somewhat misleading, as an individual will have numerous addresses, while one address might belong to an entity - like an exchange - which holds custody of Bitcoin on behalf of potentially millions of users.

Analysis by blockchain analytics provider, Glassnode, suggests that concentration is nowhere near as dramatic, and that the relative amount of bitcoin held by smaller entities has been consistently growing over time.

So though the Bitcoin blockchain is transparent, address ownership is pseudonymous, which means that we can infer certain information about the concentration of crypto ownership, and use this to provide insight into value, but never really know the true supply distribution at a granular level.

This has spawned an entirely new field of analysis called on-chain analytics  - the closest thing to blockchain economics - which uses patterns in address behaviour to infer future price movement.

Read more: What is Bitcoin Halving | How it Works and Why Does it Matter?

Lost or Burned Coins 

Another factor that further muddies the waters around supply distribution is the number of coins that can never be spent because their Private Keys are lost, or they have been sent to a burn address. 

Though there are some well-publicised cases where significant amounts of bitcoin have been lost, it is impossible to put an exact figure on the total amount of lost coins for any cryptocurrency.

Dormancy - a measure of how long addresses have been inactive - is the main hint that on-chain analysts use to calculate how many coins are genuinely lost. Studies estimate that around 3 million bitcoin are irretrievable, which equates to over 14% of the Maximum Supply.

This is an important consideration as price is a function of demand and supply. If the supply of available coins is actually smaller than thought, but demand is unchanged, existing coins become more valuable. This is another reason why Marketcap can be misleading, because it cannot account for lost or burned coins.

Intentionally burning Bitcoin - by sending to an address that is known to be irretrievable -  is for obvious reasons, very rare. Burning coins is, however, an important concept in inflationary coins as way to counteract supply growth and the negative impact on price.

Unfortunately, burning generally happens as a manual action, without warning, because it is associated with price increases.  Burning can be used  programmatically to reduce supply inflation in uncapped cryptocurrencies, as we'll see with Ethereum below.

As if measuring supply distribution data wasn't hard enough, there is another crucial consideration impacting value, that raw data doesn't account for, which is how coins can be shared out before a project is even launched.

If we compare how crypto's two dominant currencies - Bitcoin and Ether - were distributed at launch, we can understand why that is so important.

Bitcoin's Sacred Launch

Bitcoin was the first cryptocurrency, created in 2008. We don't know who created it, we just have a pseudonym, Satoshi Nakamoto, who disappeared soon after it was up and running. Their last public communication was in December, 2010. 

The creation of Bitcoin is sometimes called a Sacred Launch, because the manner in which it started is exactly how it runs now. No deals were cut, no venture capitalists were involved, no shareholders. No initial distribution to vested parties.

Given what we now know about the relationship of supply distribution to value, Bitcoin's Sacred Launch is a significant part of its appeal. But though Satoshi didn't award his/herself a huge stack of coins for creating Bitcoin, they had to play the role of sole Miner until others were convinced to do so, and therefore were earning the 50 BTC reward every 10 minutes for a considerable time.

Much is made of what is described as Satoshi’s coins, the vast amount of bitcoin earned when he/she was the only one mining it in the months after launch. 

The addresses that hold it amount to around 1.1 million coins, none of which have ever moved, accounting for one of the four addresses holding 100,000 to 1 million bitcoin in the chart above.

Even rumours that Satoshi's coins have moved can have a huge impact on price, showing that tokenomics is not just a matter of numbers, but includes elements of behavioural analysis, inference and game theory.

Though a significant amount of bitcoin is definitely in a few hands, its Sacred Launch and permissionless nature are regarded as features, rather than bugs.

Most of the cryptocurrencies that followed however, took a different approach to their launch and how supply was initially distributed. 

Ethereum & the concept of Premine

It turns out that the initial approach taken by Satoshi was the exception, rather than the rule, largely because the majority of cryptocurrencies that followed were created by a known team, and supported by early investors, both of whom were rewarded with coins before the network was up and running. 

One of the reasons why skeptics think crypto has no value is because of the idea that, given its virtual nature, it can just be created out of thin air. In many cases that is actually what happens with the initial distribution of a new coin, aka a Premine.

The idea of a Premine began with Ethereum. Rather than a Sacred Launch, Ethereum's founders decided on an initial distribution of Ether - the native token - that included those who were part of the original team, developers and community, with a portion set aside for early investors through what was known as the Initial Coin Offering (ICO), held in 2014.

The Premine was essentially crypto’s way of using a traditional form of equity distribution to reward entrepreneurs with a stake in their creation, but can put a significant proportion of the overall supply in very few hands, and depending on what restrictions are placed on selling, can tell you a lot about how focused the founders are on creating long term value, or short term personal gain. 

The ICO used a completely new approach to investing in a tech start-up, attempting to give everyone an equal chance to invest by setting aside a fixed amount on a first-come-first-served basis, which - in the case of Ethereum's launch - simply required an investor to send bitcoin to a specific address.

This was intended to counter the privileged access that venture capital has to privately investing in emerging companies. That was the theory, things didn't quite work out in practice.

Unfortunately Premines and ICOs quickly got out of control, and the idea of democratising early-stage investment soon evaporated. Initial allocations incentivised hype and over-promise, while ICOs were beset by FOMO and greed.

  • If you had enough ETH you could game the system by paying ridiculous fees & frontrunning
  • In many cases ICOs were staggered, with privileged access to early investors or brokers

Premines and visible founders are two of the biggest arguments used by Bitcoin Maximalists who feel that only Bitcoin provides genuine decentralisation because it has no single controlling figure, and has a vast network of Nodes that all have to agree on potential rule changes.

This is why tokenomics must include some measure of decentralisation, because even if a cryptocurrency has a maximum supply, its founders are capable of simply rewriting the rules in their favour, or simply disappearing in a so-called rug pull.

Address distribution should be a consideration when trying to understand what value a cryptocurrency has. The more diverse ownership is, the lower the chance that price can be impacted by one holder or a small group of holders.

Node Distribution

Just as concentration of supply within a few hands is unhealthy, if there are only a small number of miners/validators, the threshold to force a change to the supply schedule is relatively low, which could also devastate value.

In the same way, the distribution of those that run the network - the Nodes and Validators - has a crucial influence. Nodes enforce the rules that govern how a cryptocurrency works, including the supply schedule and consensus method already mentioned.

If there is only a small number of Nodes, they can collude to enforce a different version of those rules or to gain a majority agreement on a different version of the blockchain record the network holds (aka a 51% attack).

Either scenario means there is no certainty that the tokenomics can be relied upon, which negatively impacts potential value.


3. Tokenomics & Incentives

Another important consideration of tokenomics are the incentives users have to play some role in a cryptocurrency's function. The most explicit reward is that provided for processing new blocks of transactions, which differs depending on the consensus method used; the two main methods having already been introduced.

Mining (PoW) - Being rewarded for processing transactions by running mining algorithms for Proof of Work blockchains, like Bitcoin

Validating/Staking (PoS) - Being rewarded for validating transactions by staking funds in Proof of Stake blockchains.

Blockchains are self-organising. They don't recruit or contract Miners or Validators, they simply join the network because of the economic incentive for providing a service. The byproduct of more Nodes is an increase in the resilience and independence of the network.

Being directly involved as a Miner or Validator requires technical knowledge, and up-front costs, such as  specialist equipment, which in the case of Bitcoin means industrial scale operations beyond the budget of solo miners, and in the case of Ethereum, a minimum stake of 32 ETH.

But as the crypto ecosystem has become more sophisticated opportunities to passively generate income, by indirectly staking and mining, have grown dramatically.

Users can simply stake funds for PoS chains with a few clicks within a supported wallet and generate a passive income, or add their Bitcoin to a Mining Pool to generate a share of the aggregate mining rewards.

Ethereum will experience a significant change in its tokenomics in 2022, changing from a Proof of Work consensus mechanism to Proof of Stake. ETH holders have been able to stake since December 2020, when Ethereum 2.0 launched.

Total Value Locked (TVL) provides a measure of how much Ethereum has been staked, while figures are also available for how much ETH is now being burned, and the impact on overall supply. 

Both these metrics are being interpreted positively by supporters of Ethereum, but its detractors simply say that the ability to make wholesale changes to its governing principles illustrates weakness, not strength.

How successful chains are at attracting this financial backing has a significant impact on price, especially where funds are locked for a given period as part of the commitment, as this provides price stability.

The impact of fees

Whatever consensus method a cryptocurrency uses, it can only grow if there is demand for transactions from users, which will be influenced by:

  • the cost of making a transaction, how it is calculated & who earns it
  • how fast a transaction is processed

Fees and Miner/Validator revenue are two sides of the same coin, providing a barometer of blockchain usage and health. Low fees can incentivise usage; while an active and growing user base attracts more Miners/Validators, keen to earn fees. This creates network effects, generating value for all participants in a win-win situation.

Fees are especially important where they pay for computational power, rather than just the processing of transactions. This type of blockchain emerged in the years following Bitcoin's launch, starting with Ethereum, known as the world's computer. It processes the majority of transactions related to the growth areas of DEFI and NFTs, but has become a victim of its own success, with its fees - measured in something called GAS - pricing out all but the wealthiest users.

Addressing that challenge is one of the key objectives of the changes in the Ethereum Roadmap, starting with EIP 1559 - aka the London Upgrade - which happened in August 2021.

Not only did the fee estimation process completely change, with the aim of making fees cheaper, the changes to Ethereum's fee structure are also having a significant impact on its tokenomics. Instead of all transaction fees going to Ethereum Miners, a mechanism was introduced to burn a portion of fees turning it from inflationary (no maximum supply cap) to disinflationary. 

Consensus methods and fee structures can therefore, provide important incentives for participation in a blockchain ecosystem, and even directly impact supply, so should be considered as part of tokenomics. There are also a number of other incentives and influences that complete the tokenomics picture.

IEOs, IDOs & Bonding Curves

ICOs failed because they fuelled bad behaviour from both entrepreneurs, with exit scams and untested ideas, and from investors, encouraging short term speculation, rather than actual usage.

What has emerged are more innovative ways to incentivise ownership and usage of tokens - as intended - that learn from these mistakes.

One approach to launching is to negotiate directly with centralised exchanges to ensure they are listed and tap into the existing base of users - known as an IEO - Initial Exchange Offering. This can have a significant impact on ownership distribution and price, as illustrated by the well publicised price boost that coins listed on Coinbase experience. But this is a long way from Bitcoin's organic, decentralised debut.

IEOs put all the power in the hands of the large exchanges, who will pick and choose the coins that they anticipate demand for. But given crypto is about removing the middleman, one of the most interesting developments in coin launches is the IDO - Initial Decentralised Exchange Offering.

An IDO is a programmatic way of listing a new token on a Decentralised Exchange (DEX) using Ethereum Smart Contracts and mathematics to shape incentives for buying and selling through something called a Bonding Curve.

Bonding Curves create a fixed price discovery mechanism based on supply and demand of a new token, relative to the price of Ethereum. Their complexity warrants a completely separate article, but it is enough to know that the shape of bonding curves is relevant to the tokenomics of new ERC20 coins launched on DEXs or DEFI platforms, because it can incentivise the timing of investment.

While bonding curves are a mathematically complex way to incentivise investment in new cryptocurrencies, there are more obvious and cruder approaches, particularly within DEFI, where the focus is on providing interest on tokens as a way to encourage early investment.

APYs & Ponzinomics

DEFI has exploded over the last 18 months, with over $90bn in TVL according to Defi Pulse, but this has also fuelled a mania around APY (annual percentage yield).

Many tokens have no real use case other than incentivising users to buy and stake/lock-up the coin in order to generate early liquidity. This doesn’t reward positive behaviour, but simply creates a race to the bottom, with users chasing ludicrous returns then dumping coins before the interest rates inevitably crash. This approach has been nicknamed Ponzinomics as the ongoing function of the token is unsustainable.

Airdrops

There is another way to reward holders in terms of how much they have actually used the token as it was intended - Airdrops. DEFI projects like Uniswap and 1Inch are good examples, while OpenSea did the same for those most active in minting and trading NFTs.

Airdrops are financed from the initial treasury but unfortunately aren’t built into road maps, as telegraphing them would be self-defeating.

Many savvy investors simply use new DEFI, NFT or Metaverse platforms in the hope, or expectation, of an Airdrop. That makes them relevant to tokenomics, as they will drastically alter the supply distribution of a token, but given the secrecy that surrounds them, they can only be factored in retrospectively.

DAOs & Governance

We’ve already discussed how the concentration of ownership and the network impacts perceived value, given the concern that control rests in a few hands. Even where there is a healthy distribution of holding addresses, they are largely passive, and have no specific influence on how the cryptocurrency functions.

There is a growing move towards crypto projects that are actively run by their communities through DAOs (Decentralised Autonomous Organisations). 

DAOs give holders of the native token the right to actively participate in its governance. Token holders can submit proposals and vote, in proportion to their holdings, on which proposals are accepted. DAOs therefore have a crucial influence on tokenomics because the community can decide to tweak or even rip up the rules.

DAOs are essentially attempts to create a new digital democracy via crypto, and still have a lot of hurdles to overcome, as rational rules have to be written by irrational humans.

Tokenomics & Rational Decision Making

Sensible tokenomics doesn’t guarantee a project will succeed, nor does a blatantly vague token model doom a coin to failure.

For every project that makes huge efforts toward transparent supply schedules, good governance and healthy incentives for using the network, there are hundreds, if not thousands, that have fuzzy or non-existent distribution logic, because sensible tokenomics isn't their aim; they simply want to meme, or hustle their way to a higher market capitalisation.

Coins like Dogecoin or Shiba Inu have crazy supply schedules yet can still generate a huge market cap - bigger than global publicly traded brands - because investors are irrational. 

So studying tokenomics on its own doesn’t mean that you can find cryptocurrencies that will succeed and increase in price, as you have to also understand how other people are making decisions, many of whom have no interest in tokenomics, or even know what it means. 

What tokenomics does give you is a framework to understand how a coin is intended to work, which can form part of an investing decision.

Here’s a summary of what the main metrics can tell you:

  • Maximum supply - Positive indicator for an effective store of value; if there is no supply cap, there will be ongoing inflation, which may dilute the value of all existing coins.
  • Network/Nodes - The more diverse the better; this makes arbitrary decisions less likely, producing stability.
  • Supply Distribution - The more evenly distributed the better, as there is less chance that one person can have a disproportionate impact on price by selling their coins
  • Fee Revenue - Shows you how much people are actively using it; a proxy for cashflow
  • TVL Locked - Shows that users are willing to put their money where their mouth is, and lock in their investment for a share of rewards
  • Governance, Airdrops, Incentives & launch strategies can all influence supply distribution so should be considered as part of tokenomics.

The principles, philosophies and models that underpin tokens, coins and their projects are at the very beginning of experimenting with what works, and what doesn't.

There are plenty of models that won't work, and we expect those projects will fade away. But the ones that do work will go on to inspire and guide new projects still to come.

Thanks for visiting and reading this article! Please don't forget to leave a like, comment and share!


What is Blockchain - Guide for Business Owners

https://www.blog.duomly.com/what-is-blockchain/

If you’re a business owner, you’ve probably heard the term “blockchain” floating around lately. But what is blockchain, exactly? And what does it mean for your business?

In short, blockchain is a distributed database that allows for secure, transparent, and tamper-proof transactions between two or more parties. This transparency and security have caught the attention of businesses and investors alike.

But what does that mean for your business specifically? In this guide, we’ll explore what blockchain is, how it works and what benefits it could offer your business. We’ll also look at some potential applications of blockchain technology in the business world. So whether you’re just starting to learn about blockchain or you’re ready to start implementing it, what you’ll find below is what you need to know.



NFT Development Guide for Business Owners

https://www.blog.duomly.com/nft-development-guide/

If you’re a business owner, you know that staying ahead of the competition is key to success. And to stay ahead, you need to be constantly innovating and evolving your business model. But how do you do that? How can you create something new when everything around you seems so familiar?

One way to develop new ideas is to explore the world of NFT development. NFTs are a relatively new technology, and there are still many possibilities for what they can be used for. So if you’re looking for ways to take your business to the next level, then NFT development may be just what you need.


Web 3.0 Development Guide for Business Owners

https://www.blog.duomly.com/web-3-0-development-guide/

It’s been a while since I’ve written anything, but my fingers are itching to get back at it. So I hope you enjoy this article on the Web 3.0 Development Guide for Business Owners! 

I’m going to talk about some of the features and benefits of web 3.0 development, including decentralized apps.

With our Web 3.0 Development Guide, your business can now take advantage of these latest advancements in web tech!



What is NFT - Guide for Business Owners

https://www.blog.duomly.com/what-is-nft/

In this post, we’ll explore what is NFT – what it is and what it means for business owners. 

NFT stands for "Non-Fungible Token", which means each token is unique and not interchangeable one-for-one with any other token.

It does not mean that you cannot trade your products with other people or sell them to others on a case-by-case basis. It simply ensures that what you are selling is tied to a unique token, under the terms agreed upon when purchasing the product.


What is Elastic Supply Tokens | Why Do We Need Elastic Supply Tokens

Elastic supply tokens have a changing circulating supply. The idea is that instead of price volatility, what changes is the token supply through events called rebases. 

Imagine if the Bitcoin protocol could adjust how much bitcoin is in user wallets to achieve a target price. You have 1 BTC today. You wake up tomorrow, and now you have 2 BTC, but they’re each worth half of what they were yesterday. That’s how a rebase mechanism works.

Introduction

Decentralized Finance (DeFi) has seen an explosion of new types of financial products on the blockchain. We’ve already discussed yield farming, tokenized Bitcoin on Ethereum, Uniswap, and flash loans. One other segment of the crypto space that has been interesting to watch is elastic supply tokens, or rebase tokens.

The unique mechanism behind them allows for a lot of experimentation. Let’s see how these tokens work.

What is an elastic supply token?

An elastic supply (or rebase) token works in a way that the circulating supply expands or contracts due to changes in token price. This increase or decrease in supply works with a mechanism called rebasing. When a rebase occurs, the supply of the token is increased or decreased algorithmically, based on the current price of each token.

In some ways, elastic supply tokens can be paralleled with stablecoins. They aim to achieve a target price, and these rebase mechanics facilitate that. However, the key difference is that rebasing tokens aim to achieve it with a changing (elastic) supply. 

Wait, aren’t many cryptocurrencies operating with a changing supply? Yes, somewhat. Currently, 6.25 new BTC is minted with every block. After the 2024 halving, this is going to be reduced to 3.125 per block. It is a predictable rate, so we can estimate how much BTC will exist next year or after the next halving. 

Supply-elastic tokens work differently. As mentioned, the rebasing mechanism adjusts the token circulating supply periodically. Let’s say we have an elastic supply token that aims to achieve a value of 1 USD. If the price is above 1 USD, the rebase increases the current supply, reducing the value of each token. Conversely, if the price is below 1 USD, the rebase will decrease the supply, making each token worth more.

What does this mean from a practical standpoint? The amount of tokens in user wallets changes if a rebase occurs. Let’s say we have Rebase USD (rUSD), a hypothetical token that targets a price of 1 USD. You have 100 rUSD safely sitting in your hardware wallet. Let’s say the price goes below 1 USD. After the rebase occurs, you’ll have only 96 rUSD in your wallet, but at the same time, each will be worth proportionally more than before the rebase.

The idea is that your holdings proportional to the total supply haven’t changed with the rebase. If you had 1% of the supply before the rebase, you should still have 1% after it, even if the number of coins in your wallet has changed. In essence, you retain your share of the network no matter what the price is.
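For illustration only, here is a rough sketch (not any specific protocol's implementation) of a symmetric rebase: every balance is scaled by the same factor, so each holder's share of the total supply is preserved. The alice/bob balances are hypothetical.

# Conceptual sketch of a symmetric rebase for a token targeting 1 USD.
# price / target > 1 expands supply; < 1 contracts it.
function rebase(balances::Dict{String,Float64}, price::Float64; target::Float64 = 1.0)
    factor = price / target
    return Dict(addr => bal * factor for (addr, bal) in balances)
end

balances = Dict("alice" => 100.0, "bob" => 300.0)   # hypothetical holders
rebase(balances, 0.96)   # price below target: alice now holds 96.0, bob 288.0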

How Do Elastic Tokens Work?

Tokens with elastic supply operate using a special technique termed a rebase: an algorithm-based regulation of the token supply. It's worth mentioning that during these adjustments, holders' proportional shares aren't diluted and remain the same.

The goal of rebases is to tie a token to a definite price, taking supply and demand into account. For instance, consider a rebase token with the objective of holding a value of 1 USD. If its price surpasses 1 USD, the algorithm will increase the existing supply, and the price of a single token will go down. If the price is under 1 USD, the opposite happens. Accordingly, rebases can be positive or negative.

Following a rebase, the quantity of tokens in user wallets changes accordingly.

In some respects, elastic supply tokens have similarities with stablecoins – both intend to track a particular price, but the techniques used differ.

  • Stablecoins can be described as semi-fixed-supply currencies that are governed (since more coins can be minted to match demand when they are collateralized). In contrast, rebase tokens actively adapt supply to reach a non-collateralized peg rate.
  • In the case of fiat-collateralized stablecoins (like USDT), there is a trust issue, as users need to trust that the other side really holds the reserves it claims. So there is counterparty risk. Elastic tokens, by contrast, rely on algorithms.
  • Stablecoins aren't intended to generate income or trade like stocks; likewise, elastic tokens are primarily a store of value. Nevertheless, they can generate income: when the market cap grows, holders receive positive rebases, with newly minted tokens issued to users without diluting their share of the supply.

At any rate, it’s essential to be alert of the risks that investing in tokens with not constant supply has. Rebases expend holders’ capital when the rate is growing, but also cause more losses on the way down.

Symmetric and Asymmetric Rebases

Symmetric rebase, also called standard rebase, changes the number of tokens in holders' wallets equally when an adjustment is made. Asymmetric (non-standard) rebase, by contrast, doesn't impact all wallets in the same way: users can volunteer to reduce their token supply, and by doing so they will acquire higher returns in the case of a positive rebase.

Originally, elastic supply tokens mainly used the symmetric rebase standard. Afterwards, aiming to reduce the impact of supply contraction, the asymmetric rebase model appeared.

Anyway, no matter what structure of rebasing is adopted, elastic supply tokens should keep relative stability, or they will fail to perform their basic function.

Why Do We Need Elastic Supply Tokens?

Well, it’s a little tricky to understand the use-cases for elastic supply tokens. One might say, elastic supply tokens are quite similar in nature to that of stablecoins. However, there’s one major difference. Stablecoins, on one hand, aims to keep the price stable or pegged to another asset to facilitate the ease of trading.

Whereas, elastic supply tokens, on the other hand, aims for a given target price to fight the volatile nature of cryptocurrencies.

Are there any risks with elastic supply tokens?

Investing in tokens with an elastic price can be considered risky. With elastic supply tokens, the chances of losing funds could be higher. Sure, this can amplify your gains to the upside, but it can also boost your losses. If rebases occur while the token price is going down, you not only lose money from the token price going down, you’ll also own fewer and fewer tokens after each rebase! 

Another reason why investing in elastic supply tokens may be risky is that they are an experimental asset that increases the chances for projects to have bugs in their smart contract code.

Elastic supply tokens are highly risky and very dangerous investments. You should only invest in them if you completely understand what you’re doing. Remember, looking at price charts isn’t going to be all that helpful, since the amount of tokens you hold will change after rebases occur. 


Since they’re quite tricky to understand, investing in rebasing tokens will likely result in a loss for most traders. Only invest in elastic supply tokens if you can fully grasp the mechanisms behind them. Otherwise, you’re not in control of your investment and won’t be able to make informed decisions.

Rebasing token examples

Ampleforth

Ampleforth is one of the first coins to work with an elastic supply. Ampleforth aims to be an uncollateralized synthetic commodity, where 1 AMPL targets a price of 1 USD. Rebases occur once every 24 hours.

The project had relatively little traction until the introduction of a liquidity mining campaign called Geyser. What’s particularly interesting about this scheme is its duration. It distributes tokens for participants over a 10-year period. Geyser is a prime example of how liquidity incentives can create significant traction for a DeFi project.

While technically a stablecoin, the AMPL price chart shows you how volatile elastic supply tokens get.

The AMPL price targets $1, but it can be quite volatile nevertheless.

Bear in mind that this price chart only shows the price of individual AMPL tokens, and doesn’t take into account the changes in supply. Even so, Ampleforth is highly volatile, making it a risky coin to play around with.

It might make more sense to chart elastic supply tokens in terms of market capitalization. Since the price of individual units doesn’t matter as much, the market cap can be a more accurate barometer of the network’s growth and traction.

AMPL market cap on a logarithmic scale.

Yam Finance

Yam Finance is one of the other elastic supply token projects that has gained some traction. The Yam protocol's overall design is sort of a mashup between Ampleforth's elastic supply, Synthetix's staking system, and yearn.finance's fair launch. YAM also aims to achieve a price target of 1 USD.

YAM is a completely community-owned experiment, as all tokens were distributed through liquidity mining. There was no premine, no founder allocation – the playing field to acquire these tokens was even for everyone through a yield farming scheme.

As a completely new and unknown project, Yam had achieved 600 million dollars of value locked in its staking pools in less than two days. What may have attracted a lot of liquidity is how YAM farming was specifically targeting the holders of some of the most popular DeFi coins. These were COMP, LEND, LINK, MKR, SNX, ETH, YFI, and ETH-AMPL Uniswap LP tokens.

However, due to a bug in the rebasing mechanism, much more supply was minted than planned. The project was ultimately relaunched and migrated to a new token contract thanks to a community-funded audit and joint effort. The future of Yam is completely in the hands of YAM holders now.


BEP20 Token Generator | Create BEP20 Token For Free

How to Create a BEP20 Token? The BEP20 Token Creation Tool
If you would like to create a BEP20 token on Binance Smart Chain without any coding skills, you can use the BEP20 Token Generator, which is free and easy to use thanks to its user-friendly interface.

BEP20 Token Generator: https://createmytoken.net/

ERC20 Token Generator: https://createmytoken.net/

BEP20 Token Generator is a free DApp which allows you to create your own BEP20 token in less than a minute.

How to use the BEP20 Token Generator
It is super easy to use the tool:

  1. Install MetaMask and log in.
  2. Enter your token details such as name, symbol, decimals and supply.
  3. Create your token.
