DeFi’s astonishing pace of innovation is accompanied by a theatre of speculation, increasingly complex and interwoven smart contracts, and lucrative exploits. Risk management solutions for investors shouldering principal risk have not kept pace with the influx of capital into the market, leaving retail investors at the mercy of Ethereum’s Dark Forest and institutions with few options for hedging risk.
The risks facing DeFi are multi-faceted, nuanced, and novel. Vulnerabilities span multiple layers of a composable web of DeFi lego blocks, many of which are deeply intertwined. The market presents several distinct challenges for perceiving and modelling risk in a permissionless financial system. Without sufficient tooling and infrastructure for risk management, institutional and retail capital will not be able to manage risk at a level competitive with centralized finance.
Risk management and sophisticated asset protection instruments are the next great frontier for DeFi that can usher in a wave of new capital and support the existing community as well. The potential design space is enormous. By fusing conventional coverage modelling with bundled protection products and a no-KYC platform, UNION aims to lay the foundation for advanced risk management products and grant a higher degree of comfort to participants.
We’re excited to push the boundaries of scalable, open asset protection in the DeFi market, and welcome the broader crypto community to join us on our journey.
The token sale of UNN will begin on Sunday, November 22, 2020. The demand-driven pricing curve will utilize a linear approximation to offer the token price from $0.035 (entry price of the sale) to $0.50 (exit price of the sale).
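As an illustration of the linear pricing described above, here is a minimal sketch in Ruby; the token allocation figure and the exact bonding-curve mechanics are assumptions for illustration only, not details from the sale contract.
# Hypothetical linear price curve for the UNN sale (illustration only).
# Assumes the price rises linearly with the fraction of the allocation sold.
ENTRY_PRICE = 0.035 # USD, entry price of the sale
EXIT_PRICE  = 0.50  # USD, exit price of the sale

def unn_price(tokens_sold, tokens_for_sale)
  fraction_sold = tokens_sold.to_f / tokens_for_sale
  ENTRY_PRICE + (EXIT_PRICE - ENTRY_PRICE) * fraction_sold
end

puts unn_price(50_000_000, 100_000_000) # prints 0.2675 halfway through a hypothetical 100M-token allocation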
Please have MetaMask installed in your browser BEFORE you begin the process. It is also recommended that you only use Chrome, Brave, or Firefox as your web browser with the MetaMask extension.
Shortly, we will be releasing an article with instructions and further details about participating in the sale.
Inspired by similar work by Ampleforth, following the sale, we will launch the UNN Geyser, which will reward liquidity providers of UNN tokens to Uniswap with additional UNN tokens for their active participation. More details about that below.
The UNN token is used for governance purposes, such as voting on protection claims and related conflict resolution, adjusting risk parameters, or adjusting incentive programs. Early token holders will have the ability to participate in a liquidity mining program (e.g., the UNN Geyser) and a Voluntary Lockup Arrangement, where bolstered incentives beyond the UNN Geyser will be granted based on fixed lock-up periods.
Details for both the UNN Geyser and the Voluntary Lockup Arrangement are in the next section.
There will be a fixed total of 1 billion UNN tokens. The UNN token forms the basis for the UNION governance ecosystem, driving the incentives for liquidity programs, yield farming, and protection coverage across the platform. The token distribution is as follows:
UNN is a governance token with aspirations of being a staple DeFi token, included in major DeFi platforms and protocols because of its deep liquidity. To achieve this, we designed and built the project from the beginning with no mechanical locks (holders can move their tokens freely) beyond delivery following the public sale. Early investors were selected based on their long-term vision for the project.
The UNN token is the centrepiece of a multi-layered system using a three-token model.
The complementary tokens to UNN are uUNN and pUNN. The UNN token is the only token available during the token sale. The uUNN and pUNN tokens are used within the capital and pricing model of the platform to decouple the governance process from the UNION asset protection model. Separate tokens for governance and protection remove conflicts of interest arising from complex market dynamics, such as those experienced by NXM in September 2020.
Buyers of protection will receive uUNN tokens, giving them rights to the protection policy. Writers of protection will receive pUNN tokens, representing the percentage share of the protection pool that the writer is powering. The pUNN protection pool incentives program is detailed below, following the Voluntary Lockup Arrangement section.
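For intuition on that percentage share, the arithmetic is a simple pro-rata calculation; the sketch below uses hypothetical numbers and assumes nothing about the actual pool contract beyond the proportional relationship stated above.
# Illustration only: a writer's pUNN share is proportional to the capital
# they supply to a protection pool (numbers are hypothetical).
writer_deposit = 10_000.0
pool_total     = 250_000.0

share = writer_deposit / pool_total
puts format("pUNN share of pool: %.1f%%", share * 100) # prints "pUNN share of pool: 4.0%"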
More details about how the multi-layered governance, claims assessment, and protection process function are available in the official UNION whitepaper.
To incentivize on-chain liquidity providers of UNN (starting with Uniswap), we will provide the UNN Geyser. We built the UNN Geyser using Ampleforth’s Geyser as our base reference.
This lock incentivizes community participation in the early days of the project, when governance activity is expected to be muted. In other words, think of it as a precursor to our governance incentive program. In return for locking their tokens, during which time they cannot exchange or send them, purchasers will receive enhanced UNN incentives above the Geyser amount.
A comparison of the different lock returns, along with the Geyser (no lock), for theoretical 1,000-token allocations.
In return for providing the liquidity for a protection pool, protection writers (pUNN token holders) will receive, for their particular protection pool:
Our roadmap reflects a balance between meeting the pressing market demand for a complete suite of protection products and providing a robust, well-tested solution. Given the nature of the technology, we will always err on the side of caution. The roadmap below is an aspirational target and may be adjusted based on progress and market demand.
Our general thesis on best practices is to deploy a series of state-of-the-art checks to ensure regulatory compliance and the privacy of user data. This begins even before our token sale, starting with the initial sales process.
Steps that we take to ensure compliance and user data privacy include:
NFT Art Finance is currently one of the most popular cryptocurrencies on the market, so in today’s video I will show you how to easily buy NFT Art Finance on your phone using the Trust Wallet application.
📺 The video in this post was made by More LimSanity.
Original video: https://www.youtube.com/watch?v=sKE6Pc_w1IE
Spore Finance Prediction: What Is Spore Finance & Spore Finance Analysis
In this video, I talk about the Spore Finance coin and give my Spore Finance prediction. I cover the latest Spore Finance analysis; the coin has been hit pretty hard in the last 24 hours. I also go over what Spore Finance is and how many holders this new coin has.
📺 The video in this post was made by Josh’s Finance.
Original video: https://www.youtube.com/watch?v=qbPQvdxCtEI
WordsCounted
We are all in the gutter, but some of us are looking at the stars.
-- Oscar Wilde
WordsCounted is a Ruby natural language processor. WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class.
["Bayrūt"]
and not ["Bayr", "ū", "t"]
, for example.Add this line to your application's Gemfile:
gem 'words_counted'
And then execute:
$ bundle
Or install it yourself as:
$ gem install words_counted
Pass in a string or a file path, and an optional filter and/or regexp.
counter = WordsCounted.count(
"We are all in the gutter, but some of us are looking at the stars."
)
# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")
.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.
WordsCounted.count(input, options = {})
Tokenises input and initializes a WordsCounted::Counter object with the resulting tokens.
counter = WordsCounted.count("Hello Beirut!")
Accepts two options: exclude and regexp. See Excluding tokens from the analyser and Passing in a custom regexp, respectively.
WordsCounted.from_file(path, options = {})
Reads and tokenises a file, and initializes a WordsCounted::Counter object with the resulting tokens.
counter = WordsCounted.from_file("hello_beirut.txt")
Accepts the same options as .count.
The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.
Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.
#tokenise([pattern: TOKEN_REGEXP, exclude: nil])
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")
# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)
See Excluding tokens from the analyser and Passing in a custom regexp for more information.
The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.
#token_count
Returns the token count of a given string.
counter.token_count #=> 15
#token_frequency
Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.
counter.token_frequency
[
["the", 2],
["are", 2],
["we", 1],
# ...
["all", 1]
]
#most_frequent_tokens
Returns a hash where each key-value pair is a token and its frequency.
counter.most_frequent_tokens
{ "are" => 2, "the" => 2 }
#token_lengths
Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.
counter.token_lengths
[
["looking", 7],
["gutter", 6],
["stars", 5],
# ...
["in", 2]
]
#longest_tokens
Returns a hash where each key-value pair is a token and its length.
counter.longest_tokens
{ "looking" => 7 }
#token_density([ precision: 2 ])
Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.
counter.token_density
[
["are", 0.13],
["the", 0.13],
["but", 0.07 ],
# ...
["we", 0.07 ]
]
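If more decimal places are needed, the precision keyword shown in the signature above can be passed explicitly (output values here are illustrative):
counter.token_density(precision: 4)
# e.g. [["are", 0.1333], ["the", 0.1333], ...]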
#char_count
Returns the char count of tokens.
counter.char_count #=> 76
#average_chars_per_token([ precision: 2 ])
Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.
counter.average_chars_per_token #=> 4
#uniq_token_count
Returns the number of unique tokens.
counter.uniq_token_count #=> 13
You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible: a string of space-delimited tokens, a regular expression, a lambda, a symbol naming a predicate method (for example :odd?), or an array combining any of these, as the examples below show.
tokeniser = WordsCounted::Tokeniser.new(
  "Magnificent! That was magnificent, Trevor."
)
# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]
# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]
# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]
# Using a symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]
# Using an array
tokeniser = WordsCounted::Tokeniser.new(
"Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]
The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.
/[\p{Alpha}\-']+/
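A quick illustration of that default pattern, using the tokens accessor shown later in this README (output assumes the downcasing behaviour described below):
WordsCounted.count("Twenty-one of Mohamad's books").tokens
#=> ["twenty-one", "of", "mohamad's", "books"]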
You can pass your own criteria as a Ruby regular expression to split your string as desired.
For example, if you wanted to include numbers, you can override the regular expression:
counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]
Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.
counter = WordsCounted.from_file("url/or/path/to/file.text")
A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.
counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency
[
["do", 2],
["how", 1],
["you", 1],
["-you", 1], # WTF, mate!
["are", 1],
# ...
]
In this example, -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.
A note on normalisation: the program will normalise (downcase) all incoming strings for consistency and to simplify filtering.
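A minimal sketch of what that normalisation means in practice:
WordsCounted.count("Hello HELLO hello").token_frequency
#=> [["hello", 3]]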
If you want to count a remote document, you could add a small helper along these lines (a sketch only; this is not part of the gem's API):
def self.from_url(url)
  require "open-uri"
  html = URI.open(url).read
  # Crude HTML strip for illustration; a proper parser such as Nokogiri would be more robust.
  count(html.gsub(/<[^>]+>/, " "))
end
Are you using WordsCounted to do something interesting? Please tell me about it.
Visit this website for one example of what you can do with WordsCounted.
Contributors
See contributors.
Create your feature branch (git checkout -b my-new-feature), commit your changes (git commit -am 'Add some feature'), and push to the branch (git push origin my-new-feature).
)Author: Abitdodgy
Source Code: https://github.com/abitdodgy/words_counted
License: MIT license