What is Hepa Finance (HEPA) | What is Hepa Finance token | What is HEPA token

In this article, we’ll discuss the Hepa Finance project and the HEPA token.

Hello farmers, today we’re thrilled to announce the initial launch of Hepa Finance: our governance token HEPA and our first DeFi product, Hepa.Finance, all launching right now on Binance Smart Chain.

TL;DR:

  • IDO will be on ApeTools
  • HEPA token (HEPA) — a revenue generating governance token governing the HEPA Finance ecosystem on Binance Smart Chain.
  • Yield farming on Binance Smart Chain — stake HEPA/BUSD, HEPA/BNB, HEPA/BANANA, or HEPA/TAPE and earn extremely lucrative rewards for being an early supporter. The yield farming system incorporates a unique 95% locked / 5% unlocked token supply design to create a sustainable and long-term farming environment.
  • Hepa Finance only supports MetaMask and Trust Wallet.
  • Join our Telegram channel

The HEPA token

The core foundation of Hepa Finance is the HEPA token (ticker: HEPA).

It is intended to be fairly and widely distributed, with a focus on rewarding early and active supporters of the protocol.

Hard cap

The HEPA token has a hard cap of 500,000,000 tokens and features an extensive lockup model in which 95% of earned tokens are released at a later stage and 5% are available immediately.

The HEPA token will be configured to lock 95% of all newly minted supply during the yield farming stage until 00:00:00 UTC on March 25th, 2022.

Once 00:00:00 UTC on March 25th, 2022 has been reached, the locked supply will slowly start to unlock on a block-by-block basis until 00:00:00 UTC on March 25th, 2023.
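
To make the mechanics concrete, here is a minimal sketch (not official project code) of how the unlocked share of an earned balance could be computed, approximating the block-by-block release as linear in time between the two dates; the dates come from the article, while the earned amount and query time are hypothetical.

# Hypothetical sketch of the 95% locked / 5% unlocked release described above,
# approximating the block-by-block unlock as linear in time between the two dates.
require "time"

UNLOCK_START = Time.parse("2022-03-25 00:00:00 UTC")
UNLOCK_END   = Time.parse("2023-03-25 00:00:00 UTC")

# Fraction of a user's earned HEPA that is spendable at a given time.
def unlocked_fraction(now)
  return 0.05 if now < UNLOCK_START        # only the unlocked 5% before the start date
  return 1.0  if now >= UNLOCK_END         # everything released after the end date
  elapsed = now - UNLOCK_START             # seconds elapsed (Float)
  total   = UNLOCK_END - UNLOCK_START      # seconds in the unlock window
  0.05 + 0.95 * (elapsed / total)          # linear release of the locked 95%
end

earned = 10_000.0
puts earned * unlocked_fraction(Time.parse("2022-09-25 00:00:00 UTC"))
# => ~5289, i.e. the unlocked 5% plus roughly half of the locked portion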

By the end of 2022, roughly 290.87 million HEPA tokens are expected to have been minted. At the end of 2023 the supply is expected to reach 460.70 million HEPA, and the final 500 million cap is expected to be reached at the beginning of March 2024. From start to finish, it will take roughly three years in total to reach the cap.

Treasury

95% locked / 5% unlocked setup

Note that all of the fees are also locked using the exact same emission schedule as discussed above: the treasuries will be locked until 00:00:00 UTC on March 25th, 2022, and it will then take a full year (until 00:00:00 UTC on March 25th, 2023) to fully unlock them.

Pre-minting

Because the treasuries follow the exact same locked emission schedule as regular users, we’ve decided to pre-mint a total of 10,000,000 HEPA, or 2% of the expected final hard cap.

The 10,000,000 HEPA tokens will be distributed as follows:

  • 7,000,000 HEPA: IDO Sale on ApeTools
  • 2,000,000 HEPA: Allocated to the liquidity provider (LP) fund. The first action will be to seed initial liquidity for the HEPA/BNB pool.
  • 1,000,000 HEPA: Allocated to the strategic wallet.

Besides these 10,000,000 HEPA there are no intentions to further mint tokens outside the scope of our yield farming contracts/token emission schedule.
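
As a quick sanity check (a hypothetical sketch, not project code), the allocations above sum to the stated pre-mint and match the 2% figure:

# Hypothetical check: the pre-mint allocations listed above sum to 10,000,000 HEPA,
# which is 2% of the 500,000,000 hard cap.
allocations = {
  "IDO sale on ApeTools" => 7_000_000,
  "LP fund"              => 2_000_000,
  "Strategic wallet"     => 1_000_000,
}

total = allocations.values.sum
puts total                            # => 10000000
puts 100.0 * total / 500_000_000      # => 2.0 (percent of the hard cap)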

Yield Farming

Here’s a short breakdown of the first month’s bonus multipliers:

  • Week 1: 256x multiplier = 256 HEPA/block = 7,680 HEPA/minute
  • Week 2: 128x multiplier = 128 HEPA/block = 3,840 HEPA/minute
  • Week 3: 64x multiplier = 64 HEPA/block = 1,920 HEPA/minute
  • Week 4: 32x multiplier = 32 HEPA/block = 960 HEPA/minute

The reason we can go crazy with these multipliers is the 95% locked / 5% unlocked token supply locking system.
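
For readers who want to reproduce the per-minute figures above, here is a small sketch; the article’s numbers imply roughly 30 blocks per minute (about a 2-second block time), and that block rate is the only assumption made here.

# Hypothetical arithmetic check of the bonus emissions listed above. The article's
# HEPA/minute figures imply ~30 blocks per minute (about a 2-second block time);
# that block rate is the only assumption made here.
BLOCKS_PER_MINUTE = 30

{ 1 => 256, 2 => 128, 3 => 64, 4 => 32 }.each do |week, hepa_per_block|
  puts "Week #{week}: #{hepa_per_block} HEPA/block = " \
       "#{hepa_per_block * BLOCKS_PER_MINUTE} HEPA/minute"
end
# Week 1: 256 HEPA/block = 7680 HEPA/minute
# Week 2: 128 HEPA/block = 3840 HEPA/minute
# ...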

Pools

Hepa Finance will launch with two pools that will reward yield farmers with HEPA (the per-block split by pool weight is sketched below):

  • HEPA/TAPE — 1x pool weight (~9.85 HEPA/block during the initial launch week).
  • HEPA/BNB — 25x pool weight (~246.1 HEPA/block during the initial launch week).
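
The per-pool figures follow from splitting the week-one emission by weight. A minimal sketch, assuming the 256 HEPA/block figure from the previous section is the combined emission across both pools:

# Hypothetical illustration of splitting the week-one 256 HEPA/block emission
# between the two launch pools by weight (1x + 25x = 26 weight units).
TOTAL_PER_BLOCK = 256.0
WEIGHTS = { "HEPA/TAPE" => 1, "HEPA/BNB" => 25 }

total_weight = WEIGHTS.values.sum
WEIGHTS.each do |pool, weight|
  puts format("%-9s %.2f HEPA/block", pool, TOTAL_PER_BLOCK * weight / total_weight)
end
# HEPA/TAPE 9.85 HEPA/block
# HEPA/BNB  246.15 HEPA/block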

Withdrawals

For withdrawals we’re using a slightly more complex, tiered fee system (see the sketch below):

  • 1% fee if a user withdraws within 5 days of depositing.
  • 2% fee if a user withdraws within 3 days.
  • 4% fee if a user withdraws within 24 hours.
  • 8% fee if a user withdraws within 1 hour.
  • 25% slashing fee if a user withdraws within the same block as the deposit (in order to disincentivize the use of flash loans).
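
A minimal sketch of this schedule as a lookup function (hypothetical code, not the actual contract; the exact boundary handling is our reading of the tiers):

# Hypothetical sketch of the tiered withdrawal-fee schedule listed above, keyed on
# time elapsed since the deposit (in seconds). Exact boundary handling is a guess.
HOUR = 3600
DAY  = 24 * HOUR

def withdrawal_fee_percent(seconds_since_deposit)
  case seconds_since_deposit
  when 0                     then 25.0 # same block: anti-flash-loan slashing fee
  when 1...HOUR              then 8.0
  when HOUR...DAY            then 4.0
  when DAY...(3 * DAY)       then 2.0
  when (3 * DAY)...(5 * DAY) then 1.0
  else 0.0
  end
end

puts withdrawal_fee_percent(2 * DAY) # => 2.0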

Liquidity

The founding team will commit ~$10k worth of BNB to the HEPA/BNB pool so that users can easily swap into HEPA in order to enter the HEPA/BNB liquidity farm.

The HEPA/BNB pool on Hepa Finance will initially be seeded with a price of $0.005 per HEPA.
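
These two numbers are consistent with the LP allocation above: a balanced pool priced at $0.005 per HEPA against roughly $10k of BNB implies about 2,000,000 HEPA on the other side. A tiny sketch:

# Hypothetical sanity check: ~$10k of BNB at a $0.005 HEPA seed price pairs with
# roughly the 2,000,000 HEPA allocated to the LP fund.
usd_in_bnb    = 10_000.0
hepa_seed_usd = 0.005
puts usd_in_bnb / hepa_seed_usd # => 2000000.0 HEPA on the other side of the pool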

The ApeSwap router will be used for all transactions.

How and Where to Buy HEPA token?

The HEPA token is now live on the Binance Smart Chain mainnet. The token address for HEPA is 0x9159f30f1c3f0317b0a2d6bc176f29266be790ee. Be careful not to purchase any other token with a smart contract different from this one (as this can be easily faked). We strongly advise you to be vigilant and stay safe throughout the launch. Don’t let the excitement get the best of you.
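
A simple habit before swapping is to compare the address you are about to interact with against the official one, character for character. A minimal sketch (illustrative only, not a substitute for checking a block explorer):

# Minimal sketch (not official tooling): compare a pasted contract address against
# the official HEPA address before trading. Addresses are hex, so a case-insensitive
# comparison is enough here.
OFFICIAL_HEPA = "0x9159f30f1c3f0317b0a2d6bc176f29266be790ee"

def official_hepa?(address)
  address.strip.downcase == OFFICIAL_HEPA
end

puts official_hepa?("0x9159F30F1C3F0317B0A2D6BC176F29266BE790EE") # => true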

Just be sure you have enough BNB in your wallet to cover the transaction fees.

You will first have to buy one of the major cryptocurrencies, usually either Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…

We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.

Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer, and buy one of the major cryptocurrencies, usually either Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…

SIGN UP ON BINANCE

Step-by-step guide: What is Binance | How to Create an account on Binance (Updated 2021)

Next step

You need a wallet address to connect to the PancakeSwap decentralized exchange; we use the MetaMask wallet.

If you don’t have a MetaMask wallet, read this article and follow the steps:

What is Metamask wallet | How to Create a wallet and Use

Transfer BNB to your new MetaMask wallet from your existing wallet.

Next step

Connect your MetaMask wallet to the PancakeSwap decentralized exchange and buy/swap the HEPA token.

Contract: 0x9159f30f1c3f0317b0a2d6bc176f29266be790ee

Read more: What is Pancakeswap | Beginner’s Guide on How to Use Pancakeswap

The top exchange for trading the HEPA token is currently PancakeSwap v2.

Find more information about HEPA:

Website | Explorer | Social Channel | Social Channel 2 | Documentation | CoinMarketCap

🔺DISCLAIMER: The information in this post is not financial advice and is intended FOR GENERAL INFORMATION PURPOSES ONLY. Trading cryptocurrency is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.

🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner

Thanks for visiting and reading this article! Please don’t forget to leave a like, comment, and share!

#blockchain #bitcoin #hepa #hepafinance

Buddha Community

Angelina roda

1624219980

How to Buy NFT Art Finance Token - The EASIEST METHOD! DO NOT MISS!!! JUST IN 4 MINUTES

NFT Art Finance is currently one of the most popular cryptocurrencies on the market, so in today’s video I will show you how to easily buy NFT Art Finance on your phone using the Trust Wallet application.
📺 The video in this post was made by More LimSanity
The origin of the article: https://www.youtube.com/watch?v=sKE6Pc_w1IE
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. It is not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!

#bitcoin #blockchain #nft art finance token #token #buy nft art finance #how to buy nft art finance token - the easiest method!

David mr

1624312800

SPORE FINANCE PREDICTION - WHAT IS SPORE FINANCE & SPORE FINANCE ANALYSIS - SPORE FINANCE

In this video, I talk about the Spore Finance coin and give my Spore Finance prediction. I cover the latest Spore Finance analysis and the Spore Finance crypto coin, which has recently been hit pretty hard in the last 24 hours. I go over what Spore Finance is and how many holders this new crypto coin has.
📺 The video in this post was made by Josh’s Finance
The origin of the article: https://www.youtube.com/watch?v=qbPQvdxCtEI
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. It is not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!

#bitcoin #blockchain #spore finance #what is spore finance #spore finance prediction - what is spore finance & spore finance analysis - spore finance #spore finance prediction

Words Counted: A Ruby Natural Language Processor.

WordsCounted

We are all in the gutter, but some of us are looking at the stars.

-- Oscar Wilde

WordsCounted is a Ruby NLP (natural language processor). WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class.

Are you using WordsCounted to do something interesting? Please tell me about it.

 

Demo

Visit this website for one example of what you can do with WordsCounted.

Features

  • Out of the box, get the following data from any string or readable file, or URL:
    • Token count and unique token count
    • Token densities, frequencies, and lengths
    • Char count and average chars per token
    • The longest tokens and their lengths
    • The most frequent tokens and their frequencies.
  • A flexible way to exclude tokens from the tokeniser. You can pass a string, regexp, symbol, lambda, or an array of any combination of those types for powerful tokenisation strategies.
  • Pass your own regexp rules to the tokeniser if you prefer. The default regexp filters special characters but keeps hyphens and apostrophes. It also plays nicely with diacritics (UTF and unicode characters): Bayrūt is treated as ["Bayrūt"] and not ["Bayr", "ū", "t"], for example.
  • Opens and reads files. Pass in a file path or a url instead of a string.

Installation

Add this line to your application's Gemfile:

gem 'words_counted'

And then execute:

$ bundle

Or install it yourself as:

$ gem install words_counted

Usage

Pass in a string or a file path, and an optional filter and/or regexp.

counter = WordsCounted.count(
  "We are all in the gutter, but some of us are looking at the stars."
)

# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")

.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.

API

WordsCounted

WordsCounted.count(input, options = {})

Tokenises input and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.count("Hello Beirut!")

Accepts two options: exclude and regexp. See Excluding tokens from the analyser and Passing in a custom regexp respectively.

WordsCounted.from_file(path, options = {})

Reads and tokenises a file, and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.from_file("hello_beirut.txt")

Accepts the same options as .count.

Tokeniser

The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.

Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.

#tokenise([pattern: TOKEN_REGEXP, exclude: nil])

tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise

# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")

# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)

See Excluding tokens from the analyser and Passing in a custom regexp for more information.

Counter

The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.

#token_count

Returns the token count of a given string.

counter.token_count #=> 15

#token_frequency

Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.

counter.token_frequency

[
  ["the", 2],
  ["are", 2],
  ["we",  1],
  # ...
  ["all", 1]
]

#most_frequent_tokens

Returns a hash where each key-value pair is a token and its frequency.

counter.most_frequent_tokens

{ "are" => 2, "the" => 2 }

#token_lengths

Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.

counter.token_lengths

[
  ["looking", 7],
  ["gutter",  6],
  ["stars",   5],
  # ...
  ["in",      2]
]

#longest_tokens

Returns a hash where each key-value pair is a token and its length.

counter.longest_tokens

{ "looking" => 7 }

#token_density([ precision: 2 ])

Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.

counter.token_density

[
  ["are",     0.13],
  ["the",     0.13],
  ["but",     0.07 ],
  # ...
  ["we",      0.07 ]
]

#char_count

Returns the char count of tokens.

counter.char_count #=> 76

#average_chars_per_token([ precision: 2 ])

Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.

counter.average_chars_per_token #=> 4

#uniq_token_count

Returns the number of unique tokens.

counter.uniq_token_count #=> 13

Excluding tokens from the tokeniser

You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible.

  1. A space-delimited string. The filter will normalise the string.
  2. A regular expression.
  3. A lambda.
  4. A symbol that names a predicate method. For example :odd?.
  5. An array of any combination of the above.
tokeniser =
  WordsCounted::Tokeniser.new(
    "Magnificent! That was magnificent, Trevor."
  )

# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]

# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]

# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]

# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]

# Using an array
tokeniser = WordsCounted::Tokeniser.new(
  "Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
  exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]

Passing in a custom regexp

The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.

/[\p{Alpha}\-']+/

You can pass your own criteria as a Ruby regular expression to split your string as desired.

For example, if you wanted to include numbers, you can override the regular expression:

counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]

Opening and reading files

Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.

counter = WordsCounted.from_file("url/or/path/to/file.text")

Gotchas

A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.

counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency

[
  ["do",   2],
  ["how",  1],
  ["you",  1],
  ["-you", 1], # WTF, mate!
  ["are",  1],
  # ...
]

In this example -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.

A note on case sensitivity

The program will normalise (downcase) all incoming strings for consistency and filters.
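
For example (a minimal illustration of that behaviour; the input string is arbitrary):

counter = WordsCounted.count("We are ALL in the Gutter")
counter.tokens
#=> ["we", "are", "all", "in", "the", "gutter"]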

Roadmap

Ability to open URLs

def self.from_url
  # open url and send string here after removing html
end

Contributors

See contributors.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Author: abitdodgy
Source code: https://github.com/abitdodgy/words_counted
License: MIT license

#ruby  #ruby-on-rails 
