In this article, we'll discuss the Soccer Crypto project and its SOT token. What is Soccer Crypto (SOT)? What is the SOT token?
SoccerCrypto is a combination of a passion for football and blockchain technology. We aim to create a platform not just for NFT collection, but also one where users can equip their NFT players with uniquely designed items. Beyond that, we seek to provide our users with the thrilling experience of fast-paced multiplayer soccer matches!
In SoccerCrypto, players can play for free every day and earn tickets to participate in exclusive, fiery matches, competing for tournament grand prizes worth up to 10,000% ROI.
All SoccerCrypto characters have equal strength; players rely on skill and luck to collect valuable items. The more masterful your skill set, the more valuable the items you can collect in our game and trade on SoccerCrypto's marketplace.
Highlight Features:
With a passion for football and blockchain technology, we have developed SoccerCrypto, a simulated football game on the Binance Smart Chain. Each match in SoccerCrypto lasts 90 seconds, and players can experience matches at several levels. We understand that it can be difficult for players to compete directly with opponents to reach higher levels. Therefore, SoccerCrypto also offers simple daily tasks that let you easily receive gifts from the publisher. The more time players invest in the game, the better the income they can earn.
In the stages of developing ideas and new features of the game, we always look forward to hearing from players, so that SoccerCrypto is a product of the community, developed by the community.
How To Play
Step 1: Use your MetaMask wallet to log in to the game.
Soccer Crypto offers a new-player incentive program. Players who register in the game, connect their MetaMask wallet for the first time, and complete certain early missions will be rewarded with a Free NFT Box:
• Follow Soccer Crypto on Twitter
• Join the Soccer Crypto Telegram group
Players can open the Free Box and obtain NFT players to begin participating in our game modes.
Besides, users can buy a Premium Box on the Marketplace and open it to obtain NFTs.
Step 2: After entering the game, you will see the following sections on the screen:
Step 3: Join a competition to start earning tokens. Equip your character with accessories for a higher win rate.
Step 4: Buy and sell NFTs on the marketplace.
Upgrade - Repair NFT Players
NFT Players can be upgraded using $SOW. The higher a player's rarity, the more points it has. Upgrading increases an NFT Player's stats, and the higher the stats, the higher the win rate.
Rarity (Player) | Points
---|---
Common | 50
Rare | 100
Epic | 150
Legend | 200
There are four major slots in which to place materials, plus a special slot for special materials. When the material's rarity equals the NFT Player's rarity, the success rate is 50%; when using a higher-rarity NFT as material, the success rate is 100%.
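To make the upgrade rules concrete, here is a minimal Ruby sketch of the success-rate logic described above. The names and the 0% fallback for lower-rarity materials are illustrative assumptions, not Soccer Crypto's actual code.
RARITY_POINTS = { common: 50, rare: 100, epic: 150, legend: 200 }.freeze
RARITY_ORDER  = %i[common rare epic legend].freeze
# 50% success when the material matches the player's rarity,
# 100% when the material's rarity is higher (both per the text above).
def upgrade_success_rate(player_rarity, material_rarity)
  if RARITY_ORDER.index(material_rarity) > RARITY_ORDER.index(player_rarity)
    1.0
  elsif material_rarity == player_rarity
    0.5
  else
    0.0 # assumption: lower-rarity materials fail; the text does not specify
  end
end
upgrade_success_rate(:rare, :rare) #=> 0.5
upgrade_success_rate(:rare, :epic) #=> 1.0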
Repair
Over many matches, equipment and NFT players are likely to be damaged and lose durability, which affects match outcomes. For the best results, players need to spend $SOW on NFT players and equipment to restore durability.
Marketplace
The Marketplace is a feature that allows users to trade, purchase, or exchange their in-game assets in the form of NFT-721 and NFT-1155 tokens. It is the place where players buy and sell NFT assets such as players and equipment. A wallet must be connected to the Marketplace to make purchases.
Transaction fee
When a sale is successful, the seller pays a 5% fee. For example, selling an NFT for 100 SOT nets the seller 95 SOT.
A dual-token model is applied in our game. Soccer Crypto has its own tokenomics structure and uses BEP-721 NFTs for its gaming assets.
1. SOT
SOT Token: the governance token, with total supply capped at 1,000,000,000 (1B). SOT is the game's native token, enabling players to purchase in-game assets and enjoy all game features.
You need $SOT for these operations:
How to get $SOT?
Summary
Total supply | 1,000,000,000
Private sale | $30,000
Public sale | $500,000
Initial Market Cap | $288,000 (liquidity included)
Initial Circulating Supply | 73,600,000
Total Diluted Market Cap | $10,000,000
Token allocation
No. | Allocation | % | TGE Unlock | Token Supply | Vesting
---|---|---|---|---|---
1 | Private | 1.00% | 10% | 10,000,000 | TGE 10%, 3-month lock-up, then linear vesting over 6 months
2 | Public Sale | 5.00% | 25% | 50,000,000 | TGE 25%, 3-month lock-up, then linear vesting over 6 months
3 | Marketing | 10.00% | 15% | 100,000,000 | TGE 15%, 1-month lock-up, then linear vesting over 18 months
4 | Ecosystem growth / Reward | 30.00% | 15% | 300,000,000 | TGE 15%, then linear vesting over 18 months
5 | Staking | 11.00% | 0% | 110,000,000 | Locked from TGE; added gradually to staking rewards
6 | Team | 16.00% | 0% | 160,000,000 | 12-month lock-up, then linear vesting over 12 months
7 | Advisor | 3.00% | 10% | 30,000,000 | TGE 10%, then linear vesting over 12 months
8 | Liquidity | 12.00% | 25% | 120,000,000 | TGE 25%, then linear vesting over 12 months
9 | Reserves | 12.00% | 0% | 120,000,000 | 3-month lock-up, then linear vesting over 12 months
TOTAL | | 100.00% | | 1,000,000,000 |
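As a worked example of how these schedules unlock over time, here is a minimal Ruby sketch, assuming monthly linear vesting after the lock-up; the function name and the monthly granularity are our own illustrative assumptions.
def unlocked_tokens(total:, tge_pct:, lockup_months:, vesting_months:, months_since_tge:)
  tge_amount = total * tge_pct
  remaining  = total - tge_amount
  # months of vesting elapsed, clamped between 0 and the full vesting period
  vested = [[months_since_tge - lockup_months, 0].max, vesting_months].min
  tge_amount + remaining * vested / vesting_months.to_f
end
# Private row: 10,000,000 tokens, 10% at TGE, 3-month lock-up, 6-month linear vesting.
unlocked_tokens(total: 10_000_000, tge_pct: 0.10, lockup_months: 3,
                vesting_months: 6, months_since_tge: 6)
#=> 5500000.0 (1,000,000 at TGE plus 3 of 6 monthly tranches of 1,500,000)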
Token allocation utility:
Investors: These tokens were sold to investors and will be unlocked based on the vesting plan. Some investors may sell their tokens for capital-recovery purposes.
Founder & Team: These tokens are allocated to the founding team & project development team.
Advisors: These tokens are allocated to the project advisors, who help to consult, guide, and support the project in all related activities.
Reserve: The tokens are used as the project's reserve fund, balancing necessary activities, and supporting future expansion.
Game incentives: These tokens will be used to incentivize players participating in competitions and other activities in the game. This is to encourage user participation in the game and to maintain traction.
Liquidity: These tokens are used for the following main purposes:
Marketing: These tokens will be used for various ecosystem-building initiatives, including marketing and partnership programs.
2. SOW
SOW Token: unlimited supply. SOW is used as a game incentive to motivate gamers. Users can earn SOW through P2E and daily missions. It is also used to improve the strength of NFT players and to repair equipment.
When players claim rewards from the game, that amount of $SOW is minted, increasing the $SOW supply. When players deposit $SOW into the game, that amount of $SOW is locked in our smart contract and will be burnt in batches.
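The mint-on-claim, lock-and-burn-on-deposit flow can be pictured with a toy Ruby model; the class and method names below are ours, purely for illustration:
class SowSupply
  attr_reader :circulating, :locked

  def initialize
    @circulating = 0.0
    @locked = 0.0
  end

  # Rewards are minted into circulation when a player claims them.
  def claim(amount)
    @circulating += amount
  end

  # Deposits move tokens out of circulation into the contract.
  def deposit(amount)
    @circulating -= amount
    @locked += amount
  end

  # Locked tokens are burnt in batches; returns the amount burnt.
  def burn_batch
    burnt, @locked = @locked, 0.0
    burnt
  end
end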
You need $SOW for these operations:
How to get $SOW?
11% of the SOT token supply (110,000,000 tokens) will be distributed via staking pools.
Users can stake $SOT with an APR of up to 300%. Staking SOT reduces the circulating supply and lets SOT holders earn an annual yield while the Soccer Crypto user base grows. Staking incentivizes both existing and new SOT holders: the more tokens staked, the bigger the rewards.
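For a rough sense of the advertised yield, a simple non-compounding APR calculation looks like this; the 300% figure is the stated maximum, not a guaranteed rate, and the helper below is our own illustration:
def staking_reward(staked_sot, apr, days)
  # simple interest: principal * annual rate * fraction of a year
  staked_sot * apr * days / 365.0
end
staking_reward(10_000, 3.0, 30) #=> ~2465.75 SOT over 30 days at 300% APR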
Buyback and burn
After launch, Soccer Crypto will adopt buyback & burn as a deflationary mechanism to increase the SOW token's long-term value. Tokens are burnt to reduce the overall circulating supply of SOW, stabilize the token price, and create deflation. The portions of the game's revenue used for buyback & burn are:
1. Soccer Crypto NFT Marketplace - 100% of transaction fees go towards buyback and burn.
2. Profit of NFT Box Sales - 20% profit of the sales proceeds will be used for our marketing campaign, and buyback and burn.
3. Reserve Fund - 10% of the Reserve Fund goes towards buyback and burn quarterly.
4. When players use $SOW to upgrade or repair NFT Players and equipment, those $SOW tokens are burned, significantly reducing the circulating supply. 100% of upgrade fees go towards buyback and burn.
5. Energy Box - 100% of energy box sales proceeds go towards buyback and burn.
Soccer Crypto will use the above funds to buy back $SOW tokens and then burn them monthly. We will make announcements regarding the buyback and burn program on our social media channels, and the buyback and burn proceeds will be updated regularly as the project progresses. This will continuously lower the supply and redistribute the value of $SOW to the remaining holders, supporting the $SOW price.
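Putting the five revenue sources together, a hypothetical monthly buyback-fund calculation might look like the Ruby sketch below. All input figures are placeholders, the quarterly reserve contribution is spread evenly over three months by assumption, and the split of the 20% box-sale share between marketing and buyback is not specified above, so it is passed in explicitly:
def buyback_fund(marketplace_fees:, box_profit_share:, reserve_fund:, upgrade_fees:, energy_box_sales:)
  marketplace_fees +            # 1. 100% of marketplace transaction fees
    box_profit_share +          # 2. buyback portion of the 20% box-sale profit share
    reserve_fund * 0.10 / 3 +   # 3. 10% of the reserve fund quarterly, spread monthly (assumption)
    upgrade_fees +              # 4. 100% of upgrade/repair fees
    energy_box_sales            # 5. 100% of energy box sales
end
buyback_fund(marketplace_fees: 5_000, box_profit_share: 2_000,
             reserve_fund: 120_000, upgrade_fees: 3_000, energy_box_sales: 1_000)
#=> 15000.0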
Benefits of Buyback and Burn program
How and Where to Buy SOT token?
SOT has been listed on a number of crypto exchanges, but unlike major cryptocurrencies, it cannot be purchased directly with fiat money. However, you can still easily buy this coin by first buying Bitcoin, ETH, USDT, or BNB on any large exchange and then transferring it to an exchange that trades this coin. In this guide we will walk you through the steps to buy the SOT token in detail.
You will first have to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB).
We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.
Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer, then buy one of the major cryptocurrencies listed above.
Once finished, you will need to transfer your BTC/ETH/USDT/BNB from Binance to an exchange that lists SOT, depending on the available market pairs. After the deposit is confirmed, you can purchase SOT on that exchange.
The top exchange for trading the SOT token is currently PancakeSwap (V2).
BEP-20 contract: 0xde1a0f6c7078c5da0a6236eeb04261f4699905c5
Top exchanges for token-coin trading:
☞ Binance ☞ Poloniex ☞ Bitfinex ☞ Huobi ☞ MXC ☞ ProBIT ☞ Gate.io
Find more information about the SOT token ☞ Website
🔺DISCLAIMER: The information in this post is not financial advice and is intended FOR GENERAL INFORMATION PURPOSES ONLY. Trading cryptocurrency is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
I hope this post helps you. Don't forget to leave a like, comment, and share it with others. Thank you!
#bitcoin #cryptocurrency #token #coin
Germany was the first country to recognize #Bitcoin as a "unit of value" that could be classified as a "financial instrument."
Legal regulation of the decentralized industry in Germany is ongoing. Today, 16% of the German population aged 18 to 60 are #crypto investors: people who own #cryptocurrencies or have traded them in the past six months.
41% of these #crypto investors intend to increase the share of their investments in #crypto in the next six months. Another 13% of Germans are #crypto-curious and intend to invest in #cryptocurrencies too. Yet only 23% of the #crypto-curious said they are highly likely to invest, with the rest remaining hesitant.
WordsCounted
We are all in the gutter, but some of us are looking at the stars.
-- Oscar Wilde
WordsCounted is a Ruby NLP (natural language processor). WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class. The tokeniser fully supports Unicode: Bayrūt is tokenised as `["Bayrūt"]` and not `["Bayr", "ū", "t"]`, for example.
Installation
Add this line to your application's Gemfile:
gem 'words_counted'
And then execute:
$ bundle
Or install it yourself as:
$ gem install words_counted
Pass in a string or a file path, and an optional filter and/or regexp.
counter = WordsCounted.count(
"We are all in the gutter, but some of us are looking at the stars."
)
# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")
`.count` and `.from_file` are convenience methods that take an input, tokenise it, and return an instance of `WordsCounted::Counter` initialized with the tokens. The `WordsCounted::Tokeniser` and `WordsCounted::Counter` classes can be used alone, however.
WordsCounted.count(input, options = {})
Tokenises input and initializes a `WordsCounted::Counter` object with the resulting tokens.
counter = WordsCounted.count("Hello Beirut!")
Accepts two options: `exclude` and `regexp`. See Excluding tokens from the analyser and Passing in a custom regexp, respectively.
WordsCounted.from_file(path, options = {})
Reads and tokenises a file, and initializes a `WordsCounted::Counter` object with the resulting tokens.
counter = WordsCounted.from_file("hello_beirut.txt")
Accepts the same options as `.count`.
The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.
Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.
#tokenise([pattern: TOKEN_REGEXP, exclude: nil])
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")
# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)
See Excluding tokens from the analyser and Passing in a custom regexp for more information.
The `WordsCounted::Counter` class allows you to collect various statistics from an array of tokens.
#token_count
Returns the token count of a given string.
counter.token_count #=> 15
#token_frequency
Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.
counter.token_frequency
[
["the", 2],
["are", 2],
["we", 1],
# ...
["all", 1]
]
#most_frequent_tokens
Returns a hash where each key-value pair is a token and its frequency.
counter.most_frequent_tokens
{ "are" => 2, "the" => 2 }
#token_lengths
Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.
counter.token_lengths
[
["looking", 7],
["gutter", 6],
["stars", 5],
# ...
["in", 2]
]
#longest_tokens
Returns a hash where each key-value pair is a token and its length.
counter.longest_tokens
{ "looking" => 7 }
#token_density([ precision: 2 ])
Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a `precision` argument, which must be a float.
counter.token_density
[
["are", 0.13],
["the", 0.13],
["but", 0.07 ],
# ...
["we", 0.07 ]
]
#char_count
Returns the char count of tokens.
counter.char_count #=> 76
#average_chars_per_token([ precision: 2 ])
Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.
counter.average_chars_per_token #=> 4
#uniq_token_count
Returns the number of unique tokens.
counter.uniq_token_count #=> 13
You can exclude anything you want from the input by passing the `exclude` option. The exclude option accepts a variety of filters and is extremely flexible: a space-delimited string of tokens, a regular expression, a lambda, a symbol that names a predicate method (for example `:odd?`), or an array of any combination of these.
tokeniser =
WordsCounted::Tokeniser.new(
"Magnificent! That was magnificent, Trevor."
)
# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]
# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]
# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]
# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]
# Using an array
tokeniser = WordsCounted::Tokeniser.new(
"Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]
The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.
/[\p{Alpha}\-']+/
You can pass your own criteria as a Ruby regular expression to split your string as desired.
For example, if you wanted to include numbers, you can override the regular expression:
counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]
Use the `from_file` method to open files. `from_file` accepts the same options as `.count`. The file path can be a URL.
counter = WordsCounted.from_file("url/or/path/to/file.text")
A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.
counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency
[
["do", 2],
["how", 1],
["you", 1],
["-you", 1], # WTF, mate!
["are", 1],
# ...
]
In this example `-you` and `you` are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.
The program will normalise (downcase) all incoming strings for consistency and filtering.
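To see the normalisation in action, note how mixed-case input collapses to a single token (a quick illustration using the methods documented above):
counter = WordsCounted.count("Hello HELLO hello")
counter.tokens            #=> ["hello", "hello", "hello"]
counter.uniq_token_count  #=> 1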
def self.from_url
# open url and send string here after removing html
end
Are you using WordsCounted to do something interesting? Please tell me about it.
Visit this website for one example of what you can do with WordsCounted.
Contributors
See contributors.
1. Create your feature branch (`git checkout -b my-new-feature`)
2. Commit your changes (`git commit -am 'Add some feature'`)
3. Push to the branch (`git push origin my-new-feature`)
)Author: Abitdodgy
Source Code: https://github.com/abitdodgy/words_counted
License: MIT license
#ruby #ruby-on-rails
How to Buy Munch Token (MUNCH) on Uniswap Using MetaMask or Trust Wallet
📺 The video in this post was made by Upcoming Gems
The origin of the article: https://www.youtube.com/watch?v=OT2lRkM_c8s
🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
Thanks for visiting and watching! Please don't forget to leave a like, comment, and share!
#bitcoin #blockchain #munch token #crypto #token #uniswap
SafeMoon is a decentralized finance (DeFi) token that combines RFI tokenomics with an auto-liquidity-generating protocol. A DeFi token like SafeMoon has reached mainstream standards on the Binance Smart Chain. Its success and popularity have been immense, prompting many business firms to adopt this style of cryptocurrency as an alternative.
A DeFi token like SafeMoon works much like other crypto tokens, the main difference being that it charges a 10% fee on each sell transaction, half of which (5% of the transaction) is redistributed to the remaining SafeMoon owners. This feature rewards owners for holding onto their tokens.
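As a back-of-the-envelope illustration of that fee model, here is a small Ruby sketch; the function is our own, and where the non-redistributed half of the fee goes is outside the scope of this post:
def safemoon_sell(amount)
  fee         = amount * 0.10   # 10% fee on the sale
  to_holders  = amount * 0.05   # half of the fee redistributed to holders
  seller_gets = amount - fee
  { seller_gets: seller_gets, redistributed: to_holders, remaining_fee: fee - to_holders }
end
safemoon_sell(1_000)
#=> {:seller_gets=>900.0, :redistributed=>50.0, :remaining_fee=>50.0}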
Read More @ https://bit.ly/3oFbJoJ
#create a defi token like safemoon #defi token like safemoon #safemoon token #safemoon token clone #defi token