The Aluna project aims to address the lack of transparency and **trust** in crypto trading.
Anyone can say what they think about the market, but it’s difficult to prove whether a trader puts their money where their mouth is. Amid the constant noise of Crypto Twitter, Telegram, YouTube channels and other media, it is hard to tell what’s actually being traded versus what’s being promoted — there is simply a lack of transparency and trust.
In 2018, we embarked on the Aluna project with the goal of creating a transparent environment where aspiring crypto traders can thrive, by combining a trading terminal with a social network.
Aluna.Social is ALN’s flagship product — a multi-exchange social trading terminal for crypto traders.
On Aluna.Social, users can connect and manage multiple exchange accounts in one place, verify and share their trading performance in an unforgeable way, leverage community insights and positive social feedback loops, and automatically copy the trades of the world’s best traders (or counter-copy the worst!).
The platform is currently in beta and integrated with BitMEX, Binance, Bitfinex, Bittrex and Poloniex, with more exchange integrations on the way.
Three notable features in the pipeline are:
Aluna (ALN) Token is the utility token at the heart of the Aluna ecosystem.
The core functions of ALN are:
51% of the total ALN supply is distributed to the community, which will govern the remaining 49% held in the treasury through the Aluna DAO.
The total supply of ALN is 100 million.
ALN Token Distribution
ALN Token Vesting
ALN Token Release Schedule
TOKEN SALE: 15 DEC – 29 DEC
Ticker: ALN
Token type: ERC20
ICO Token Price: 1 ALN = 0.1 USD
Fundraising Goal: $1,500,000
Total Tokens: 100,000,000
Available for Token Sale: 5%
Would you like to earn ALN right now? ☞ CLICK HERE
Looking for more information…
☞ Website
☞ Explorer
☞ Source Code
☞ Social Channel
☞ Message Board
☞ Documentation
☞ Coinmarketcap
Create an Account and Trade Cryptocurrency NOW
☞ Binance
☞ Bittrex
☞ Poloniex
Thanks for visiting and reading this article! I highly appreciate your support. Please share if you liked it!
#cryptocurrency #bitcoin #aluna.social #aln
We have updated ALN’s allocation, metrics, and tokenomics based on recent developments shared here; refer to our Whitepaper V1.5 for the full details. This post covers the revised token allocation and the token sale and private sale breakdown, then highlights some tokenomics changes, and ends with our roadmap for the next four quarters.
Under the updated model, 65% of the total ALN supply is distributed to the community, which will govern the remaining 35% held in the treasury through the Aluna DAO. The total supply of ALN remains 100 million.
ALN Token Release Schedule
The treasury allocation has been reduced to accommodate the new token sale allocation. The complete breakdown is as follows:
ALN Token Allocation as of 25th February 2021
25% of the total supply (25,000,000 ALN) is allocated to the Token Sale, divided into four phases held from Q4 2020 to Q1 2021, as detailed in the table below:
Note: All vesting is linear (by block) and starts from the date of exchange listing.
Unsold tokens will be added to the Treasury.
The ALN private sale is currently open and will end on 22nd February 2021, or when the allocated tokens are sold out, whichever comes first.
5% of the total supply (5,000,000 ALN) is allocated to the Private Sale at a price of $0.08/ALN. Tokens will be locked up until exchange listing, followed by 6 months of linear vesting (by block).
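To make the mechanics concrete, here is a toy Ruby sketch of a lockup followed by linear by-block vesting; the block numbers and the blocks-per-six-months figure are hypothetical assumptions, not Aluna’s actual contract parameters.
BLOCKS_PER_SIX_MONTHS = 1_170_000 # assumes ~15s blocks; illustrative only

def vested_amount(total, listing_block, current_block)
  return 0 if current_block < listing_block # locked until exchange listing
  elapsed = current_block - listing_block
  return total if elapsed >= BLOCKS_PER_SIX_MONTHS # fully vested after 6 months
  total * elapsed / BLOCKS_PER_SIX_MONTHS # vests linearly, block by block
end

vested_amount(10_000, 100_000, 685_000) #=> 5000, i.e. halfway through vesting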
Private Sale conditions:
The core functions of the ALN token are to:
Here we highlight some new incentive mechanisms in our latest Whitepaper, namely the Aluna.Social Performance Pool, and ALN’s value accrual design with the addition of the Rewards Pool.
For more information about other incentive mechanisms such as participation mining, holding and staking rewards, refer to our latest Whitepaper.
Up to 50% of fees from Aluna.Social (e.g. payments for PRO subscription plans starting Q2 2021) will be added to a Performance Pool, which is then shared among all profitable leader traders every month.
Up to 50% of ALN fees and 5% of non-ALN fees from ALN-powered platforms and smart contracts (e.g. for prediction games and DeFi social trading) will be added to a Rewards Pool, which is then distributed to ALN stakers and added to the community-owned treasury.
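As a rough illustration of these two flows, here is a toy Ruby sketch; the percentages come from the descriptions above, while the pro-rata profit split and all numbers are assumptions made for illustration.
# Toy model of the Rewards Pool accrual described above.
def rewards_pool_accrual(aln_fees, non_aln_fees)
  aln_fees * 0.50 + non_aln_fees * 0.05 # up to 50% of ALN fees, 5% of non-ALN fees
end

# Toy model of the Performance Pool; a pro-rata split by profit is assumed.
def performance_pool_shares(pool, profits_by_trader)
  winners = profits_by_trader.select { |_, pnl| pnl.positive? }
  total = winners.values.sum.to_f
  winners.transform_values { |pnl| pool * pnl / total }
end

performance_pool_shares(1000.0, "alice" => 300.0, "bob" => 100.0, "carol" => -50.0)
#=> {"alice"=>750.0, "bob"=>250.0}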
We also completed our smart contract audit with CertiK last month, and are in the process of completing a platform infrastructure security audit with Deployflow.
Here’s our roadmap for the next 4 quarters:
Refer to Section 7 of our Whitepaper for the long-term roadmap.
You will first have to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), or Tether (USDT). We will use Binance here, as it is one of the largest crypto exchanges that accepts fiat deposits.
Binance is a popular cryptocurrency exchange which started in China but later moved its headquarters to the crypto-friendly island of Malta in the EU. Binance is best known for its crypto-to-crypto exchange services. It exploded onto the scene in the mania of 2017 and has since gone on to become the top crypto exchange in the world.
Once you have finished the KYC process, you will be asked to add a payment method. Here you can choose to provide a credit/debit card or use a bank transfer. You will be charged higher fees when using cards, but purchases are instant; a bank transfer is cheaper but slower, depending on your country of residence.
Step by Step Guide ☞ What is Binance | How to Create an account on Binance (Updated 2021)
Next step - Transfer your cryptos to an Altcoin Exchange
Since ALN is an altcoin, we need to transfer our coins to an exchange where ALN can be traded. Below is a list of exchanges that offer ALN in various market pairs; head to their websites and register for an account.
Once finished, you will need to make a BTC/ETH/USDT deposit to the exchange from Binance, depending on the available market pairs. After the deposit is confirmed, you can then purchase ALN from the exchange view.
Exchange: Gate.io
Apart from the exchange(s) above, there are a few popular crypto exchanges with decent daily trading volumes and a huge user base. These ensure you will be able to sell your coins at any time, and their fees will usually be lower. It is suggested that you also register on these exchanges; once ALN gets listed there, it will attract a large amount of trading volume from their users, which means some great trading opportunities!
Top exchanges for token and coin trading. Follow the instructions to register and trade:
☞ Binance ☞ Bittrex ☞ Poloniex ☞ Bitfinex ☞ Huobi ☞ MXC ☞ ProBIT ☞ Gate.io ☞ Coinbase
Find more information ALN
☞ Website ☞ Explorer ☞ Explorer 2 ☞ Whitepaper ☞ Source Code ☞ Social Channel ☞ Social Channel 2 ☞ Message Board ☞ Coinmarketcap
Would you like to earn ALN right now? ☞ CLICK HERE
Thanks for visiting and reading this article! I highly appreciate your support. Please share if you liked it!
#blockchain #bitcoin #crypto #aluna.social #aln
WordsCounted
We are all in the gutter, but some of us are looking at the stars.
-- Oscar Wilde
WordsCounted is a Ruby NLP (natural language processor). WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class.
["Bayrūt"]
and not ["Bayr", "ū", "t"]
, for example.Add this line to your application's Gemfile:
gem 'words_counted'
And then execute:
$ bundle
Or install it yourself as:
$ gem install words_counted
Pass in a string or a file path, and an optional filter and/or regexp.
counter = WordsCounted.count(
"We are all in the gutter, but some of us are looking at the stars."
)
# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")
.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.
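For instance, a minimal sketch of driving the two classes directly (assuming Counter is constructed from the token array, as described above):
tokens = WordsCounted::Tokeniser.new("We are all in the gutter").tokenise
counter = WordsCounted::Counter.new(tokens)
counter.token_count #=> 6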
WordsCounted.count(input, options = {})
Tokenises input and initializes a WordsCounted::Counter object with the resulting tokens.
counter = WordsCounted.count("Hello Beirut!")
Accepts two options: exclude and regexp. See Excluding tokens from the analyser and Passing in a custom regexp, respectively.
WordsCounted.from_file(path, options = {})
Reads and tokenises a file, and initializes a WordsCounted::Counter object with the resulting tokens.
counter = WordsCounted.from_file("hello_beirut.txt")
Accepts the same options as .count.
The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.
Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.
#tokenise([pattern: TOKEN_REGEXP, exclude: nil])
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")
# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)
See Excluding tokens from the analyser and Passing in a custom regexp for more information.
The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.
#token_count
Returns the token count of a given string.
counter.token_count #=> 15
#token_frequency
Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.
counter.token_frequency
[
["the", 2],
["are", 2],
["we", 1],
# ...
["all", 1]
]
#most_frequent_tokens
Returns a hash where each key-value pair is a token and its frequency.
counter.most_frequent_tokens
{ "are" => 2, "the" => 2 }
#token_lengths
Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.
counter.token_lengths
[
["looking", 7],
["gutter", 6],
["stars", 5],
# ...
["in", 2]
]
#longest_tokens
Returns a hash where each key-value pair is a token and its length.
counter.longest_tokens
{ "looking" => 7 }
#token_density([ precision: 2 ])
Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.
counter.token_density
[
["are", 0.13],
["the", 0.13],
["but", 0.07 ],
# ...
["we", 0.07 ]
]
#char_count
Returns the char count of tokens.
counter.char_count #=> 76
#average_chars_per_token([ precision: 2 ])
Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.
counter.average_chars_per_token #=> 4
#uniq_token_count
Returns the number of unique tokens.
counter.uniq_token_count #=> 13
You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible: a space-delimited string, a regular expression, a lambda, a symbol naming a predicate method (such as :odd?), or an array combining any of these.
tokeniser =
  WordsCounted::Tokeniser.new(
    "Magnificent! That was magnificent, Trevor."
  )
# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]
# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]
# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]
# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]
# Using an array
tokeniser = WordsCounted::Tokeniser.new(
"Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]
The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.
/[\p{Alpha}\-']+/
You can pass your own criteria as a Ruby regular expression to split your string as desired.
For example, if you want to include numbers, you can override the regular expression:
counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]
Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.
counter = WordsCounted.from_file("url/or/path/to/file.text")
A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.
counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency
[
["do", 2],
["how", 1],
["you", 1],
["-you", 1], # WTF, mate!
["are", 1],
# ...
]
In this example, -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.
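One possible workaround (a sketch, not an official example) is to drop the hyphen from the pattern so it acts as a separator:
counter = WordsCounted.count("How do you do?-you are well, I see.", pattern: /[\p{Alpha}']+/)
counter.token_frequency
#=> [["do", 2], ["you", 2], ["how", 1], ...] # tie order may vary; no stray "-you"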
The program will normalise (downcase) all incoming strings for consistency and for filter matching.
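For example, mixed-case duplicates collapse into a single token:
WordsCounted.count("Hello HELLO hello").token_frequency
#=> [["hello", 3]]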
def self.from_url(url, options = {})
  # Sketch of the planned method: fetch the URL, strip HTML tags,
  # and count the remaining text. Not yet implemented in the gem.
  require "open-uri"
  count(URI.open(url).read.gsub(/<[^>]*>/, " "), options)
end
Are you using WordsCounted to do something interesting? Please tell me about it.
Visit this website for one example of what you can do with WordsCounted.
Contributors
See contributors.
Create your feature branch (git checkout -b my-new-feature), commit your changes (git commit -am 'Add some feature'), then push the branch (git push origin my-new-feature) and open a pull request.
Author: Abitdodgy
Source Code: https://github.com/abitdodgy/words_counted
License: MIT license
#ruby #ruby-on-rails
SafeMoon is a decentralized finance (DeFi) token that combines RFI tokenomics with an auto-liquidity-generating protocol. Built on Binance Smart Chain, it has reached mainstream standards; its success and popularity have been immense, leading many firms to adopt this style of cryptocurrency as an alternative.
A DeFi token like SafeMoon works much like any other crypto token, with one key difference: it charges a 10% fee on every sell transaction, of which half (5% of the transaction) is redistributed to the remaining SafeMoon holders. This feature rewards owners for holding onto their tokens.
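As a toy illustration of that split (the 10% and 5% figures follow the description above; the concrete numbers and the assumption that the remaining half of the fee feeds the auto-liquidity pool are illustrative):
sale = 1_000.0 # hypothetical sell of 1,000 tokens
fee = sale * 0.10 # 10% fee charged on the sell
to_holders = sale * 0.05 # half the fee, redistributed to existing holders
to_liquidity = fee - to_holders # remainder assumed to feed auto-liquidity
seller_receives = sale - fee #=> 900.0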
Read More @ https://bit.ly/3oFbJoJ
#create a defi token like safemoon #defi token like safemoon #safemoon token #safemoon token clone #defi token