What is Konomi Network (KONO) | What is Konomi Network token | What is KONO token

About Konomi

Konomi is a full-suite asset management solution for cross-chain crypto assets. Users can manage their crypto positions, trade assets and earn interest through decentralized money market products.

In-depth analysis of KONOMI collateral and liquidation model

1. Introduction

The year 2020 was a memorable one for the financial industry, as the global economy experienced unprecedented challenges and faced its greatest crisis since World War II. Traditional financial sectors such as the US securities market saw tremendous instability due to COVID-19, the US government's quantitative easing policy and the US elections. As a result, a large number of SMEs suffered business setbacks because they were unable to obtain loans from traditional financial institutions, often because the value of their collateral had eroded. This phenomenon has raised concerns among investors and entrepreneurs about the future viability of traditional financial models in the modern world. To better adapt to an ever-evolving world, Konomi developed its own lending platform on the Polkadot ecosystem to tackle the existing problems and flaws of traditional money-lending platforms. Polkadot is built on Substrate, which allows more flexible transactions within a shared security umbrella and has a comparatively lower entry barrier for general users. For the blockchain industry and traditional financial intermediaries, Polkadot is not only a gate-crasher but a complete game-changer.

In the near future, the Polkadot ecosystem aims to establish a fully decentralised Web 3.0, giving all users genuine autonomy and control. In other words, each user will be free from the influence of any central authority and will have full control over their identity and data. The Polkadot ecosystem aims to enhance usability and user experience by connecting private chains, consortium chains, public chains, open networks and oracles, as well as future technologies yet to be realised. In doing so, independent blockchains will also be able to exchange information and conduct transactions through Polkadot.

The complex logic and powerful engineering behind Polkadot have brought unprecedented development to the blockchain ecosystem and opened new perspectives for investors to explore. Compared to the rapid development of DeFi on Ethereum, Polkadot's DeFi ecosystem is still at an early stage, with enormous potential yet to be tapped. As a decentralised lending project, Konomi regards the security of users' assets as its top priority. To make sure that all users are well protected, Konomi combines mathematical modelling with empirical evidence from economic theory, adding adjustment variables supported by empirical data to the mathematical model, becoming a pioneer in the industry in integrating theory and empirical evidence. This also provides a new way of dealing with the "black swan" events that can be prominent in the blockchain industry.

2. Introduction to the KONOMI Collateral and Liquidation Model

The Konomi liquidation model has undergone rigorous mathematical derivation and empirical testing against historical data. The team conducted an empirical economic analysis using years of high-frequency historical data from three major exchanges: Binance, Huobi and OKEx. Taking into account the periodicity of cryptocurrency prices and "black swan" events, we reached a conclusion in line with the team's initial expectations. With strong research capabilities and constant testing, we can safely say that the model protects the legitimate rights of both lenders and borrowers even under extreme circumstances.

2.1 Introduction to the collateral model

[Figure: the collateral model]

2.2 Introduction to the Liquidation Model

Based on empirical simulations and with reference to findings from other projects, our team believes that the relationship that triggers liquidation on this platform is:

[Figure: the liquidation trigger relationship]
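The trigger relationship itself is given in the figure above. As a hedged illustration only, a common form used by comparable lending protocols compares the outstanding borrow value against the collateral value scaled by a collateral factor; the function name and the 0.75 factor below are assumptions for illustration, not Konomi's published parameters:

```python
def should_liquidate(collateral_value, borrow_value, collateral_factor=0.75):
    """Return True when the borrow exceeds the allowed fraction of collateral.

    collateral_factor is a hypothetical parameter; Konomi's actual
    trigger relationship is the one shown in the figure above.
    """
    borrow_limit = collateral_value * collateral_factor
    return borrow_value > borrow_limit

# Example: $1,000 of collateral supports up to $750 of borrowing.
print(should_liquidate(1000.0, 800.0))  # True: position is undercollateralized
print(should_liquidate(1000.0, 700.0))  # False: position is healthy
```

A real protocol would also apply a liquidation penalty and partial close factors; those details are omitted here.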

3. Factor Identification in the Collateral and Liquidation model

Taking traditional finance's mean-variance criterion, arbitrage pricing theory and multifactor models as its foundation, and considering cryptocurrencies' current high volatility, high tail risk and large differences in trading volume between currencies, the team proposes an innovative model to determine the factors used in the platform's collateral and liquidation model.

[Figure: the factor model specification]

3.1 Introduction of Factor Selection Methods

In traditional financial markets, factor selection covers fundamental factors (valuation, cash flow, profitability, etc.) and technical factors (momentum indicators, CYE, etc.). However, the opaque nature of cryptocurrencies and their high volatility mean that fundamental information is either missing or deviates widely. This model therefore uses technical factors in its initial stage, classified into eight types, including the BRAR Index, Variable Distribution Index, Directional Movement Index, Volume Index, Inverse Directional Movement Index and Relative Strength Index.

When further examined,

  • Directional Movement Index includes ACD, BBI, BIAS, etc.;
  • Inverse Directional Movement Index includes CCI, KDJ, etc.;
  • BRAR Index includes ARBR, CR, VR, etc.;
  • Volume Index includes PSY, VOSC, VSTD, etc.
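To make two of the indicator families above concrete, here is a minimal sketch of RSI (simple-average variant) and BIAS computed from a raw price series; the exact indicator formulas and parameters Konomi uses are not specified in this article, so these are textbook definitions only:

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes
    (simple-average variant, not Wilder's smoothed version)."""
    changes = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = sum(-c for c in recent if c < 0)
    if losses == 0:
        return 100.0          # all moves were gains
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def bias(prices, period=6):
    """BIAS: percentage deviation of the latest price from its moving average."""
    ma = sum(prices[-period:]) / period
    return (prices[-1] - ma) / ma * 100.0
```

For example, `bias([1.0, 2.0, 3.0], period=3)` measures how far the last price sits above its 3-period average.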

3.2 Brief Description of The Empirical Methodology

The team has conducted a rigorous empirical analysis based on the above-mentioned basic pricing model to determine the specific factors to be used for each currency and their respective weightings.

The model uses high-frequency trading data (sampled at 0.5-millisecond intervals daily) for each currency over the last three years (where a currency has existed for less than three years, the data runs from its first day of trading to 10 January 2021). The methods used include OLS, 2OLS, mixed estimation models and knowledge graphs, building a quantitative rate-of-return index system for each currency.

3.3 Filtering Factors and How to Determine their Validity

Factor selection is the central part of the model. Since using an excessive number of factors impairs the interpretability of the model, only crucial factors should be selected while keeping the residual term εᵢ as low as possible. Regression methods and ranking algorithms are commonly used for factor selection, but given the late development of the cryptocurrency market and the limited amount of relevant empirical research, the regression method is chosen here.

This method is divided into two main parts. Firstly, validity of the factors affecting asset returns is tested and verified, followed by the elimination of valid but relatively redundant factors.

3.3.1 Determining Validity of Factors

In this section, individual factors are used as explanatory variables and the predicted variable is the rate of return of a cryptocurrency, on which basis a linear regression model is constructed. The validity of each factor is assessed comprehensively from its estimated coefficient β, the goodness of fit of the equation and the t-value of the hypothesis-test statistic. At this stage of model construction the data has both individual and time characteristics, so a pooled regression model is used to form a new series of data without individual and time information, and OLS (ordinary least squares) regression is then applied to estimate the parameters. This removes individual and time characteristics from the data panel. Moreover, the parameters obtained from the regression are not sensitive to when the data was acquired, allowing more robust factors to be identified.

[Figure: single-factor validity test]
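The pooled-OLS step described above can be sketched as follows. This is a minimal NumPy illustration with illustrative variable names, not Konomi's actual estimation code; it reports exactly the three quantities the text mentions (coefficients β, goodness of fit, t-values):

```python
import numpy as np

def pooled_ols(X, y):
    """Estimate beta, R^2 and t-values for y = X @ beta + eps.

    X: (n, k) matrix of factor exposures pooled across assets and time
       (an intercept column is added here); y: (n,) rates of return.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate
    resid = y - X @ beta
    rss = resid @ resid
    tss = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - rss / tss                          # goodness of fit
    sigma2 = rss / (n - X.shape[1])               # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    t_values = beta / se                          # per-coefficient t-stats
    return beta, r2, t_values
```

A factor would then be judged valid when its |t-value| clears a significance threshold and the equation's R² is acceptable.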

3.3.2 Eliminating Valid but Relatively Redundant Factors

Since individual factors are, to some extent, closely interrelated, simply selecting the effective factors is not sufficient. In this section, multiple factors are selected as explanatory variables and jointly regressed on the cryptocurrency's rate of return. The regression coefficients from the joint regression, the goodness of fit, and the F- and t-statistics from hypothesis testing are combined with the correlation coefficients between factors. Redundant factors are then removed to obtain the final factor set.

[Figure: redundant-factor elimination]
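One simple way to realise the redundancy-elimination step is a greedy filter on pairwise correlations: keep each factor only if it is not too correlated with any factor already kept. The 0.9 threshold and the helper below are illustrative assumptions, since the article does not give the exact elimination rule:

```python
import numpy as np

def drop_redundant(factors, names, threshold=0.9):
    """Greedily drop factors whose absolute pairwise correlation with an
    already-kept factor exceeds `threshold` (illustrative threshold).

    factors: (n_obs, n_factors) array; names: factor labels.
    """
    corr = np.corrcoef(factors, rowvar=False)
    kept = []
    for j in range(factors.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]
```

In practice the joint-regression statistics described above (F, t, R²) would be consulted alongside the correlation matrix before dropping a factor.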

4. Confirmation of Factors

4.1 Determining the Impact Factor

Using the lending models of DOT and KSM as examples, the following factors are projected to significantly affect the currencies' rates of return.

[Figures: factors projected to affect the rates of return of DOT and KSM]

5. Innovations and Contributions of this model

Our team believes this model can make significant contributions in the following respects.

  • A breakthrough from the purely mathematical, model-only approach of traditional DeFi project modelling. Cryptocurrency lending projects commonly rely on mathematical models alone to describe the role of participants. However, due to factors such as the market's late development, inadequate historical data and the inexperience of practitioners and traders, most models still lack explanatory power in the face of extreme events such as "black swans", incurring great losses for both investors and platforms.
  • A pioneering interdisciplinary project bridging digital and traditional finance, which leverages the empirical testing of classical finance models and integrates it organically with emerging digital finance, eventually establishing a digital-finance lending platform model with both a theoretical foundation and strong feasibility.
  • Empirical tests using historical cryptocurrency data ensure the model's feasibility and practicality, while deepening and enriching the theoretical basis and empirical methodology for cryptocurrencies in the current Polkadot ecosystem.

This model also offers the following innovations.

  • A comprehensive understanding and analysis of "black swan" events. Most traditional finance firms and previous cryptocurrency projects have used VaR (Value-at-Risk) and ES (Expected Shortfall) to infer tail correlation, which often underestimates the risk of a "black swan" event. Our model, by contrast, is designed to take full account of tail risk by incorporating human adjustment variables into the model from the earliest stage. Our ultimate goal is for our risk distribution to approach the "Holy Grail" distribution in the future.
  • The model is based on rigorous and detailed empirical analysis. Drawing on the founding team's background in finance, statistics and big data science, the team has made innovative use of a variety of big-data empirical methods and statistical theories to provide an empirical basis for selecting the model's moderation variables across different markets.
  • The model is timely and has great potential for further development. It is constructed with innovative adjustment variables, which increase its versatility in adapting to current market conditions. As the amount of data grows, more data nodes can be added to the empirical tests, giving the model more explanatory power and strong potential for further growth.
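As a reference point for the first bullet above, the standard historical VaR and ES the text contrasts itself with can be estimated from a return series as follows. This is a textbook sketch, not the adjustment mechanism Konomi proposes:

```python
def var_es(returns, alpha=0.95):
    """Historical Value-at-Risk and Expected Shortfall at level alpha.

    `returns` are simple period returns; losses are the negated returns.
    VaR is the loss at the alpha quantile; ES is the mean loss beyond it.
    """
    losses = sorted(-r for r in returns)
    cutoff = int(alpha * len(losses))
    var = losses[cutoff]                 # loss at the alpha quantile
    tail = losses[cutoff:]               # losses at or beyond VaR
    es = sum(tail) / len(tail)           # average tail loss
    return var, es
```

Because ES averages over the whole tail rather than reading off a single quantile, it is less prone (though still not immune) to the tail-risk underestimation the article criticises.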

6. Future Adaptations of the Model

The team believes the model is highly universal and can therefore play an important role in future subordinated loan products, options products and fixed-rate lending products.

Join the Konomi Ambassador Program


Konomi is a decentralized liquidity and money market protocol for cross-chain crypto assets.

Since its establishment, Konomi has strived to enable financial applications on Ethereum to connect seamlessly to the Polkadot ecosystem, providing users with a complete, all-round asset management experience.

Konomi was founded by a close-knit team of engineers and tech entrepreneurs based in Singapore who share the same passion and professionalism. Together, we build and develop the project on a common core set of values that we believe are in line with the tenets of Web 3.0.

The team came together out of a deep desire to build indispensable financial infrastructure for Web 3.0, and the community we are building is guided by the same values and vision. Being part of building Web 3.0 means making true and meaningful change in the world, and we want to empower our community members to take part in that progress and embrace Web 3.0 together with Konomi. Today, the team is pleased to announce the launch of our core community-building initiative: the "Konomian" Ambassador Program.

The program aims to increase community involvement by bringing more community members into the project and, through this concerted effort, to enhance the growth and prosperity of the Polkadot ecosystem.

What we are looking for:

We are looking for individuals and organizations that are passionate about Web 3.0 and believe in Konomi's potential to join us as ambassadors. We will set different roles and career paths for selected ambassadors.

Roles of ambassadors:

  • Technical Contributor - contribute code or improve documentation related to the project
  • Content Creator - create informative and engaging content about the project, such as videos, blog posts and in-depth analyses
  • Meet-up/Event Organizer - organize online and offline events for the local community and participate in official events organized by Konomi
  • Translator - translate English content for your local community
  • Community Moderator - manage and promote the official community channels and ensure they remain informative, engaging and friendly to all members
  • HR Explorer - scout and nominate suitable talent from the community to join the project as part-time or full-time employees

Benefits for our ambassadors:

  • Access - attend meetings and events with team members, get training opportunities from our technical team, and get early access to new features
  • Funding - receive bounties and grants from the Konomi Fund for your contributions
  • Swag - the opportunity to receive customized, Konomi-branded merchandise
  • Career - we offer career opportunities for outstanding ambassadors

Application Process:

Please email us in the format below and we will contact you as soon as possible if you match the requirements. All selected ambassadors will join an orientation program organized by core team members for quick onboarding. During orientation, candidates will learn more about the project and have the chance to discuss progress with the team and share their goals and ideas.

Email Format:

Email Address: info@konomi.network

CC: jayden@konomi.network, yuqing@konomi.network, arielho@konomi.network

Subject: Application for Konomi Ambassador Program

Please attach a copy of your CV and let us know why you want to join this program.

Should you have any inquiries, please contact jayden@konomi.network.

We look forward to your application!

Looking for more information…

☞ Website: https://www.konomi.network/
☞ Twitter: https://twitter.com/KonomiNetwork
☞ Telegram: https://t.me/konominetwork
☞ Telegram Announcements: https://t.me/konominetworkchannel
☞ Reddit: https://www.reddit.com/r/KonomiNetwork/


#blockchain #bitcoin #crypto #konominetwork #kono

What is GEEK

Buddha Community

What is Konomi Network (KONO) | What is Konomi Network token | What is KONO token

What is Konomi Network (KONO) | What is Konomi Network token | What is KONO token

About Konomi

Konomi is a full suite asset management solution for cross-chain crypto assets. Users could manage their crypto holding positions, trade assets and earn interest through decentralized money market products.

In-depth analysis of KONOMI collateral and liquidation model

1. Introduction

The year 2020 is definitely a memorable year in the chronicles for the financial industry as the global economy experienced the most unprecedented challenges and faced the greatest crisis ever since World War II. Traditional financial sectors such as the US securities market have experienced tremendous unstability due to COVID-19, the quantitative monetary easing policy of the US government as well as the US government elections. As a result, A large number of SMEs faced business setbacks as they were unable to obtain loans from traditional financial enterprises due to various reasons like the erosion of collateral values. Such phenomenon has triggered the concern of investors and entrepreneurs about the future viability of traditional financial models in the modern world context. To better adapt to the ever-evolving world, Konomi developed its own lending platform through the Polkadot ecosystem to tackle the existing problems and flaws of traditional money lending platforms. Polkadot is a platform based on Substrate, which is more flexible for transactions within the shared security umbrella and has a comparatively lower entry requirement for general users. For the blockchain industry and traditional financial intermediary platforms, Polkadot is not only a gate-crasher, but also a complete game-changer.

In the near future, Polkadot Ecosystem will establish a complete decentralised Web 3.0, providing all users with absolute autonomy and control. In other words, each user will be free from the influence of all central authority and will have full control of their identity and data management. The Polkadot Ecosystem aims to enhance userbility and user experience by connecting private chains, alliance chains, public chains, open networks and oracle machines, as well as future technologies yet to be realised. In doing so, independent blockchains will also be able to exchange information and conduct transactions through the Polkadot Ecosystem.

The complex logic and powerful engineering capabilities supporting Polkadot Ecosystem brought unprecedented development to the blockchain ecosystem and provided new perspectives for investors to explore. Compared to the rapid development of DeFi on Ethereum, Polkadot’s DeFi ecosystem is still at an initial stage with unlimited potential to be delved into. Konomi, as a decentralised lending project, perceive the security of all users’ assets as the top priority. In order to make sure that all users are well-protected, Konomi innovatively adopts the method of combining mathematical modelling and empirical evidence of economic theories, adding reconciliation variables supported by empirical data to the mathematical model, becoming the pioneer in the industry to integrate theory and empirical evidence. Further, it also provides a new resolution in dealing with “black swan” events which can be prominent in the blockchain industry.

2. Introduction to the KONOMI Collateral and Liquidation Model:

The Konomi liquidation model has undergone rigorous mathematical derivation and empirical testing of historical data. The team has conducted an empirical analysis of economics using years of high frequency historical data from the three major Exchanges, namely Binance, Huobi and OKEX. Taking into consideration the periodicity of cryptocurrency prices and “Black Swan” events, we have reached a conclusion that proves to be in line with the team’s initial expectations. With strong research ability and constant testing, we can safely say that the model is able to protect the legitimate rights of traders in both lending and borrowing even under extreme circumstances.

2.1 Introduction to the collateral model

Image for post

2.2 Introduction to the Liquidation Model

Based on empirical simulations and with reference to findings from other projects, our team believes that the relationship that triggers liquidation on this platform is:

Image for post

3. Factor Identification in the Collateral and Liquidation model

Taking the theories of Traditional Finance’s Mean-Variance Criterion, Arbitrage Pricing Theory and Multifactor Model as the fundament, taking into consideration cryptocurrencies’ high volatility at this stage, high tail risk and a large differential gap in trading volume between currencies, the team proposes an innovative model to determine the various factors in the collateral and liquidation model on this platform.

Image for post

3.1 Introduction of Factor Selection Methods

In traditional finance markets, selection of factors includes fundamental-related factors (valuation, cash flow, profitability etc.) and technical factors (momentum index, CYE etc.). However, the non-disclosure nature of cryptocurrencies and its high volatility resulted in a lack of fundamental-related information or large deviations. Therefore, this model uses technical factors in the early stage of setting up, classifying them into 8 types of factors including BRAR Index, Variable Distribution Index, Directional Movement Index, Volume Index, Inverse Directional Movement Index, Relative Strength Index etc.

When further examined,

  • Directional Movement Index include ACD, BBI, BIAS etc;
  • Inverse Directional Movement Index include CCI, KDJ etc.;
  • BRAR Index include ARBR, CR, VR, etc.;
  • Volume Index include: PSY, VOSC, VSTD, etc.

3.2 Brief Description of The Empirical Methodology

The team has conducted a rigorous empirical analysis based on the above-mentioned basic pricing model to determine specific factors to be used for each currency and their respective weightage.

Data used for this model is high frequency trading data (0.5 millisecond daily) of each currency over the last three years (if the currency has yet existed for three years, the date chosen is the first day of trading to 10 January 2021). The methods to be used include OLS, 2OLS, Mixed Estimation Models, and Knowledge Graph to build a quantitative rate of return system index for each currency.

3.3 Filtering Factors and How to Determine their Validity

Selection of factors is the central part of the model. Since using an excessive number of factors to construct the model will impair the interpretability of the model, the model should select only crucial factors while keeping𝜀𝑖as low as possible. Regression method and ranking algorithnm are commonly used for factor selection, but in view of the late development of the cryptocurrency market and the limited amount of relevant empirical studies, regression method is thereby chosen in this context.

This method is divided into two main parts. Firstly, validity of the factors affecting asset returns is tested and verified, followed by the elimination of valid but relatively redundant factors.

3.3.1 Determining Validity of Factors

In this section, independent factors are used as explanatory variables and the predicted variable is projected to be the rate of return of cryptocurrency based on which a linear regression model is constructed. The validity of the individual factors is assessed comprehensively by obtaining the corresponding coefficients β for each factor, the Goodness of Fit of the equation, and the t-value of the hypothesis testing statistic. At this stage of model construction, the data has both individual and time characteristics, hence a Pooled Regression Model is used to form a new series of data without ndividual and time information, and then OLS (Ordinary Least Squares) regression equations are applied to estimate the parameters. In doing so, it ensures the removal of individual and time characteristics from the data panel. Moreover, the parameters obtained from the regression are not sensitive to the time of data acquisition, thus allowing more solid factors to be taken into account.

Image for post

3.3.2 Eliminating Valid but Relatively Redundant Factors

Since different individual factors will be closely interrelated to a certain extent, the approach of simply selecting the effective factors is not sufficient. In this section, multiple factors are selected as explanatory variables and are jointly regressed on the rate of return of cryptocurrency. The regression coefficients obtained from the joint regression, the Goodness of Fit, the hypothesis testing statistic F and t-value are combined with the correlation coefficients between the factors. The redundant factors are then removed to obtain the finalised factors.

Image for post

4. Confirmation of Factors

4.1 Determining the Impact Factor

Using Dot and KSM’s lending model as examples, the following factors are projected to affect the rate of return of currencies significantly.

Image for post

Image for post

Image for post

5. Innovations and Contributions of this model

Our team feels that this model has the potential to make significant contributions in the following aspects.

  • Breakthrough from the mathematical and model-only theory in traditional Defi project modelling. Lending projects for cryptocurrencies commonly rely on mathematical models alone to describe the role of participants. However, due to factors such as its late development, inadequate historical data and inexperience of practitioners and traders, most models still lack explanatory power in the face of extreme events such as “Black Swans”. As a result, great losses are incurred for both investors and platforms.
  • Pioneering interdisciplinary project between digital finance and traditional finance in the industry which leverages on the advantages of empirical testing of classical traditional finance models and the organic integration of emerging digital finance, eventually establishing a digital finance lending platform model with both theoretical foundation and strong feasibility.
  • By means of empirical tests using historical data on cryptocurrencies, high feasibility and practicality of the model is ensured. Further, the theoretical basis and empirical methodology of cryptocurrencies in the current Polkadot Ecosystem are deeply entrenched and enriched.

Meanwhile, this model entails the following innovations.

  • Comprehensive understanding and analysis of “Black Swan” events. Majority of traditional finance firms and previous cryptocurrency projects have used Var and ES to invert the tail correlation, which often underestimates the risk of a “Black Swan” event. However, our model is designed to take full account of tail risk evaluation by incorporating human adjustment variables into the model at the earliest stage. Our ultimate goal is to enable our risk distribution to meet the Holy Grail distribution in the future.
  • The model is based on rigorous and detailed empirical analysis. With the founding team’s background experience in finance, statistics and big data science, the team has made revolutional use of a variety of big data empirical methods and statistical theories to provide empirical basis for the selection of the moderation variables in this model across different markets.
  • The model is time-sensitive and has great potential to make further development. The model is constructed with innovative adjustment variables, which increases the model’s versatility to adapt to current market conditions and therefore has a strong timeliness. At the same time, as the amount of data increases, more data nodes can be added to the empirical test to bestow the model with more explanatory power, creating strong potential in its further growth.

6. Future Adaptations of the Model

The team believes that the model has an extremely high level of universality and can therefore play an important role in future subordinated loan products, option products and fixed rate lending products.

Join the Konomi Ambassador Program

Image for post

Konomi is a decentralized liquidity and money market protocol for cross-chain crypto assets.

Upon its establishment, Konomi thrives to enable financial applications on Ethereum to seamlessly connect to Polkadot ecosystem where it provides users with a complete and all-rounded asset management experience.

The Konomian team is founded by a close-knit team of engineers and Tech entrepreneurs based in Singapore with the same passion and professionality. Together, we build and further develop the project on a common core set values we believe are in line with the tenets of Web 3.0.

The team came together because of a deep desire to build an indispensable financial infrastructure for Web 3.0. Therefore, the community we are building is also guided by these values and vision. Being part of building Web 3.0 means impacting true and meaningful change on the world and we want to empower our community members to take part in the progress and embrace Web 3.0 together with Konomi. Today, the team would like to announce the launch of our core community building program-the “Konomian” Ambassador Program”.

The program aims to increase the level of community involvement by integrating more community members to take part in our project and enhance the growth and prosperity of Polkadot ecology with a concerted effort.

What we are looking for:

We are looking for individuals and organizations that are passionate about Web 3.0 and believe in the potential of Konomi to join us as our ambassador. We will set different roles and career paths for the selected ambassadors.

Roles of ambassadors:

  • Technical Contributor- Contributing code or improving documentation related to our project
  • Content Creator-Create informative and engaging content such as videos, blog posts, in-depth analysis etc. about the project
  • Meet-up/Event organizer- organize online and offline events for local community and participate in official events organized by Konomi.
  • Translator-Translate given English content for your local community
  • Community Moderator-Manage and promote the official community channels and ensure they are always informative, engaging and friendly to all members
  • HR Explorer- Explore and nominate suitable talents from the community to join the project as part-time/ full-time employee

Benefits of our ambassador:

  • Access- Attend meetings and events with the team member. Get training opportunities from our technical team and early access to our new features
  • Funding-Receive bounty and grant from the Konomi Fund for your contribution
  • SWAG-Opportunity to get customized and branded items from the official
  • Career- We offer career opportunities for desirable ambassadors

Application Process:

Please Email us in the format below and we will contact you ASAP once you match the requirements. All selected ambassadors will join the orientation program organized by core team members for quick onboard. During the orientation program, candidates will learn more about the project and have a chance to discuss with the team about the progress and share their goals and ideas.

Email Format:

Email Address:  info@konomi.network

CC: jayden@konomi.network,  yuqing@konomi.network,  arielho@konomi.network

Topic: Application for Konomi Ambassador Program

Please attach a copy of your CV and let us know why you want to join this program

Should you have any inquiry, please contact  jayden@konomi.network

We are looking forward to your application !

Looking for more information…

☞ Website: https://www.konomi.network/
☞ Twitter: https://twitter.com/KonomiNetwork
☞ Telegram: https://t.me/konominetwork
☞ Telegram Announcements: https://t.me/konominetworkchannel
☞ Reddit: https://www.reddit.com/r/KonomiNetwork/


Thanks for visiting and reading this article! Please share it if you liked it!

#blockchain #bitcoin #crypto #konomi network #kono

Royce  Reinger


WordsCounted: A Ruby Natural Language Processor

WordsCounted

We are all in the gutter, but some of us are looking at the stars.

-- Oscar Wilde

WordsCounted is a Ruby NLP (natural language processor) that lets you implement powerful tokenisation strategies with a very flexible tokeniser class.

Features

  • Out of the box, get the following data from any string or readable file, or URL:
    • Token count and unique token count
    • Token densities, frequencies, and lengths
    • Char count and average chars per token
    • The longest tokens and their lengths
    • The most frequent tokens and their frequencies.
  • A flexible way to exclude tokens from the tokeniser. You can pass a string, regexp, symbol, lambda, or an array of any combination of those types for powerful tokenisation strategies.
  • Pass your own regexp rules to the tokeniser if you prefer. The default regexp filters special characters but keeps hyphens and apostrophes. It also plays nicely with diacritics (UTF and unicode characters): Bayrūt is treated as ["Bayrūt"] and not ["Bayr", "ū", "t"], for example.
  • Opens and reads files. Pass in a file path or a url instead of a string.
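The diacritics behaviour described above can be sanity-checked in plain Ruby using the default token pattern quoted later in this README (under Passing in a custom regexp); this sketch needs no gem installed:

```ruby
# The README's default token pattern: letters (any script), hyphens, apostrophes.
TOKEN_REGEXP = /[\p{Alpha}\-']+/

# `scan` returns each full match, so the macron in "Bayrūt" stays attached
# to the token rather than splitting it into fragments.
p "Bayrūt is beautiful".scan(TOKEN_REGEXP)
# => ["Bayrūt", "is", "beautiful"]
```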

Installation

Add this line to your application's Gemfile:

gem 'words_counted'

And then execute:

$ bundle

Or install it yourself as:

$ gem install words_counted

Usage

Pass in a string or a file path, and an optional filter and/or regexp.

counter = WordsCounted.count(
  "We are all in the gutter, but some of us are looking at the stars."
)

# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")

.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.

API

WordsCounted

WordsCounted.count(input, options = {})

Tokenises input and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.count("Hello Beirut!")

Accepts two options: exclude and pattern. See Excluding tokens from the tokeniser and Passing in a custom regexp respectively.

WordsCounted.from_file(path, options = {})

Reads and tokenises a file, and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.from_file("hello_beirut.txt")

Accepts the same options as .count.

Tokeniser

The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation and apply a powerful filter with any combination of rules, as long as they can boil down to a lambda.

Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.

#tokenise([pattern: TOKEN_REGEXP, exclude: nil])

tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise

# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")

# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)

See Excluding tokens from the tokeniser and Passing in a custom regexp for more information.

Counter

The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.

#token_count

Returns the token count of a given string.

counter.token_count #=> 15

#token_frequency

Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.

counter.token_frequency

[
  ["the", 2],
  ["are", 2],
  ["we",  1],
  # ...
  ["all", 1]
]

#most_frequent_tokens

Returns a hash where each key-value pair is a token and its frequency.

counter.most_frequent_tokens

{ "are" => 2, "the" => 2 }

#token_lengths

Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.

counter.token_lengths

[
  ["looking", 7],
  ["gutter",  6],
  ["stars",   5],
  # ...
  ["in",      2]
]

#longest_tokens

Returns a hash where each key-value pair is a token and its length.

counter.longest_tokens

{ "looking" => 7 }

#token_density([ precision: 2 ])

Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.

counter.token_density

[
  ["are",     0.13],
  ["the",     0.13],
  ["but",     0.07 ],
  # ...
  ["we",      0.07 ]
]

#char_count

Returns the char count of tokens.

counter.char_count #=> 76

#average_chars_per_token([ precision: 2 ])

Returns the average char count per token, rounded to a precision of two decimal places by default. Accepts a precision argument, which must be a float.

counter.average_chars_per_token #=> 4

#uniq_token_count

Returns the number of unique tokens.

counter.uniq_token_count #=> 13

Excluding tokens from the tokeniser

You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible.

  1. A space-delimited string. The filter will normalise the string.
  2. A regular expression.
  3. A lambda.
  4. A symbol that names a predicate method. For example :odd?.
  5. An array of any combination of the above.

tokeniser =
  WordsCounted::Tokeniser.new(
    "Magnificent! That was magnificent, Trevor."
  )

# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]

# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]

# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]

# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]

# Using an array
tokeniser = WordsCounted::Tokeniser.new(
  "Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
  exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]

Passing in a custom regexp

The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.

/[\p{Alpha}\-']+/

You can pass your own criteria as a Ruby regular expression to split your string as desired.

For example, if you wanted to include numbers, you can override the regular expression:

counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]

Opening and reading files

Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.

counter = WordsCounted.from_file("url/or/path/to/file.text")

Gotchas

A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.

counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency

[
  ["do",   2],
  ["how",  1],
  ["you",  1],
  ["-you", 1], # WTF, mate!
  ["are",  1],
  # ...
]

In this example -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.

A note on case sensitivity

The program will normalise (downcase) all incoming strings for consistency and filters.
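As a rough illustration of that normalisation step (a minimal sketch, not the gem's actual implementation): downcase the input first, then extract tokens with the default pattern.

```ruby
# Sketch of the tokeniser's case handling: normalise (downcase) the input,
# then scan with the README's default pattern. Plain Ruby, no gem required.
TOKEN_REGEXP = /[\p{Alpha}\-']+/

def tokenise(input)
  input.downcase.scan(TOKEN_REGEXP)
end

p tokenise("Hello HELLO Beirut!")
# => ["hello", "hello", "beirut"]
```

Because every token is downcased up front, "Hello" and "HELLO" count as the same token in frequency statistics, and string filters passed to exclude are matched case-insensitively in effect.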

Roadmap

Ability to open URLs

def self.from_url
  # open url and send string here after removing html
end

Are you using WordsCounted to do something interesting? Please tell me about it.


RubyDoc documentation.

Demo

Visit this website for one example of what you can do with WordsCounted.


Contributors

See contributors.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Author: Abitdodgy
Source Code: https://github.com/abitdodgy/words_counted 
License: MIT license

#ruby #nlp 


Lisa joly


PAID NETWORK Review, Is it worth Investing in? Token Sale Coming Soon !!

Hey guys, in this video I review PAID NETWORK, a DeFi project that aims to solve complex legal processes using decentralised protocols and DeFi products in 2021.

PAID Network is an ecosystem DApp that leverages blockchain technology to deliver DeFi-powered SMART Agreements that make business exponentially more efficient. It allows users to create their own policies to ensure they get PAID.

📺 The video in this post was made by Crypto expat
The origin of the article: https://www.youtube.com/watch?v=ZIU5javfL90
🔺 DISCLAIMER: This article is for information sharing. The content of this video is solely the opinions of the speaker, who is not a licensed financial advisor or registered investment advisor. It is not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you're a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginners
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!

#bitcoin #blockchain #paid network #paid network review #token sale #paid network review, is it worth investing in? token sale coming soon !!

aaron silva


SafeMoon Clone | Create A DeFi Token Like SafeMoon | DeFi token like SafeMoon

SafeMoon is a decentralized finance (DeFi) token that combines RFI tokenomics with an auto-liquidity generating protocol. A DeFi token like SafeMoon has reached mainstream status on the Binance Smart Chain. Its success and popularity have been immense, leading many business firms to adopt this style of cryptocurrency as an alternative.

A DeFi token like SafeMoon is similar to other crypto tokens, the only difference being that it charges a 10% transaction fee from users who sell their tokens, of which 5% is distributed to the remaining SafeMoon owners. This feature rewards owners for holding onto their tokens.
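To make the fee split concrete, here is a hypothetical worked example in Ruby. The fee_split helper and its 10% / 50-50 numbers are assumptions taken from the figures in this article, not SafeMoon's actual contract code:

```ruby
# Hypothetical fee split for a SafeMoon-style sell: a 10% fee is taken,
# half of it (5% of the sale) goes to holders, the rest to liquidity.
def fee_split(amount, fee_rate: 0.10, holder_share: 0.5)
  fee = (amount * fee_rate).round(2)
  {
    seller_receives: (amount - fee).round(2),
    to_holders:      (fee * holder_share).round(2),
    to_liquidity:    (fee * (1 - holder_share)).round(2)
  }
end

# A 1000.0-token sale → seller receives 900.0; 50.0 to holders; 50.0 to liquidity.
p fee_split(1000.0)
```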

Read More @ https://bit.ly/3oFbJoJ

#create a defi token like safemoon #defi token like safemoon #safemoon token #safemoon token clone #defi token