What is SandBox Pro (SANDPRO) | What is SANDPRO token

In this article, we'll discuss information about the SandBox Pro project and SANDPRO token. 

SandBox Pro is an NFT game metaverse on the BSC (Binance Smart Chain). It will launch its own decentralized network in the near future, giving players a customized game experience.

SandBox Pro is built on the logic of The Sandbox, with clear improvements driven by fresh and creative ideas. Players can bring everything in their world to life at a lower cost than on the Ethereum blockchain. The SandBox Pro team is building a unique virtual world where players can build, own, and monetize their gaming experiences using SANDPRO, the platform's main utility token. The demo version will be based on The Sandbox and Binance Smart Chain at first, until the SandBox Pro network is published.

TOKENOMICS:

Total supply: 40,000,000 SANDPRO

Presale: 16,000,000 SANDPRO
(40% of Total supply)

Launch: 12,000,000 SANDPRO
(30% of Total supply)

Founders: 6,000,000 SANDPRO
(15% of Total supply)

Marketing: 6,000,000 SANDPRO
(15% of Total supply)

All SANDPRO charged as in-game service fees will flow into the reward pool.
Founders can unlock only 10% of the total founder allocation per year, so the 6,000,000 founder tokens vest fully over ten years.

ROADMAP

2021.08

Launch the SANDPRO token and secure financial support

2021.10

Publish the SandBox Pro demo version and launch the first game, The Island

2022.03

Launch a new game on SandBox Pro

2022.06

Publish the SandBox Pro network

2022.08

Migrate SandBox Pro to the SandBox Pro network

2022.10

Release SandBox Pro Network 2.0 (making it easier for players to update and download code)

How to earn at SandBox Pro

SandBox Pro is a new blockchain game with a play-to-earn model like Axie Infinity (AXS) and other blockchain games. As a blockchain game, every transaction is recorded on-chain, and NFTs are its core. Daily tasks are an easy, basic way to earn in a blockchain game.

The white paper also shows the differences from and improvements over The Sandbox. Players become part of the development process in this game.

What is an NFT?

According to Robyn Conti and John Schmidt, an NFT is a digital asset that represents real-world objects like art, music, in-game items, and videos. NFTs are bought and sold online, frequently with cryptocurrency, and they are generally encoded with the same underlying software as many cryptocurrencies. This is one of the reasons players are interested in blockchain games.

The basic way to earn at SandBox Pro

The SandBox Pro white paper describes a reward pool that opens twice per day, meaning early players can easily find SANDPRO around the world of the first game, The Island.

The DAO can vote to decide when and how much SANDPRO will be used at The Island. If no decision is made, 90% of the reward pool is used in The Island by default, which makes it easy to mint a lot of SANDPRO. Sales of regular roles should be the main source of the reward pool. At the launch price, 1 BNB equals 30,000 SANDPRO. Taking the SAND NFT trading volume of about $560,000 as a reference, 90% of that flowing into the reward pool is $504,000. If 100,000 online players share this reward, each player could earn about $5 per day, an amazing number for any SandBox Pro player. The real figure will likely be lower, since it depends on new players joining, and the number of online players may be higher than assumed; we estimate $3 to $7 as the realistic range of income per day. The white paper also shows it costs almost $15 to purchase a regular role to enter the game, which means players could earn the money back in about a week.
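As a rough sanity check, the arithmetic above can be reproduced in a few lines of Ruby (the trading volume, player count, and 90% share are the assumptions stated above, not confirmed figures):

reward_pool_usd = 560_000 * 0.90      # 90% of the $560,000 reference volume
online_players  = 100_000
per_player_usd  = reward_pool_usd / online_players.to_f
puts per_player_usd                   # => 5.04, about $5 per player per day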

Selling your own NFT and codes

In their plan, SandBox Pro will publish its own network, and NFTs can have code added so that every NFT is one of a kind and alive. In the marketplace, players can sell and buy their code and NFTs. This will be a big opportunity for professional and technical players. I find this part interesting: every player can make the game amazing and cool, and roles can do things as in the real world.

In conclusion, this is a great and interesting project with much potential. It changes some aspects of The Sandbox and adds more elements. Players can play to earn in the game and be as free as in the real world. This should be the first game to do so.

How and Where to Buy SANDPRO token?

The SANDPRO token is now live on the Binance mainnet. The token address for SANDPRO is 0x6a2560645075e50cd5585591396e033022661ff1. Be cautious not to purchase any other token with a smart contract different from this one (as contracts can be easily faked). We strongly advise you to be vigilant and stay safe throughout the launch. Don't let the excitement get the best of you.

Just be sure you have enough BNB in your wallet to cover the transaction fees.

Join To Get BNB (Binance Coin)! ☞ CLICK HERE

You will first have to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…

We will use Binance Exchange here as it is one of the largest crypto exchanges that accept fiat deposits.

Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB)…

☞ SIGN UP ON BINANCE

Step by Step Guide : What is Binance | How to Create an account on Binance (Updated 2021)

Next step

You need a wallet address to connect to the PancakeSwap decentralized exchange; we use the Metamask wallet.

If you don’t have a Metamask wallet, read this article and follow the steps:

What is Metamask wallet | How to Create a wallet and Use

Transfer $BNB to your new Metamask wallet from your existing wallet

Next step

Connect your Metamask wallet to the PancakeSwap decentralized exchange and buy or swap the SANDPRO token.

Contract: 0x6a2560645075e50cd5585591396e033022661ff1

Read more: What is Pancakeswap | Beginner’s Guide on How to Use Pancakeswap

The top exchange for trading the SANDPRO token is currently PancakeSwap (V2).

Find more information about the SANDPRO token:

☞ Website ☞ Explorer ☞ Social Channel ☞ Social Channel 2 ☞ Social Channel 3 ☞ Coinmarketcap

🔺DISCLAIMER: The information in this post is not financial advice and is intended FOR GENERAL INFORMATION PURPOSES ONLY. Trading cryptocurrency is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.

🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner

⭐ ⭐ ⭐The project is of interest to the community. Join to Get free ‘GEEK coin’ (GEEKCASH coin)!

☞ https://geekcash.org ⭐ ⭐ ⭐

I hope this post will help you. Don't forget to leave a like, comment, and share it with others. Thank you!

#bitcoin #cryptocurrency 


Words Counted: A Ruby Natural Language Processor.

WordsCounted

We are all in the gutter, but some of us are looking at the stars.

-- Oscar Wilde

WordsCounted is a Ruby NLP (natural language processor). WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class.

Are you using WordsCounted to do something interesting? Please tell me about it.

 

Demo

Visit this website for one example of what you can do with WordsCounted.

Features

  • Out of the box, get the following data from any string or readable file, or URL:
    • Token count and unique token count
    • Token densities, frequencies, and lengths
    • Char count and average chars per token
    • The longest tokens and their lengths
    • The most frequent tokens and their frequencies.
  • A flexible way to exclude tokens from the tokeniser. You can pass a string, regexp, symbol, lambda, or an array of any combination of those types for powerful tokenisation strategies.
  • Pass your own regexp rules to the tokeniser if you prefer. The default regexp filters special characters but keeps hyphens and apostrophes. It also plays nicely with diacritics (UTF and unicode characters): Bayrūt is treated as ["Bayrūt"] and not ["Bayr", "ū", "t"], for example.
  • Opens and reads files. Pass in a file path or a url instead of a string.

Installation

Add this line to your application's Gemfile:

gem 'words_counted'

And then execute:

$ bundle

Or install it yourself as:

$ gem install words_counted

Usage

Pass in a string or a file path, and an optional filter and/or regexp.

counter = WordsCounted.count(
  "We are all in the gutter, but some of us are looking at the stars."
)

# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")

.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.
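For example, here is a minimal sketch of using the two classes directly; it assumes, per the description above, that the Counter is initialized with an array of tokens:

require "words_counted"

# Tokenise first, then hand the resulting tokens to a Counter.
tokens  = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
counter = WordsCounted::Counter.new(tokens)

counter.token_count #=> 2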

API

WordsCounted

WordsCounted.count(input, options = {})

Tokenises input and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.count("Hello Beirut!")

Accepts two options: exclude and regexp. See Excluding tokens from the analyser and Passing in a custom regexp respectively.

WordsCounted.from_file(path, options = {})

Reads and tokenises a file, and initializes a WordsCounted::Counter object with the resulting tokens.

counter = WordsCounted.from_file("hello_beirut.txt")

Accepts the same options as .count.

Tokeniser

The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.

Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.

#tokenise([pattern: TOKEN_REGEXP, exclude: nil])

tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise

# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")

# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)

See Excluding tokens from the analyser and Passing in a custom regexp for more information.

Counter

The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.

#token_count

Returns the token count of a given string.

counter.token_count #=> 15

#token_frequency

Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.

counter.token_frequency

[
  ["the", 2],
  ["are", 2],
  ["we",  1],
  # ...
  ["all", 1]
]

#most_frequent_tokens

Returns a hash where each key-value pair is a token and its frequency.

counter.most_frequent_tokens

{ "are" => 2, "the" => 2 }

#token_lengths

Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.

counter.token_lengths

[
  ["looking", 7],
  ["gutter",  6],
  ["stars",   5],
  # ...
  ["in",      2]
]

#longest_tokens

Returns a hash where each key-value pair is a token and its length.

counter.longest_tokens

{ "looking" => 7 }

#token_density([ precision: 2 ])

Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.

counter.token_density

[
  ["are",     0.13],
  ["the",     0.13],
  ["but",     0.07 ],
  # ...
  ["we",      0.07 ]
]
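Since the method accepts a precision argument, more decimal places can be requested; a quick sketch based on the documented signature, using the same counter:

counter.token_density(precision: 4)

[
  ["are",     0.1333],
  ["the",     0.1333],
  # ...
]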

#char_count

Returns the char count of tokens.

counter.char_count #=> 76

#average_chars_per_token([ precision: 2 ])

Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.

counter.average_chars_per_token #=> 4

#uniq_token_count

Returns the number of unique tokens.

counter.uniq_token_count #=> 13

Excluding tokens from the tokeniser

You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible.

  1. A space-delimited string. The filter will normalise the string.
  2. A regular expression.
  3. A lambda.
  4. A symbol that names a predicate method. For example :odd?.
  5. An array of any combination of the above.
tokeniser =
  WordsCounted::Tokeniser.new(
    "Magnificent! That was magnificent, Trevor."
  )

# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]

# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]

# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]

# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]

# Using an array
tokeniser = WordsCounted::Tokeniser.new(
  "Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
  exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]

Passing in a custom regexp

The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.

/[\p{Alpha}\-']+/
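For instance, combining the default pattern with the downcasing described in the case-sensitivity note below:

counter = WordsCounted.count("Twenty-one isn't enough.")
counter.tokens
#=> ["twenty-one", "isn't", "enough"]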

You can pass your own criteria as a Ruby regular expression to split your string as desired.

For example, if you wanted to include numbers, you can override the regular expression:

counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]

Opening and reading files

Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.

counter = WordsCounted.from_file("url/or/path/to/file.text")

Gotchas

A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.

counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency

[
  ["do",   2],
  ["how",  1],
  ["you",  1],
  ["-you", 1], # WTF, mate!
  ["are",  1],
  # ...
]

In this example -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.

A note on case sensitivity

The program will normalise (downcase) all incoming strings for consistency and filters.
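For example:

WordsCounted.count("Hello HELLO hello").token_frequency
#=> [["hello", 3]]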

Roadmap

Ability to open URLs

def self.from_url
  # open url and send string here after removing html
end
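A minimal sketch of what that could look like, assuming open-uri for fetching and a naive regexp-based tag strip (this is not part of the gem yet):

require "open-uri"

def self.from_url(url, options = {})
  html = URI.open(url).read
  text = html.gsub(%r{<script.*?</script>}im, " ") # drop script blocks first
             .gsub(/<[^>]+>/, " ")                 # then strip remaining tags
  count(text, options)
end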

Contributors

See contributors.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Author: abitdodgy
Source code: https://github.com/abitdodgy/words_counted
License: MIT license

#ruby  #ruby-on-rails 


SafeMoon Clone | Create A DeFi Token Like SafeMoon | DeFi token like SafeMoon

SafeMoon is a decentralized finance (DeFi) token. The token combines RFI tokenomics with an auto-liquidity generating protocol. A DeFi token like SafeMoon has reached mainstream standards on the Binance Smart Chain. Its success and popularity have been immense, leading many business firms to adopt this style of cryptocurrency as an alternative.

A DeFi token like SafeMoon is similar to other crypto tokens, the main difference being that it charges a 10% transaction fee when users sell their tokens, of which 5% is distributed to the remaining SafeMoon holders. This feature rewards owners for holding onto their tokens.
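As a simple illustration of that fee split, in Ruby with hypothetical numbers:

# A hypothetical sale of 1,000 tokens under the 10% fee model described above.
amount          = 1_000.0
fee             = amount * 0.10   # 100 tokens charged as the transaction fee
redistributed   = amount * 0.05   # 50 tokens shared among remaining holders
seller_receives = amount - fee    # 900 tokens' worth reaches the seller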

Read More @ https://bit.ly/3oFbJoJ

#create a defi token like safemoon #defi token like safemoon #safemoon token #safemoon token clone #defi token

Openshift Sandbox/Kata Containers

In this article, I will walk you through Openshift Sandbox containers, which are based on Kata containers, and how they differ from traditional Openshift containers. 


Sandbox/Kata containers are useful in the following scenarios: 

  1. Running third-party or untrusted applications.
  2. Ensuring kernel-level isolation.
  3. Providing proper isolation through VM boundaries.

Prerequisites

You will need to install the following technologies before beginning this exercise:

Create the KataConfig

Create the KataConfig CR and label the nodes on which Sandbox containers will run. I have used the sandbox=true label. 
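A node can be labeled with a standard oc command; the node name below is illustrative:

oc label node worker-0 sandbox=true

Then create the CR: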

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: cluster-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      sandbox: 'true'

Verify the deployment:

oc describe kataconfig cluster-kataconfig

Name:         cluster-kataconfig
…..
Status:
  Installation Status:
    Is In Progress:  false
    Completed:
      Completed Nodes Count:  3
      Completed Nodes List:
        master0
        master1
        master2
    Failed:
    Inprogress:
  Prev Mcp Generation:  2
  Runtime Class:        kata
  Total Nodes Count:    3
  Un Installation Status:
    Completed:
    Failed:
    In Progress:
      Status:
  Upgrade Status:
Events:  <none>

Verify that a new machine config (mc) and machine config pool (MCP) have been created with the name sandbox:

oc get mc |grep sandbox
50-enable-sandboxed-containers-extension
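Similarly, the new machine config pool can be listed with:

oc get mcp |grep sandbox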

Verify the node configuration. Log in to a node labeled sandbox=true:

sh-4.4# cat /etc/crio/crio.conf.d/50-kata

[crio.runtime.runtimes.kata]
  runtime_path = "/usr/bin/containerd-shim-kata-v2"
  runtime_type = "vm"
  runtime_root = "/run/vc"
  privileged_without_host_devices = true

Verify the RuntimeClass:

→ oc get runtimeclass

NAME   HANDLER   AGE
kata   kata      5d14h

This completes the deployment of the Sandbox container using the Operator. 

Deploying the Application on Sandbox vs. Regular Containers

Let's deploy Sandbox and regular containers from the same image and verify the difference.

I have used a sample application image (quay.io/shailendra14k/getotp) based on Spring Boot for testing. 

#Regular Pod definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment-6.0
  labels:
    app: webapp
    version: v6.0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
        version: v6.0
    spec:
      containers:
      - name: webapp
        image: quay.io/shailendra14k/getotp:6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8180

Version 6.0 is the regular deployment, while 6.1 sets runtimeClassName: kata. 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment-6.1
  labels:
    app: webapp
    version: v6.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
        version: v6.1
    spec:
      runtimeClassName: kata
      containers:
      - name: webapp
        image: quay.io/shailendra14k/getotp:6.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8180

Deploy both manifests and verify the status:
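The manifests can be applied with oc apply; the file names here are illustrative:

oc apply -f webapp-6.0.yaml
oc apply -f webapp-6.1.yaml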

➜  ~ oc get pods

NAME                                     READY   STATUS    RESTARTS   AGE
webapp-deployment-6.0-5d78fcd8db-ck7g7   1/1     Running   0          11m
webapp-deployment-6.1-6587f8997b-7f5p5   1/1     Running   0          11m

Compare the Uptime of Both Containers 

#Regular containers:

➜  ~ oc exec -it webapp-deployment-6.0-5d78fcd8db-ck7g7 -- cat /proc/uptime
416625.14 4640515.30

#Sandbox containers:

➜  ~ oc exec -it webapp-deployment-6.1-6587f8997b-7f5p5 -- cat /proc/uptime
670.63 658.26

The difference is huge: the regular container's kernel uptime is the same as the node's kernel uptime (416625.14 s ≈ 4.8 days), while the Sandbox container's kernel uptime is measured from the creation of the Pod (670.63 s ≈ 11 min).

Compare the Process on the Nodes

Log in to the node where both containers are running, using oc debug node/<node-name>. 

#Regular containers:

sh-4.4# ps -eaf |grep 10008
1000800+  852898  852878  0 07:23 ?        00:00:08 java -jar /home/jboss/test.jar

1000800+ is the UID for the container.

#Sandbox containers:

First, fetch the sandbox ID using the crictl inspect command:

➜  ~ oc get pods webapp-deployment-6.1-6587f8997b-7f5p5 -o jsonpath='{.status.containerStatuses[0]}'
{"containerID":"cri-o://b0768d7fbfd2d656b9900ba0b16b6078eb625b412784809ce516f9111a211e10" …..

#From the node
sh-4.4# crictl inspect b0768d7fbfd2d656b9900ba0b16b6078eb625b412784809ce516f9111a211e10 | jq -r '.info.sandboxID'
7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270

Fetch the process IDs using the sandbox ID:

sh-4.4# ps aux | grep 7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270

root      852850  0.0  0.1 1337556 34816 ?       Sl   07:23   0:00 /usr/bin/containerd-shim-kata-v2 -namespace default -address  -publish-binary /usr/bin/crio -id 7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270

root      852859  0.0  0.0 122804  4776 ?        Sl   07:23   0:00 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270/shared -o cache=auto --syslog -o no_posix_lock -d --thread-pool-size=1

root      852865  0.9  1.8 2465200 603492 ?      Sl   07:23   0:15 /usr/libexec/qemu-kiwi -name sandbox-7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270 -uuid ae09b8a0-1f89-4196-8402-cdcb471675bd -machine q35,accel=kvm,kernel_irqchip -cpu …… /run/vc/vm/7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270/qemu.log -smp 1,cores=1,threads=1,sockets=12,maxcpus=12

root      852873  0.0  0.2 2514884 75800 ?       Sl   07:23   0:00 /usr/libexec/virtiofsd --fd=3 -o source=/run/kata-containers/shared/sandboxes/7740c8967dd6ad50ecd8c31558c3c844bbe7ac4e7ca1115e7f91eec974737270/shared -o cache=auto --syslog -o no_posix_lock -d --thread-pool-size=1

For the regular container, the process runs directly on the node's host kernel; for the Sandbox container, it runs inside a VM.  

Conclusion

Thank you for reading! We saw how Sandbox containers are deployed on Openshift and how they compare with regular containers.  

Source: https://dzone.com/articles/openshift-sandboxkata-containers

#openshift #sandbox 
