In this article, we'll discuss the Ginza Network project and its GINZA token.
Ginza Network is an NFT issuance platform supported by Gene International IT & AI Limited and built on BSC (Binance Smart Chain).
In Ginza Network, users can deploy and issue NFTs with one click, without any code. Ginza Network also incorporates current DeFi concepts from the blockchain space: users can experience liquidity mining, staking, and other functions in the Ginza Network project. For NFT+DeFi gameplay, the team will develop functions such as NFT-collateralized lending and NFT minting and decomposition to improve the liquidity of NFTs within the ecosystem.
Ginza Network will adopt a Layer 2 solution. Through cross-chain technology, it will connect to multiple blockchains such as Ethereum and Polkadot in the future, enabling blockchain projects to acquire users and develop more diversified NFT applications.
About Ginza Network
Ginza is a cross-chain NFT+DeFi+DAO protocol and infrastructure, as well as an open NFT ecosystem. Anyone can create NFTs and initiate NFT mining, auctions, and trading.
Ginza provides a cross-chain infrastructure to enable blockchain projects to expand their user base, distribute tokens, and develop more NFT-enabled applications. Ginza can seamlessly connect original GENE users to the Ginza platform ecosystem through its cross-chain function, realizing unprecedented cross-chain transfer of NFT assets for GENE users and integrating the application with the underlying protocol experience.
GENE Background
Gene is a blockchain token that combines financial protection with real estate and listed equity. It is supported and promoted by a company operating in Singapore and listed on the Australian Securities Exchange (ASX) to create new corporate trends and open new real estate and equity investment solutions. Gene is also a financial management tool that can grow your wealth and manage and increase the value of digital assets. It not only lets you enjoy daily income, but also offers advantages such as convertibility into listed equity and physical industries. With the birth of Ginza Network, Gene will work closely with the Ginza Network ecosystem.
FOUNDATION
Gene International IT & AI Limited (the Golden Royal International Technology and Intelligence Foundation) is focused on the combination of finance, real estate, and listed equity. It is supported and promoted by a company operating in Singapore and listed on the Australian Securities Exchange (ASX) to create new corporate potential and open new real estate and equity investment solutions. In the blockchain field, the foundation currently has a presence in NFT and DeFi. It aims to provide incubation and investment for blockchain and cryptocurrency entrepreneurs, projects, and communities. Its vision is to maximize the potential of blockchain technology and drive the better development of the blockchain industry.
Gold Commemorative Coin NFT
Ginza will jointly issue a set of gold commemorative coin NFTs with the Australian Gold Exchange. It is a 3D digital collectibles platform backed by real gold, allowing users to choose design templates, mint NFT collectibles based on real gold, and then sell, share, and trade these collectibles.
TECHNICAL CHARACTERISTICS
Based on Binance Smart Chain (BSC): Ginza is built on Binance Smart Chain. As a chain running in parallel with Binance Chain, it lets Ginza implement functions such as smart contract creation and BNB staking mining.
Sharding Technology: Ginza Network's sharding differs from that of other public chains such as Ethereum 2.0. Rather than adopting the shard-chain-plus-beacon-chain model (in which each shard chain must be forked), Ginza Network concentrates on a single chain. As a result, its block interval is shorter, its sharding is more effective, and its data availability is better.
Ginza Network Bridge: Ginza Bridge is a fast channel for cross-layer asset transfer realized through the trading market. The first phase will provide a fast exchange channel between L1 and L2; later, depending on demand, fast exchange channels between different L2s may be opened.
EVM Virtual Machine Technology: The processing power of Ginza Network's EVM can reach more than 10 times that of Ethereum. Using a dynamic segmentation method, the amount of computation for contract execution is relatively small. Confirmation of each transaction takes only 1-2 seconds, and the average gas fee per transaction is less than 1 US dollar.
Layer 2 Extension Solution ZK Rollup: Ginza Network's side chain using ZK rollups can also reduce transaction size. For example, an account can be referenced by an index instead of a full address string, which reduces that field from 32 bytes to 4 bytes.
NFT Launchpad: Ginza Network NFT Launchpad is an NFT project issuance platform whose purpose is to help NFT blockchain projects raise funds.
TOKEN ECONOMY
Max Supply: 660,000,000
It is expected that most of the funds in the operating pool will be used for liquidity mining to help Ginza gain market competitiveness.
2.9% (19.14 million) of GINZA will be invested in the initial market making, and the remaining 1% (19.14 million) of GINZA tokens is expected to be put into market making within one week after the initial market making.
The founding team of Ginza will provide the market with a quota of 15% of the total supply of financing tokens, or 99 million, before the project is launched.
Flexibly deployed by Ginza DAO on demand.
Other practitioners in the incentive model cooperate closely with Ginza, and this portion of the funds can be paid to partners in the form of instant payment.
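As a quick sanity check on these figures, here is a small illustrative Ruby snippet (ours, not from the project) verifying the stated percentages against the max supply using integer arithmetic:

max_supply = 660_000_000
max_supply * 29 / 1000 #=> 19140000, the 19.14 million "2.9%" figure
max_supply * 15 / 100  #=> 99000000, the 99 million "15%" figure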
TOKEN FUNCTION
How and Where to Buy GINZA token?
GINZA token is now live on the Binance mainnet. The token address for GINZA is 0x32d7da6a7cf25ed1b86e1b0ee9a62b0252d46b16. Be cautious not to purchase any other token with a smart contract different from this one (as this can be easily faked). We strongly advise you to be vigilant and stay safe throughout the launch. Don’t let the excitement get the best of you.
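As a quick illustration of that advice, here is a hypothetical Ruby helper (the method name and usage are ours, not part of any official tooling) that checks a pasted token address against the known GINZA contract, ignoring case:

# Hypothetical safety check: compare an address you are about to trade
# against the known GINZA contract address, case-insensitively.
KNOWN_GINZA = "0x32d7da6a7cf25ed1b86e1b0ee9a62b0252d46b16"

def same_contract?(address)
  address.to_s.strip.downcase == KNOWN_GINZA
end

puts same_contract?("0x32D7DA6A7CF25ED1B86E1B0EE9A62B0252D46B16") #=> true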
Just be sure you have enough BNB in your wallet to cover the transaction fees.
You will first have to buy one of the major cryptocurrencies, usually Bitcoin (BTC), Ethereum (ETH), Tether (USDT), or Binance Coin (BNB).
We will use Binance Exchange here, as it is one of the largest crypto exchanges that accept fiat deposits.
Once you have finished the KYC process, you will be asked to add a payment method. Here you can either provide a credit/debit card or use a bank transfer to buy one of the major cryptocurrencies listed above.
Step by Step Guide: What is Binance | How to Create an account on Binance (Updated 2021)
Next step
You need a wallet address to connect to the PancakeSwap decentralized exchange; we use the MetaMask wallet.
If you don’t have a MetaMask wallet, read this article and follow the steps ☞ What is Metamask wallet | How to Create a wallet and Use
Transfer $BNB from your Binance wallet to your new MetaMask wallet.
Next step
Connect your MetaMask wallet to the PancakeSwap decentralized exchange and buy/swap the GINZA token.
Contract: 0x32d7da6a7cf25ed1b86e1b0ee9a62b0252d46b16
Read more: What is Pancakeswap | Beginner’s Guide on How to Use Pancakeswap
The top exchange for trading the GINZA token is currently PancakeSwap (V2).
Find more information about the GINZA token:
☞ Website ☞ Explorer ☞ Social Channel ☞ Social Channel 2 ☞ Social Channel 3 ☞ Coinmarketcap
Top exchanges for token trading:
☞ Binance ☞ Bittrex ☞ Poloniex ☞ Bitfinex ☞ Huobi ☞ MXC ☞ ProBIT ☞ Gate.io ☞ Coinbase
🔺DISCLAIMER: The information in this post is not financial advice and is intended for general information purposes only. Trading cryptocurrency is very risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginners
I hope this post helps you. Don't forget to leave a like, comment, and share it with others. Thank you!
WordsCounted
We are all in the gutter, but some of us are looking at the stars.
-- Oscar Wilde
WordsCounted is a Ruby NLP (natural language processor). WordsCounted lets you implement powerful tokenisation strategies with a very flexible tokeniser class.
Are you using WordsCounted to do something interesting? Please tell me about it.
Visit this website for one example of what you can do with WordsCounted.
["Bayrūt"]
and not ["Bayr", "ū", "t"]
, for example.Add this line to your application's Gemfile:
gem 'words_counted'
And then execute:
$ bundle
Or install it yourself as:
$ gem install words_counted
Pass in a string or a file path, and an optional filter and/or regexp.
counter = WordsCounted.count(
"We are all in the gutter, but some of us are looking at the stars."
)
# Using a file
counter = WordsCounted.from_file("path/or/url/to/my/file.txt")
.count and .from_file are convenience methods that take an input, tokenise it, and return an instance of WordsCounted::Counter initialized with the tokens. The WordsCounted::Tokeniser and WordsCounted::Counter classes can be used alone, however.
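For instance, a rough equivalent of WordsCounted.count using the two classes directly (a minimal sketch, assuming the Counter constructor takes the token array, as described above) would be:

require 'words_counted'

# Tokenise by hand, then hand the tokens to a Counter.
tokens  = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
counter = WordsCounted::Counter.new(tokens)
counter.token_count #=> 2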
WordsCounted.count(input, options = {})
Tokenises input and initializes a WordsCounted::Counter
object with the resulting tokens.
counter = WordsCounted.count("Hello Beirut!")
Accepts two options: exclude and regexp. See Excluding tokens from the analyser and Passing in a custom regexp respectively.
WordsCounted.from_file(path, options = {})
Reads and tokenises a file, and initializes a WordsCounted::Counter
object with the resulting tokens.
counter = WordsCounted.from_file("hello_beirut.txt")
Accepts the same options as .count.
The tokeniser allows you to tokenise text in a variety of ways. You can pass in your own rules for tokenisation, and apply a powerful filter with any combination of rules as long as they can boil down into a lambda.
Out of the box the tokeniser includes only alpha chars. Hyphenated tokens and tokens with apostrophes are considered a single token.
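For example (an illustrative call using the default rules just described; note that tokens are also downcased):

WordsCounted::Tokeniser.new("Mohamad's twenty-one cats").tokenise
#=> ["mohamad's", "twenty-one", "cats"]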
#tokenise([pattern: TOKEN_REGEXP, exclude: nil])
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise
# With `exclude`
tokeniser = WordsCounted::Tokeniser.new("Hello Beirut!").tokenise(exclude: "hello")
# With `pattern`
tokeniser = WordsCounted::Tokeniser.new("I <3 Beirut!").tokenise(pattern: /[a-z]/i)
See Excluding tokens from the analyser and Passing in a custom regexp for more information.
The WordsCounted::Counter class allows you to collect various statistics from an array of tokens.
#token_count
Returns the token count of a given string.
counter.token_count #=> 15
#token_frequency
Returns a sorted (unstable) two-dimensional array where each element is a token and its frequency. The array is sorted by frequency in descending order.
counter.token_frequency
[
["the", 2],
["are", 2],
["we", 1],
# ...
["all", 1]
]
#most_frequent_tokens
Returns a hash where each key-value pair is a token and its frequency.
counter.most_frequent_tokens
{ "are" => 2, "the" => 2 }
#token_lengths
Returns a sorted (unstable) two-dimensional array where each element contains a token and its length. The array is sorted by length in descending order.
counter.token_lengths
[
["looking", 7],
["gutter", 6],
["stars", 5],
# ...
["in", 2]
]
#longest_tokens
Returns a hash where each key-value pair is a token and its length.
counter.longest_tokens
{ "looking" => 7 }
#token_density([ precision: 2 ])
Returns a sorted (unstable) two-dimensional array where each element contains a token and its density as a float, rounded to a precision of two. The array is sorted by density in descending order. It accepts a precision argument, which must be a float.
counter.token_density
[
["are", 0.13],
["the", 0.13],
["but", 0.07 ],
# ...
["we", 0.07 ]
]
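The precision can be raised per call; an illustrative example (the keyword argument follows the signature above, and the values are simply 2/15 shown at higher precision):

counter.token_density(precision: 4)
#=> [["are", 0.1333], ["the", 0.1333], ...]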
#char_count
Returns the char count of tokens.
counter.char_count #=> 76
#average_chars_per_token([ precision: 2 ])
Returns the average char count per token rounded to two decimal places. Accepts a precision argument which defaults to two. Precision must be a float.
counter.average_chars_per_token #=> 4
#uniq_token_count
Returns the number of unique tokens.
counter.uniq_token_count #=> 13
You can exclude anything you want from the input by passing the exclude option. The exclude option accepts a variety of filters and is extremely flexible: a space-delimited string, a regular expression, a lambda, a symbol that names a predicate method (for example :odd?), or an array of any combination of these types.
tokeniser = WordsCounted::Tokeniser.new(
"Magnificent! That was magnificent, Trevor."
)
# Using a string
tokeniser.tokenise(exclude: "was magnificent")
# => ["that", "trevor"]
# Using a regular expression
tokeniser.tokenise(exclude: /trevor/)
# => ["magnificent", "that", "was", "magnificent"]
# Using a lambda
tokeniser.tokenise(exclude: ->(t) { t.length < 4 })
# => ["magnificent", "that", "magnificent", "trevor"]
# Using symbol
tokeniser = WordsCounted::Tokeniser.new("Hello! محمد")
tokeniser.tokenise(exclude: :ascii_only?)
# => ["محمد"]
# Using an array
tokeniser = WordsCounted::Tokeniser.new(
"Hello! اسماءنا هي محمد، كارولينا، سامي، وداني"
)
tokeniser.tokenise(
exclude: [:ascii_only?, /محمد/, ->(t) { t.length > 6}, "و"]
)
# => ["هي", "سامي", "وداني"]
The default regexp accounts for letters, hyphenated tokens, and apostrophes. This means twenty-one is treated as one token. So is Mohamad's.
/[\p{Alpha}\-']+/
You can pass your own criteria as a Ruby regular expression to split your string as desired.
For example, if you wanted to include numbers, you can override the regular expression:
counter = WordsCounted.count("Numbers 1, 2, and 3", pattern: /[\p{Alnum}\-']+/)
counter.tokens
#=> ["numbers", "1", "2", "and", "3"]
Use the from_file method to open files. from_file accepts the same options as .count. The file path can be a URL.
counter = WordsCounted.from_file("url/or/path/to/file.text")
A hyphen used in lieu of an em or en dash will form part of the token. This affects the tokeniser algorithm.
counter = WordsCounted.count("How do you do?-you are well, I see.")
counter.token_frequency
[
["do", 2],
["how", 1],
["you", 1],
["-you", 1], # WTF, mate!
["are", 1],
# ...
]
In this example, -you and you are separate tokens. Also, the tokeniser does not include numbers by default. Remember that you can pass your own regular expression if the default behaviour does not fit your needs.
The program will normalise (downcase) all incoming strings for consistency and filters.
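A quick illustration of this normalisation:

WordsCounted.count("Hello HELLO hello").token_frequency
#=> [["hello", 3]]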
A from_url reader is, for now, only sketched as a stub:
def self.from_url
  # open url and send string here after removing html
end
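A minimal working version of that idea, assuming Ruby's standard open-uri and a deliberately naive regexp tag strip (a real application would likely use an HTML parser such as Nokogiri), might look like this:

require "open-uri"
require "words_counted"

# Hypothetical helper: fetch a URL, crudely strip HTML tags, and count the text.
def from_url(url)
  html = URI.open(url).read
  text = html.gsub(/<[^>]+>/, " ") # naive tag removal; fine for a sketch
  WordsCounted.count(text)
end

counter = from_url("https://example.com")
counter.token_count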
Contributors
See contributors.
Create your feature branch: git checkout -b my-new-feature
Commit your changes: git commit -am 'Add some feature'
Push to the branch: git push origin my-new-feature
Author: abitdodgy
Source code: https://github.com/abitdodgy/words_counted
License: MIT license
#ruby #ruby-on-rails
Hey guys, in this video I review PAID NETWORK, a DeFi project that aims to simplify complex legal processes using decentralised protocols and DeFi products in 2021.
PAID Network is an ecosystem DAPP that leverages blockchain technology to deliver DeFi powered SMART Agreements to make business exponentially more efficient. We allow users to create their own policy, to ensure they Get PAID.
📺 The video in this post was made by Crypto expat
Original video: https://www.youtube.com/watch?v=ZIU5javfL90
🔺 DISCLAIMER: This article is for information sharing. The content of the video is solely the opinion of the speaker, who is not a licensed financial advisor or registered investment advisor. It is not investment or legal advice.
Cryptocurrency trading is very risky. Make sure you understand these risks and that you are responsible for what you do with your money.
🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginners
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!
#bitcoin #blockchain #paid network #paid network review #token sale #paid network review, is it worth investing in? token sale coming soon !!
SafeMoon is a decentralized finance (DeFi) token that combines RFI tokenomics with an auto-liquidity-generating protocol. A DeFi token like SafeMoon has reached mainstream standards on the Binance Smart Chain. Its success and popularity have been immense, leading many business firms to adopt this style of cryptocurrency as an alternative.
A DeFi token like SafeMoon is similar to other crypto tokens, with one key difference: it charges a 10% fee on transactions from users who sell their tokens, of which 5% is redistributed to the remaining SafeMoon holders. This feature rewards owners for holding onto their tokens.
Read More @ https://bit.ly/3oFbJoJ
#create a defi token like safemoon #defi token like safemoon #safemoon token #safemoon token clone #defi token
When it comes to inspiration in the networking industry, nothing stands out more than the Autonomous Driving Network (ADN). You may have heard of it and wondered what it is about, and whether it has anything to do with autonomous driving vehicles. Your guess is right; the ADN concept is derived from, or inspired by, the rapid development of autonomous driving cars in recent years.
Driverless Car of the Future, the advertisement for “America’s Electric Light and Power Companies,” Saturday Evening Post, the 1950s.
The vision of autonomous driving has been around for more than 70 years, but engineers' continual attempts to achieve it met with little success, and the concept remained fiction for a long time. In 2004, the US Defense Advanced Research Projects Agency (DARPA) organized the Grand Challenge, in which teams of autonomous vehicles competed for a grand prize of $1 million. I remember watching those competing vehicles on TV; they behaved as if driven by a drunk man and had a really tough time driving by themselves. I thought the autonomous driving vision still had a long way to go. To my surprise, the next year, in 2005, Stanford University's vehicle autonomously drove 131 miles through California's Mojave Desert without a scratch and took the $1 million Grand Challenge prize. How was that possible? Later I learned that the secret ingredient that made this possible was the latest ML (machine learning) enabled AI (artificial intelligence) technology.
Since then, AI technologies have advanced rapidly and been implemented across all verticals. Around 2016, the concept of the Autonomous Driving Network started to emerge, combining AI and networking to achieve network operational autonomy. The automation concept is nothing new in the networking industry; network operations are continually being automated here and there. But ADN goes beyond automating mundane tasks; it reaches a whole new level. With the help of AI technologies and the advancement of other critical ingredients like SDN (Software Defined Networking), autonomous networking has a great chance of moving from vision to future reality.
In this article, we will examine some critical components of the ADN, the current landscape, and the factors that are important for ADN to succeed.
At the current stage, various organizations use different terminologies to describe the ADN vision.
Despite the slightly different terminologies, the industry is moving toward common terms and a consensus around "autonomous networks", e.g. TMF, ETSI, ITU-T, and GSMA. The core vision includes both business and network aspects: the autonomous network delivers the "hyper-loop" from business requirements all the way down to the network and device layers.
On the network layer, it contains the below critical aspects:
On top of those, these capabilities need to span multiple services, multiple domains, and the entire lifecycle (TMF, 2019).
Without doubt, this is the most ambitious goal the networking industry has ever aimed at. It has been described as the "end state" and "ultimate goal" of networking evolution. And it is not just a vision on slides; the networking industry is already on the move toward the goal.
David Wang, Huawei’s Executive Director of the Board and President of Products & Solutions, said in his 2018 Ultra-Broadband Forum (UBBF) keynote speech (David W., 2018):
“In a fully connected and intelligent era, autonomous driving is becoming a reality. Industries like automotive, aerospace, and manufacturing are modernizing and renewing themselves by introducing autonomous technologies. However, the telecom sector is facing a major structural problem: Networks are growing year by year, but OPEX is growing faster than revenue. What’s more, it takes 100 times more effort for telecom operators to maintain their networks than OTT players. Therefore, it’s imperative that telecom operators build autonomous driving networks.”
Juniper CEO Rami Rahim said in his keynote at the company’s virtual AI event (CRN, 2020):
“The goal now is a self-driving network. The call to action is to embrace the change. We can all benefit from putting more time into higher-layer activities, like keeping distributors out of the business. The future, I truly believe, is about getting the network out of the way. It is time for the infrastructure to take a back seat to the self-driving network.”
If you had asked me this question 15 years ago, my answer would have been "no chance", as I could not imagine an autonomous driving vehicle being possible then. But now the vision is not far-fetched anymore, not only because of the rapid advancement of ML/AI technology, but also because other key building blocks have made significant progress. To name a few:
#network-automation #autonomous-network #ai-in-network #self-driving-network #neural-networks