Terry Tremblay

The Double-Edged Sword of Masternodes: What Happens When Inflation Surpasses Reward

The Dash cryptocurrency network invented the masternode concept in 2014, and has recently come to an interesting conclusion: that masternodes heavily influence the coin’s market cap — both for better and for worse.

The past six years have seen Dash’s dramatic rise to the number three spot in market cap rankings, followed by its meteoric fall to the 25th spot where it currently resides.

And now, after more than nine months of research, debate, and surveys, the Dash network thinks it knows why this happened. What’s more, it has a plan to do something about it.

A Flattening of the (Expected) Curve

Once the Dash network deployed masternodes into its architecture in 2014, the network began gradually shifting block reward payments away from miners until they reached an even split: half the block reward went to miners, and half went to masternodes.

At some points in these early days, the annual return-on-investment (ROI) of running a masternode was as high as 20%. This understandably attracted many new investors to the Dash ecosystem, and the coin sat comfortably in the top-10 by market cap for nearly four years.
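To make the arithmetic concrete, here is a minimal sketch of how a ROI figure like that falls out of the reward split. All the numbers below are hypothetical stand-ins (the article gives no block rewards or node counts); only the roughly 2.5-minute block time and the 50% masternode share are Dash facts.

public class MasternodeRoi {
    public static void main(String[] args) {
        double blockReward = 5.0;                   // hypothetical Dash minted per block
        double blocksPerYear = 365 * 24 * 60 / 2.5; // Dash targets ~2.5-minute blocks
        double masternodeShare = 0.5;               // half of each block reward, per the article
        int masternodeCount = 2_600;                // hypothetical number of active masternodes
        double collateral = 1_000;                  // Dash locked per masternode

        double annualRewardPerNode =
                blockReward * blocksPerYear * masternodeShare / masternodeCount;
        System.out.printf("Annual ROI: %.1f%%%n",
                annualRewardPerNode / collateral * 100); // ~20.2% with these inputs
    }
}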

But then in late 2018, something started to change: the rate at which new masternodes were being created began to fall (see chart below). For the entire history of the Dash network, the curve of masternode creation had always stayed in line with the curve of new coin creation. But now the masternode creation rate started to lag behind the overall inflation rate.

Image via Dash Masternode Information

“Circulating” Coins vs. Collateralized Coins

In late 2019, Dash’s continuing fall in market cap rankings led Ryan Taylor, CEO of Dash Core Group, to start digging into potential reasons. He first compared Dash’s annual inflation rate with that of other coins like Bitcoin and Bitcoin Cash (below).

Image via Dash Core Group presentation on Dash Economics

While Dash’s annual inflation of 7.7% is significantly higher than that of competitors like Bitcoin and Bitcoin Cash, which hover around 2%, Taylor found something much more compelling: the slow sell-off of masternodes that started in late 2018 had, over time, dramatically increased the “circulating” supply of Dash (that is, the coins not collateralized in masternodes), to the tune of 22% per annum as of this year.
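To see how 7.7% nominal inflation can become 22% growth in circulating supply, here is a rough back-of-the-envelope sketch. The supply and node figures are approximations chosen for illustration, not numbers from the article:

public class CirculatingSupplyGrowth {
    public static void main(String[] args) {
        double totalSupply = 9_500_000;        // approximate total Dash supply
        double collateralized = 4_800 * 1_000; // coins locked as masternode collateral
        double circulating = totalSupply - collateralized;

        double newCoins = totalSupply * 0.077; // 7.7% nominal annual inflation
        double unlocked = 300 * 1_000;         // hypothetical: 300 masternodes dissolved per year

        // New coins and newly unlocked collateral both land in the circulating pool.
        double growth = (newCoins + unlocked) / circulating;
        System.out.printf("Circulating-supply growth: %.1f%%%n", growth * 100); // ~21.9%
    }
}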

“It’s really hard for Dash to maintain price parity against assets like Bitcoin and Bitcoin Cash if the circulating supply is growing as fast as it is,” Taylor said during a Dash Economics AMA on YouTube recently. “I think that it behooves the network to take a close look at this and see if we can get closer to parity with some of these other coins, and then allow our [technical & adoption-related] advantages to accrue to the network.”

#dash-network #dash #blockchain #masternodes #staking #staking-rewards #staking-economy #hackernoon-top-story


Ian Robinson

A Double-Edged Sword: Horror Stories Aside, Big Data Does More Good Than Harm

People worried about identity or data theft need to think “big picture”

There are plenty of good ways to hide your digital footprint, from ‘incognito mode’ to using a VPN. But let’s face it: most people are not going to go through all that just for the “privacy” of not having, say, Amazon track their shopping habits or Google read their email. Sure, some highly dedicated privacy militants are perhaps successful in drastically reducing their presence, but it’s pretty darn hard to completely avoid an “internet existence.” Even if you don’t show yourself, others can “out” you, perhaps inadvertently. You, of course, want to protect your data with the best ID theft services possible. But protection doesn’t mean forgoing the modern world of online shopping or social media. Your data is a part of big data, probably whether you like it or not. This fact isn’t, however, necessarily bad.

There have certainly been some alarming stories. Big data is defined by Oxford as “extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.” So, if your address, identification numbers, tax returns, educational records, and medical info (which, for some, include those popular DNA lineage reports) are all collated and accessible, it’s not inconceivable that your data could be hacked. Over four years ago, Forbes ran a piece claiming that in 2016 some 400 serious data breaches occurred, targeting mass amounts of data from organizations such as a hospital, a mental-health non-profit, and the National Network of Abortion Funds, a group that may or may not be controversial depending on your vantage point. It’s pretty self-evident that hackers weren’t breaking into medical-related organizations to leave messages of support or anonymous donations.

#big data #latest news #a double-edged sword

Zelma Gerlach

Edge Computing: Device Edge vs. Cloud Edge

It sometimes makes sense to treat edge computing not as a generic category but as two distinct types of architectures: cloud edge and device edge.

Most people talk about edge computing as a singular type of architecture. But in some respects, it makes sense to think of edge computing as two fundamentally distinct types of architectures: device edge and cloud edge.

Although a device edge and a cloud edge operate in similar ways from an architectural perspective, they cater to different types of use cases, and they pose different challenges.

Here’s a breakdown of how device edge and cloud edge compare.

Edge computing, defined

First, let’s briefly define edge computing itself.

Edge computing is any type of architecture in which workloads are hosted closer to the “edge” of the network — which typically means closer to end-users — than they would be in conventional architectures that centralize processing and data storage inside large data centers.

#cloud #edge computing #cloud computing #device edge #cloud edge

Joseph Murray

Double comparison in Java

Recently I was solving an interesting bug that came down to comparing two Double variables with the equals method. It looks innocent; what can be wrong with something like firstDouble.equals(secondDouble)?

The problem here is with how doubles are stored. To fit them into 64 bits (the usual IEEE 754 double format), values that cannot be represented exactly are rounded.

See the example below:

Double firstDouble = 0d;
for (int i = 1; i <= 42; i++) {
    firstDouble += 0.1; // 0.1 has no exact binary representation, so error accumulates
}
Double secondDouble = 0.1 * 42; // a single rounding, not 42 accumulated ones
System.out.println(firstDouble);  // 4.200000000000001
System.out.println(secondDouble); // 4.2
System.out.println(firstDouble.equals(secondDouble)); // false

This inaccuracy is caused by rounding errors.
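You can see the exact value a double really stores by handing it to the BigDecimal(double) constructor, which preserves the full binary expansion (this snippet is an illustration, not from the original article):

import java.math.BigDecimal;

public class ExactDoubleValue {
    public static void main(String[] args) {
        // Prints the exact binary value behind the literal 0.1
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}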

We need to use a different approach to compare those doubles.
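The excerpt ends before the article’s own fix, but a minimal sketch of one common approach, comparing with a small tolerance (an epsilon), could look like this; the threshold value is a per-use-case choice, not something the article prescribes:

public class NearlyEqual {
    private static final double EPSILON = 1e-9; // tolerance; tune for your domain

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        double firstDouble = 0d;
        for (int i = 1; i <= 42; i++) {
            firstDouble += 0.1;
        }
        double secondDouble = 0.1 * 42;
        System.out.println(nearlyEqual(firstDouble, secondDouble)); // true
    }
}

For exact decimal arithmetic (money, for example), java.math.BigDecimal constructed from a String avoids the problem entirely.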

#java #double comparison in java #double comparison #comparisons #double

Juanita Apio

Computing on the EDGE

Most companies in today’s era are moving to the cloud for their computation and storage needs. The cloud provides a one-shot solution for needs across various aspects, be it large-scale processing, ML model training and deployment, or big data storage and analysis. But this requires moving data, video, or audio to the cloud for processing and storage, which has certain shortcomings compared to doing it at the client:

  • Network latency
  • Network cost and bandwidth
  • Privacy
  • Single point failure

On the other side, the cloud has its own advantages, which I will not talk about right now. With all this in mind, how about a hybrid approach, where a few requirements move to the client and some remain in the cloud? This is where edge computing comes into the picture. Here is Wikipedia’s definition:

Edge computing is “a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.”

Edge has a lot of use cases, like:

  • Trained ML models (especially video and audio) sitting closer to the edge for inference or prediction.
  • IoT data analysis for large-scale machines right at the edge.

Look at the Gartner hype cycle for emerging technologies: edge is gaining momentum.

There are many platforms in the market specialising in edge deployments: cloud solutions like Azure IoT Hub and AWS Greengrass, open-source options like KubeEdge and EdgeX Foundry, and third-party offerings like Intellisite.

I will focus this article on using one of these platforms to build an “Attendance platform” on the edge using facial recognition. I will add as many links as possible for your reference.

Let us start by taking the first step and defining the requirements:

  • Capture video from the camera
  • Recognise faces based on a trained ML model
  • Display the video feed with recognised faces on the monitor
  • Log attendance in a database
  • Collect logs and metrics
  • Save unrecognised images to a central repository for retraining and improving the model
  • Multi-site deployments

Choosing a platform

Choosing the right platform from so many options was a bit tricky. For the POC, we looked at a few aspects of each platform:

  • Pricing
  • Infrastructure maintenance
  • Learning curve
  • Ease of use

There were other metrics as well, but these were the ones top of mind. Azure IoT looked pretty good in terms of the above evaluation. We also looked at KubeEdge, which provides deployments on Kubernetes at the edge; it is open source and looked promising. But with the many components (cloud and edge) involved and the maintenance overhead they imply, we decided not to move ahead with open source. We were already using Azure for our other cloud infrastructure, which also made choosing this platform a little easier. The comparison of leading platform players below also helped.

Leading platform players

Designing the solution

Azure IoT Hub provides two main components. One is the cloud component, responsible for managing deployments on the edge and collecting data from them. The other is the edge component, consisting of:

  • Edge Agent: manages deployment and monitoring of modules on the IoT Edge device
  • Edge Hub: handles communication between modules on the IoT Edge device, and between the device and the IoT Hub

I will not go into the details; you can find more about Azure IoT Edge here. In brief, Azure IoT Edge requires modules packaged as containers, which are pushed to the edge. The edge device first needs to be registered with the IoT Hub. Once the Edge Agent connects with the hub, you can push your modules using a deployment.json file (a trimmed sketch follows below). The container runtime that Azure IoT Edge uses is Moby.
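For orientation, here is a heavily trimmed, hypothetical sketch of what such a deployment.json can look like. The module and registry names are made up, and a real manifest also needs schemaVersion, runtime, and systemModules entries under $edgeAgent, omitted here for brevity; the route under $edgeHub uses the standard IoT Edge routing syntax:

{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "modules": {
          "cameraModule": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": { "image": "myregistry.azurecr.io/camera:1.0" }
          },
          "inferenceModule": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": { "image": "myregistry.azurecr.io/inference:1.0" }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "routes": {
          "attendanceToCloud": "FROM /messages/modules/inferenceModule/outputs/attendance INTO $upstream"
        }
      }
    }
  }
}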

We used the Azure IoT free tier, which was sufficient for our POC. Check the pricing here.

As per the requirements of the POC, this is what we came up with

The solution consists of various containers deployed on the edge, as well as a few cloud deployments. I will talk about each component in detail as we move ahead.

As part of the POC, we assumed two sites where attendance needs to be taken at multiple gates. To simulate this, we created four Ubuntu machines (this is the Ubuntu desktop image we used). For attendance, we created videos containing still photos of a few film stars and sportspersons; these videos simulate the cameras, one for each gate.

Modules in action

Camera module

It captures the IP camera feed and pushes frames for consumption:

  • It uses Python OpenCV for capture. For the POC, we read video files pushed inside the container.
  • Frames are published to ZeroMQ (a brokerless message queue).
  • We used the python3-opencv Docker container as the base image and the pyzmq module for the message queue. Check this blog on how to use ZeroMQ with Python.

The module was configured through a number of environment variables, one being the sampling rate of the video frames. Processing all frames requires high memory and CPU, so it is advisable to drop frames to reduce CPU load. This can be done in either the camera module or the inferencing module.

Inference Module

  • We used a pre-existing face recognition deep learning model for our inferencing needs.
  • We trained the model with easily available images of film stars and sportspersons.
  • The model was deliberately not trained on a couple of the images present in the video, to showcase the undetected-image use case. These undetected images were stored in ADLS Gen2, explained in the storage module.
  • The Python pyzmq module was used to consume frames published by the camera module.
  • Not every frame was processed; some frames were dropped based on the configuration set via environment variables.
  • Once an image was recognised, a message (JSON) for attendance was sent to the cloud using the IoT Edge Hub. Use this to specify routes in your deployment file (the manifest sketch above shows one such route).

#deep-learning #edge-computing #azure #edge