Deval Desai

C++20 & Rust on Static vs Dynamic Generics

Demo of static vs dynamic generics with C++20 concepts & virtual methods and with Rust traits.

#rust #cplusplus #wasm #webdev

Abdullah Kozey

Using WebAssembly threads from C, C++ and Rust

WebAssembly threads support is one of the most important performance additions to WebAssembly. It allows you to either run parts of your code in parallel on separate cores, or run the same code over independent parts of the input data, scaling it to as many cores as the user has and significantly reducing the overall execution time.

In this article you will learn how to use WebAssembly threads to bring multithreaded applications written in languages like C, C++, and Rust to the web.

How WebAssembly threads work

WebAssembly threads support is not a separate feature, but a combination of several components that allows WebAssembly apps to use traditional multithreading paradigms on the web.

Web Workers

The first component is the regular Web Workers you know and love from JavaScript. WebAssembly threads use the Worker constructor to create new underlying threads. Each thread loads a piece of JavaScript glue code, and then the main thread uses the Worker#postMessage method to share the compiled WebAssembly.Module as well as a shared WebAssembly.Memory (see below) with those other threads. This establishes communication and allows all those threads to run the same WebAssembly code on the same shared memory without going through JavaScript again.

Web Workers have been around for over a decade now, are widely supported, and don’t require any special flags.
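To make this concrete, here is a rough sketch of the wiring described above. It is a hand-rolled illustration, not the actual glue code a toolchain such as Emscripten generates for you; the file names app.wasm and worker.js and the env.memory import name are assumptions for the example.

// main.js (illustrative; assumes a module script so top-level await is available)
// Compile the module once, then share it and a shared memory with a worker.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10, shared: true });
const module = await WebAssembly.compileStreaming(fetch('app.wasm')); // hypothetical file name
const worker = new Worker('worker.js');                               // hypothetical file name
worker.postMessage({ module, memory });

// worker.js (illustrative): instantiate the same module against the same shared memory.
self.onmessage = async (event) => {
  const { module, memory } = event.data;
  // The import namespace ('env') depends on how the module was compiled.
  const instance = await WebAssembly.instantiate(module, { env: { memory } });
  // instance.exports now runs WebAssembly code on memory shared with the main thread.
};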

SharedArrayBuffer

WebAssembly memory is represented by a WebAssembly.Memory object in the JavaScript API. By default WebAssembly.Memory is a wrapper around an ArrayBuffer—a raw byte buffer that can be accessed only by a single thread.

> new WebAssembly.Memory({ initial:1, maximum:10 }).buffer
ArrayBuffer { … }

To support multithreading, WebAssembly.Memory gained a shared variant too. When created with a shared flag via the JavaScript API, or by the WebAssembly binary itself, it becomes a wrapper around a SharedArrayBuffer instead. It’s a variation of ArrayBuffer that can be shared with other threads and read or modified simultaneously from either side.

> new WebAssembly.Memory({ initial:1, maximum:10, shared:true }).buffer
SharedArrayBuffer { … }

Unlike postMessage, normally used for communication between the main thread and Web Workers, SharedArrayBuffer doesn’t require copying data or even waiting for the event loop to send and receive messages. Instead, any changes are seen by all threads nearly instantly, which makes it a much better compilation target for traditional synchronisation primitives.

SharedArrayBuffer has a complicated history. It was initially shipped in several browsers in mid-2017, but had to be disabled at the beginning of 2018 due to the discovery of the Spectre vulnerabilities. The particular reason was that data extraction in Spectre relies on timing attacks—measuring the execution time of a particular piece of code. To make this kind of attack harder, browsers reduced the precision of standard timing APIs like Date.now and performance.now. However, shared memory, combined with a simple counter loop running in a separate thread, is also a very reliable way to get high-precision timing, and it’s much harder to mitigate without significantly throttling runtime performance.

Instead, Chrome 68 (mid-2018) re-enabled SharedArrayBuffer by leveraging Site Isolation—a feature that puts different websites into different processes and makes it much more difficult to use side-channel attacks like Spectre. However, this mitigation was still limited to desktop Chrome, as Site Isolation is a fairly expensive feature that couldn’t be enabled by default for all sites on low-memory mobile devices, nor had it yet been implemented by other vendors.

Fast-forward to 2020: Chrome and Firefox both have implementations of Site Isolation, and a standard way for websites to opt in to the feature with COOP and COEP headers. An opt-in mechanism makes it possible to use Site Isolation even on low-powered devices, where enabling it for all websites would be too expensive. To opt in, add the following headers to the main document in your server configuration:

Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin

Once you opt in, you get access to SharedArrayBuffer (including WebAssembly.Memory backed by a SharedArrayBuffer), precise timers, memory measurement and other APIs that require an isolated origin for security reasons. Check out Making your website “cross-origin isolated” using COOP and COEP for more details.
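Once the headers are in place, you can verify from JavaScript that the opt-in took effect: the standard crossOriginIsolated global reflects whether the page is cross-origin isolated. A minimal check might look like this (the single-threaded fallback is just one possible strategy):

// Runs on the main thread or inside a worker.
if (crossOriginIsolated) {
  // SharedArrayBuffer and shared WebAssembly.Memory are available here.
} else {
  // Headers missing or not applied: fall back to a single-threaded build, for example.
  console.warn('Page is not cross-origin isolated; WebAssembly threads are unavailable.');
}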

WebAssembly atomics

While SharedArrayBuffer allows each thread to read and write to the same memory, for correct communication you want to make sure they don’t perform conflicting operations at the same time. For example, it’s possible for one thread to start reading data from a shared address while another thread is writing to it, so the first thread ends up with a corrupted result. This category of bugs is known as race conditions. In order to prevent race conditions, you need to somehow synchronize those accesses. This is where atomic operations come in.

WebAssembly atomics is an extension to the WebAssembly instruction set that allows small cells of data (usually 32- and 64-bit integers) to be read and written “atomically”, that is, in a way that guarantees that no two threads are reading or writing the same cell at the same time, preventing such conflicts at a low level. Additionally, WebAssembly atomics contains two more instruction kinds—“wait” and “notify”—that allow one thread to sleep (“wait”) on a given address in shared memory until another thread wakes it up via “notify”.

  • All the higher-level synchronisation primitives, including channels, mutexes, and read-write locks, build upon those instructions.
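Compiled C, C++ and Rust code emits these instructions directly, but the same operations are also exposed to JavaScript through the Atomics API, which makes the wait/notify idea easy to sketch. In this hypothetical example (worker.js is an assumed file name), a worker sleeps on a shared 32-bit cell until the main thread stores a value and notifies it:

// main.js: create a shared 32-bit cell and hand it to a worker.
const sab = new SharedArrayBuffer(4);
const cell = new Int32Array(sab);
const worker = new Worker('worker.js');
worker.postMessage(sab);

// ... later: publish a value atomically and wake one thread sleeping on index 0.
Atomics.store(cell, 0, 42);
Atomics.notify(cell, 0, 1);

// worker.js: block until the cell changes from its initial value of 0.
self.onmessage = (event) => {
  const cell = new Int32Array(event.data);
  Atomics.wait(cell, 0, 0);            // sleeps while cell[0] === 0; not allowed on the main thread
  console.log(Atomics.load(cell, 0));  // 42
};

Read-modify-write operations such as Atomics.add provide the race-free counterpart for shared counters, which is exactly the kind of building block the higher-level primitives above rely on.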

#rust #c #cplusplus #c++

Generics type in C# | Generic Class | Generic Method | C# Bangla Tutorial | Advanced C#

https://youtu.be/xGjX0R_6qDg

#oop in c# #object oriented programming in c# #object oriented concept in c# #learn oop concept #advance c# #generics type in c#

Generics type example in C# | Generic Class | Generic Method | C# Tutorial | Advanced C#

https://youtu.be/xfDjyg9jKSk

#oop in c# #object oriented programming in c# #object oriented concept in c# #learn oop concept #advance c# #generics type example in c#

C/C++ vs. Rust: A developer’s perspective

C++ is an incredibly fast and efficient programming language. Its versatility knows no bounds and its maturity ensures support and reliability are second to none. Code developed in C++ is also extremely portable; all major operating systems support it. Many developers begin their coding journey with the language, and this is no coincidence. Being object-oriented means it does a very good job of teaching concepts like classes, inheritance, abstraction, encapsulation and polymorphism. Its concepts and syntax can be found in modern languages like C#, Java and Rust. It provides a great foundation that serves as a high-speed on-ramp to more popular, easier-to-use and modern alternatives.

Now it’s not all rosy. C++ has a very steep learning curve and requires developers to apply best practices to the letter or risk ending up with unsafe and/or poor-performing code. The small footprint of the standard library, while often considered a benefit, also adds to the level of difficulty. This means successfully using C++ to create useful complex libraries and applications can be challenging. There is also very little offered in terms of memory management; developers must do this themselves. Novice programmers can end up with debugging nightmares as their lack of experience leads to memory corruption and other sticky situations. This last point has led many companies to explore fast-performing, safe and equally powerful alternatives to C++. For today’s Microsoft, that means Rust.

The majority of vulnerabilities fixed and with a CVE [Common Vulnerabilities and Exposures] assigned are caused by developers inadvertently inserting memory corruption bugs into their C and C++ code. - Gavin Thomas, Microsoft Security Response Center

Rust began as a personal project by a Mozilla employee named Graydon Hoare sometime in 2006. This ambitious project was in pre-release development for almost a decade, finally launching version 1.0 in May 2015. In what seems to be the blink of an eye, it has stolen the hearts of hordes of developers, going as far as being voted the most loved language four years straight since 2016 in the Stack Overflow Developer Survey.

The hard work has definitely paid off. The end result is a very efficient language which is characteristically object oriented. The fact that it was designed to be syntactically similar to C++ makes it very easy to approach. But unlike the aforementioned, it was also designed to be memory safe while employing a form of memory management without the explicit use of garbage collection.

The ugly truth is that software development is very much a trial-and-error endeavor. With that said, Rust has gone above and beyond to help us debug our code. The compiler produces extremely intuitive and user-friendly error messages, along with direct links to relevant documentation to aid with troubleshooting. This means that if the problem is not evident, most times the answer is a click away. I’ve found myself rarely having to fire up my browser to look for solutions outside of what the Rust compiler offers in terms of explanation and documentation.

Rust does not have a garbage collector but most times still allocates and releases memory for you. It’s also designed to be memory safe, unlike C++, which very easily lets you get into trouble with dangling pointers and data races. In contrast, Rust employs concepts that help you prevent such issues.

There are many other factors that have steered me away from C++ and onto Rust. But to be honest, it has nothing to do with all the great stuff we’ve just explored. I came to Rust on a journey that began with WebAssembly. What started with me looking for a more efficient alternative to JavaScript for the web turned into figuring out just how powerful Rust turns out to be. From its seamless interop…

Automatically generate binding code between Rust, WebAssembly, and JavaScript APIs. Take advantage of libraries like web-sys that provide pre-packaged bindings for the entire web platform. – Rust website
To how fast and predictable its performance is.

Everything in our lives evolves. Our smartphones, our cars, our home appliances, our own bodies. C++, while still incredibly powerful, fast and versatile, can only take us so far. There is no harm in exploring alternatives, especially one as exceptional and with as much promise as Rust.

What do you guys think? Have you or would you give Rust a try? Let us know your thoughts in the comments section below.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading

Why you should move from Node.js to Rust in 2019

Rust Vs. Haskell: Which Language is Best for API Design?

7 reasons why you should learn Rust programming language in 2019

An introduction to Web Development with Rust for Node.js Developers

#rust #c++ #c-sharp #c

Static in C# | What is static | Static Methods & Classes | C# Tutorial | Advanced C#

https://youtu.be/U67CWT-I4oY

#oop in c# #object oriented programming #object oriented concept in c# #learn oop concept #advance c# #static in c#