1622225580
This is my first Medium.com story. After reading, tweeting, and bookmarking hundreds of stories, I’ve decided to create a simple one for those of you learning Rust after coming from C++, just like me.
In this short article, we’ll compare and contrast how C++ and Rust handle dynamic polymorphism. A quick disclaimer: I’m not a Rust guru, but rather someone who’s more familiar with C++ (by the way, you can join our San Diego C++ meetup at https://www.meetup.com/San-Diego-CPP/ ).
Let’s start with the purpose of polymorphism: with dynamic, runtime polymorphism, we’re able to hold a pointer or reference to a type that actually points to a more concrete, derived type. It’s the basic lesson people learn when working with OO (object-oriented) languages. For our example, we’ll use **Animal** as the base, top-level type, along with more concrete types like Cat and Dog.
The **Animal** class type has a pure virtual member function, **talk()**, which every derived type must implement.
The above is simple: we create two new classes, Dog and Cat, derive publicly from Animal, and implement the pure virtual function declared in the **Animal** base class, **talk()**. The **final** keyword specifies that the class is a leaf type: you cannot subclass Dog or Cat.
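A minimal sketch of the hierarchy described above (the exact member signatures and output strings are my own placeholders, not the article’s original listing):

```cpp
#include <iostream>

// Abstract base type: the pure virtual talk() makes Animal an interface.
class Animal {
public:
    virtual ~Animal() = default;   // virtual destructor for safe deletion via base pointer
    virtual void talk() const = 0; // pure virtual: derived types must implement it
};

// final: Dog and Cat are leaf types; no further subclassing is allowed.
class Dog final : public Animal {
public:
    void talk() const override { std::cout << "Woof!\n"; }
};

class Cat final : public Animal {
public:
    void talk() const override { std::cout << "Meow!\n"; }
};
```

Note the virtual destructor in the base class: it is what makes deleting a Dog or Cat through an Animal pointer well-defined.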
Here is how we can use the class hierarchy in modern C++.
For this to compile, you will need to **#include** the following headers: **&lt;vector&gt;** and **&lt;memory&gt;**.
In the above main function, we create a vector of pointers to the **Animal** class type. We use unique_ptr<> to manage the heap allocation and deallocation; this is a C++11 feature. make_unique<>() is a helper function from C++14 that allocates such concrete type instances.
Finally, we create a loop (using the C++11 range-based for loop) to iterate over the animals and invoke talk() on each instance. As long as each type implements the talk() interface (overriding the virtual function of the **Animal** base class), the type of the concrete instance is irrelevant.
#object-oriented #cpp #rust
1626250440
WebAssembly threads support is one of the most important performance additions to WebAssembly. It allows you to either run parts of your code in parallel on separate cores, or the same code over independent parts of the input data, scaling it to as many cores as the user has and significantly reducing the overall execution time.
In this article you will learn how to use WebAssembly threads to bring multithreaded applications written in languages like C, C++, and Rust to the web.
WebAssembly threads is not a separate feature, but a combination of several components that allows WebAssembly apps to use traditional multithreading paradigms on the web.
The first component is the regular Workers you know and love from JavaScript. WebAssembly threads use the new Worker constructor to create new underlying threads. Each thread loads a JavaScript glue, and then the main thread uses the Worker#postMessage method to share the compiled WebAssembly.Module as well as a shared WebAssembly.Memory (see below) with those other threads. This establishes communication and allows all those threads to run the same WebAssembly code on the same shared memory without going through JavaScript again.
Web Workers have been around for over a decade now, are widely supported, and don’t require any special flags.
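As a concrete illustration, ordinary portable threading code like the sketch below can be compiled with Emscripten’s -pthread flag, in which case each std::thread is backed by one of these Web Workers sharing a single WebAssembly.Memory (the code itself is plain standard C++; the Worker mapping is the toolchain’s):

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Sums the integers 0..n-1 by splitting the range across `threads`
// std::threads. Under Emscripten's -pthread mode, each std::thread
// runs in a Web Worker on top of a shared WebAssembly.Memory.
int parallel_sum(int n, int threads) {
    std::vector<int> partial(threads, 0); // one slot per thread: no sharing conflicts
    std::vector<std::thread> workers;
    int chunk = n / threads;

    for (int t = 0; t < threads; ++t) {
        int begin = t * chunk;
        int end = (t == threads - 1) ? n : begin + chunk; // last thread takes the remainder
        workers.emplace_back([t, begin, end, &partial] {
            for (int i = begin; i < end; ++i) partial[t] += i;
        });
    }
    for (auto& w : workers) w.join(); // wait for every worker to finish

    return std::accumulate(partial.begin(), partial.end(), 0);
}
```

Each thread writes only to its own slot of `partial`, which sidesteps the data races discussed in the atomics section below.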
SharedArrayBuffer
WebAssembly memory is represented by a WebAssembly.Memory object in the JavaScript API. By default, WebAssembly.Memory is a wrapper around an ArrayBuffer: a raw byte buffer that can be accessed only by a single thread.
> new WebAssembly.Memory({ initial:1, maximum:10 }).buffer
ArrayBuffer { … }
To support multithreading, WebAssembly.Memory gained a shared variant too. When created with a shared flag via the JavaScript API, or by the WebAssembly binary itself, it becomes a wrapper around a SharedArrayBuffer instead. It’s a variation of ArrayBuffer that can be shared with other threads and read or modified simultaneously from either side.
> new WebAssembly.Memory({ initial:1, maximum:10, shared:true }).buffer
SharedArrayBuffer { … }
Unlike postMessage, normally used for communication between the main thread and Web Workers, SharedArrayBuffer doesn’t require copying data or even waiting for the event loop to send and receive messages. Instead, any changes are seen by all threads nearly instantly, which makes it a much better compilation target for traditional synchronisation primitives.
SharedArrayBuffer has a complicated history. It was initially shipped in several browsers in mid-2017, but had to be disabled at the beginning of 2018 due to the discovery of the Spectre vulnerabilities. The particular reason was that data extraction in Spectre relies on timing attacks: measuring the execution time of a particular piece of code. To make this kind of attack harder, browsers reduced the precision of standard timing APIs like Date.now and performance.now. However, shared memory combined with a simple counter loop running in a separate thread is also a very reliable way to get high-precision timing, and it’s much harder to mitigate without significantly throttling runtime performance.
Instead, Chrome 68 (mid-2018) re-enabled SharedArrayBuffer by leveraging Site Isolation, a feature that puts different websites into different processes and makes it much more difficult to use side-channel attacks like Spectre. However, this mitigation was still limited to Chrome desktop, as Site Isolation is a fairly expensive feature that couldn’t be enabled by default for all sites on low-memory mobile devices, nor was it yet implemented by other vendors.
Fast-forward to 2020: Chrome and Firefox both have implementations of Site Isolation, and a standard way for websites to opt in to the feature with the COOP and COEP headers. An opt-in mechanism makes it possible to use Site Isolation even on low-powered devices where enabling it for all websites would be too expensive. To opt in, add the following headers to the main document in your server configuration:
Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin
Once you opt in, you get access to SharedArrayBuffer (including WebAssembly.Memory backed by a SharedArrayBuffer), precise timers, memory measurement and other APIs that require an isolated origin for security reasons. Check out the article “Making your website ‘cross-origin isolated’ using COOP and COEP” for more details.
While SharedArrayBuffer allows each thread to read and write to the same memory, for correct communication you want to make sure they don’t perform conflicting operations at the same time. For example, it’s possible for one thread to start reading data from a shared address while another thread is writing to it, so the first thread will get a corrupted result. This category of bugs is known as race conditions. To prevent race conditions, you need to somehow synchronize those accesses. This is where atomic operations come in.
WebAssembly atomics is an extension to the WebAssembly instruction set that allows reading and writing small cells of data (usually 32- and 64-bit integers) “atomically”: that is, in a way that guarantees that no two threads are reading or writing to the same cell at the same time, preventing such conflicts at a low level. Additionally, WebAssembly atomics contain two more instruction kinds, “wait” and “notify”, which allow one thread to sleep (“wait”) on a given address in shared memory until another thread wakes it up via “notify”.
#rust #c #c++
1557865620
C++ is an incredibly fast and efficient programming language. Its versatility knows no bounds, and its maturity ensures support and reliability are second to none. Code developed in C++ is also extremely portable; all major operating systems support it. Many developers begin their coding journey with the language, and this is no coincidence. Being object-oriented means it does a very good job of teaching concepts like classes, inheritance, abstraction, encapsulation and polymorphism. Its concepts and syntax can be found in modern languages like C#, Java and Rust. It provides a great foundation that serves as a high-speed on-ramp to more popular, easier-to-use and modern alternatives.
Now, it’s not all rosy. C++ has a very steep learning curve and requires developers to apply best practices to the letter or risk ending up with unsafe and/or poorly performing code. The small footprint of the standard library, while often considered a benefit, also adds to the level of difficulty. This means successfully using C++ to create useful, complex libraries and applications can be challenging. There is also very little offered in terms of memory management; developers must handle this themselves. Novice programmers can end up with debugging nightmares as their lack of experience leads to memory corruption and other sticky situations. This last point has led many companies to explore fast-performing, safe and equally powerful alternatives to C++. For today’s Microsoft, that means Rust.
The majority of vulnerabilities fixed and with a CVE [Common Vulnerabilities and Exposures] assigned are caused by developers inadvertently inserting memory corruption bugs into their C and C++ code - Gavin Thomas, Microsoft Security Response Center
Rust began as a personal project by a Mozilla employee named Graydon Hoare sometime in 2006. This ambitious project was in pre-release development for almost a decade, finally launching version 1.0 in May 2015. In what seems the blink of an eye, it has stolen the hearts of hordes of developers, going as far as being voted the most loved language four years straight since 2016 in the Stack Overflow Developer Survey.
The hard work has definitely paid off. The end result is a very efficient language that is characteristically object-oriented. The fact that it was designed to be syntactically similar to C++ makes it very easy to approach. But unlike C++, it was also designed to be memory safe, while employing a form of memory management without the explicit use of garbage collection.
The ugly truth is that software development is very much a trial-and-error endeavor. With that said, Rust has gone above and beyond to help us debug our code. The compiler produces extremely intuitive and user-friendly error messages, along with direct links to the relevant documentation to aid with troubleshooting. This means that if the problem is not evident, most times the answer is a click away. I’ve found myself rarely having to fire up my browser to look for solutions outside of what the Rust compiler offers in terms of explanation and documentation.
Rust does not have a garbage collector, but most times it still allocates and releases memory for you. It’s also designed to be memory safe, unlike C++, which very easily lets you get into trouble with dangling pointers and data races. In contrast, Rust employs concepts that help you prevent and avoid such issues.
There are many other factors which have steered me away from C++ and onto Rust. But to be honest it has nothing to do with all the great stuff we’ve just explored. I came to Rust on a journey that began with WebAssembly. What started with me looking for a more efficient alternative to JavaScript for the web turned into figuring out just how powerful Rust turns out to be. From its seamless interop…
Automatically generate binding code between Rust, WebAssembly, and JavaScript APIs. Take advantage of libraries like web-sys that provide pre-packaged bindings for the entire web platform. – Rust website
To how fast and predictable its performance is. Everything in our lives evolves: our smartphones, our cars, our home appliances, our own bodies. C++, while still incredibly powerful, fast and versatile, can only take us so far. There is no harm in exploring alternatives, especially one as exceptional and with as much promise as Rust.
What do you guys think? Have you or would you give Rust a try? Let us know your thoughts in the comments section below.
Thanks for reading ❤
#rust #c++ #c-sharp #c
1624694160
Rust and C++ languages are very important in IoT development: they are both utilized in the areas where the direct connection with hardware configuration, speed of performance, and low-level access to memory and controllers matter the most. Specifically, device and application levels of IoT, system programming (drivers, operating system kernels, controllers, etc.), desktop utility programming, 3D game development, and many other spheres. Let’s check the most important comparison areas to find out which of these two is the best choice for your project!
C++ is an “old-school” object-oriented programming language developed by Bjarne Stroustrup in 1985. It improved on the concepts of the C language, becoming a so-called “C with Classes”, which was a kind of revolutionary solution several decades ago. C++ was designed as a really powerful system programming instrument: literally, the majority of Microsoft products were developed using different editions of Visual C++ (or simply C++), including “epic” software packages such as Windows 95, 98, ME, 2000 and XP. Since it’s an object-oriented programming language, C++ provides a determined code structure, enables reusability of code modules, and is also praised for fast performance. Moreover, it’s a multi-purpose language, meaning you can use it to build a very wide range of products, from resource-constrained software and basic graphical user interface applications to sophisticated 3D visuals, desktop games, and powerful business packages. C++ is valued by developers for its wide capabilities as well as its efficiency and flexibility.
Rust is a system-level programming language, developed by Mozilla in 2010, which is aimed at achieving higher performance and better safety levels in comparison to C++. Specifically, it’s designed to cope with certain issues that C++ has never been good with, such as memory-related inefficiencies and concurrent programming. In terms of syntax, Rust is pretty close to C++, but it turned out to be more “loveable” (in fact, it was named the “most loved language” for five years in a row), meaning it’s more convenient and versatile than others, so a great number of developers have employed it for their projects instead of C++.
It was used to develop the Mozilla Firefox browser and ensures a safer way of memory management without using garbage collection methods. It’s considered a low-level language as it provides detailed control possibilities, especially manual memory management. Also, Rust produces the smallest binary possible and compiles rapidly, with minimum overhead.
#software #web development #c++ #rust #rust vs c++
1618356600
We have now come to the part of the C# in Simple Terms series where we can explore some cool but little-used C# features. Among these is the ability to access values in a class instance in the same way we access array values; we do this using a C# feature called indexers.
So, let’s build some indexers!
#c# in simple terms #c# #c #c++
1618364160
A few posts back, we talked about Arrays and Collections, and how easy they were to deal with.
In this post, we’ll talk about a feature of C# that allows us developers to iterate over many different kinds of collections and return elements from them one-by-one. Let’s learn about iterators!
#c# in simple terms #c# #c #c++