1656732888
Visually stunning, easy-to-customize site templates built with React and Next.js. Designed and built by the makers of Tailwind CSS themselves, they're the perfect starting point for your next project.
Read more: https://www.producthunt.com
#tailwindcss #tailwind #webdev #programming
1656665673
Connect a Flutter feature to a native app and get up and running in just a few lines of code
If you’ve ever wanted to try using Flutter, but don’t want to build something from scratch, Flutter’s add-to-app functionality is a great place to start. To make it even easier to put Flutter to work for you, we teamed up with the Flutter team to create a sample add-to-app prototype to showcase how Flutter can be integrated into a native codebase with minimal effort. Whether you want to take Flutter for a trial run or show your team how Flutter works in a tangible way, this article is for you!
Adding Flutter to a native newsfeed app
In this tutorial, we’ll show you how to take a Flutter feature and, using Flutter’s add-to-app API, incorporate it into a native app in just a few lines.
The project we’ll look at is split into two parts. The first part contains three identical newsfeed applications for three separate native platforms: Android, iOS, and the web. The app is interactive, so you can run it on a device and scroll through the articles, click on the news items, and more. The second part of this project is a dialog that pops up when interacting with the app and asks users to submit feedback. This feature, which we’ll call the NPS (Net Promoter Score) module, is built with Flutter.
First, go to the example repository. Here you'll see a folder for each platform containing the native code for the newsfeed app. Also in the repository is the flutter_nps folder that contains all of the Flutter code for the NPS module.
Adding Flutter on the web with Angular
The Flutter module runs as an <iframe> within the native web app. To integrate the feature into the Angular codebase, first run a Flutter build for the web target. This step generates an index.html and other necessary files. Copy all of the build files into the Angular app src folder. From there, you can reference the build files within the iframe. The next time you run the web app, you'll see the Flutter feature!
View the README for full instructions.
Adding Flutter to Android with Kotlin
Now let’s add the NPS module into the Android app. First, start a Flutter activity using a cached engine. As soon as you launch the native news app, the Flutter engine warms up in the background. Then, you’ll start a new activity and point it to the Flutter activity. This ensures a quick transition from the native Kotlin code to Flutter and allows the Flutter feature to work seamlessly within the Android app.
View the README for full instructions.
Adding Flutter on iOS with SwiftUI
Finally, we can add the NPS module into the iOS app. First, embed the compiled Flutter module into your application in Xcode’s build settings: run flutter build ios-framework with the output path to produce the frameworks, and add them to the project. Then, in your application delegate, create an instance of the Flutter engine and start it up. With that done, you’re ready to display Flutter UI wherever needed: just create a FlutterViewController using the Flutter Engine and present it.
View the README for full instructions.
Put Flutter to work for you (and your team!)
Now that you have the Flutter code up and running within your app, you can experiment with some of the fun parts of Flutter. The following sections include some ideas for where to begin.
Supporting multiple platforms
In this newsfeed example, you can see how the NPS module supports platform differences. On the web, the module appears as a dialog on top of the newsfeed and reacts to input from a mouse or screen reader. On mobile, the module takes up the full screen space and reacts to input via touch or screen reader.
Note that the Flutter NPS module contains Material widgets, which automatically handle gesture detection, depending on the user’s device. If using a desktop device, the app receives mouse input, and if using a mobile device, the app receives touch input.
Animations
This prototype includes a few implicit animations that are easy to adjust since they are built into the Flutter framework. For example, if you want to make changes to the AnimatedContainer widget, simply adjust its properties, such as the duration of the animation, the height of the container, its shape, and color.
...
return AnimatedContainer(
  duration: duration,
  height: Spacing.huge,
  decoration: BoxDecoration(
    shape: BoxShape.circle,
    color: isSelected
        ? NpsColors.colorSecondary
        : NpsColors.colorGrey5,
  ),
  ...
)
The NPS module includes a custom page animation transition. Take a look at the SlideTransition widget for another animation example that could be customized by updating its duration and other elements.
SlideTransition(
  position: Tween<Offset>(
    begin: const Offset(0, 1),
    end: Offset.zero,
  ).animate(animation),
  child: child,
);
If you want to take your animations to the next level, you could import the animations package from pub.dev and use some of the fancy, pre-built animations.
Theming
It’s also simple to update the theme of the NPS module. Because it uses the built-in Material theming via ThemeData, you can simply update the colors, button style, and font all in one place. For example, to change the accentColor and backgroundColor of the NPS module with Flutter, update them to your desired colors using the provided Material color palette shades, or your own custom colors.
class AppTheme {
  ThemeData get theme => ThemeData(
        colorScheme: ColorScheme.fromSwatch(
          accentColor: NpsColors.colorSecondary,
          backgroundColor: NpsColors.colorWhite,
        ),
        scaffoldBackgroundColor: NpsColors.colorWhite,
        elevatedButtonTheme: ElevatedButtonThemeData(
          style: ElevatedButton.styleFrom(
            primary: NpsColors.colorSecondary,
            shape: RoundedRectangleBorder(
              borderRadius: BorderRadius.circular(24),
            ),
          ).copyWith(
            backgroundColor: MaterialStateProperty.resolveWith<Color?>(
              (Set<MaterialState> states) {
                if (!states.contains(MaterialState.disabled)) {
                  return NpsColors.colorSecondary;
                } else if (states.contains(MaterialState.disabled)) {
                  return NpsColors.colorWhite;
                }
                return null;
              },
            ),
          ),
        ),
        textTheme: const TextTheme(
          headline5: NpsStyles.headline5,
          subtitle1: NpsStyles.subtitle1,
          bodyText2: NpsStyles.link,
        ),
      );
}
Additional features
The Flutter NPS module uses flutter_bloc for state management to keep track of the user’s score response. Cubit is one of many options for state management when building Flutter applications. The feature also includes unit and widget tests, which are useful tools to ensure that the code you’re writing is working as intended. Finally, the codebase has localization support for 78 languages out of the box. This project has GitHub Actions integration for continuous integration to run formatting, linting, and test phases before merging changes.
Backend
While this prototype does not currently interact with a backend, you could configure it with a backend of your choosing to store the data from the NPS module, or even pull in sample articles for the native newsfeed. One option to explore is Firebase, which integrates seamlessly with Flutter. See the Firebase documentation to add Firebase to your Flutter app.
Now that you know how to add a Flutter feature into a native web, Android, and iOS codebase, you could follow a similar process to integrate the feature into any native app. See the full add-to-app documentation for more information.
Check out the full code in the open source repository here.
Original article source at https://medium.com
#dart #flutter #programming
1656665297
This project is a demo intended to help people test drive Flutter by integrating it into their existing applications.
Included in this repo is a Flutter add-to-app module, which contains the UI and logic for displaying a popup to capture user feedback (a "net promoter score"). Alongside the module are three newsfeed applications for iOS, Android, and web, built with SwiftUI, Kotlin, and Angular, respectively.
The applications demonstrate how to import a Flutter module and display it using native platform code. If you're looking to learn, for example, how to wire up a UI element in Swift to navigate to content displayed with Flutter, the iOS newsfeed app can show you.
If you'd like to try Flutter in your own applications, you can download a prebuilt copy of the Flutter module for Android, iOS, or web from this repo, and follow the instructions below to integrate it into your own apps. Note that you don't need the Flutter SDK installed on your development machine for these to work!
Full instructions for adding a module to an existing iOS app are available in the add-to-app documentation at flutter.dev, but you can find the short version for both Swift and Objective-C below.
Download a recent framework build of the Flutter module from this repo.
Unzip the archive into the root of your project directory. It will create a directory there called flutter-framework containing the compiled Flutter module.
Open the flutter-framework/Release directory and drag App.xcframework and Flutter.xcframework to the General > Frameworks, Libraries, and Embedded Content section of your app target in Xcode.
Once the Flutter module is linked into your application, you're ready to fire up an instance of the Flutter engine and present the Flutter view controller.
Swift
In AppDelegate.swift, add the following three lines marked as "NEW":
import UIKit
import Flutter // NEW!
@UIApplicationMain
class AppDelegate: FlutterAppDelegate {
lazy var flutterEngine = FlutterEngine(name: "my flutter engine") // NEW!
override func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
flutterEngine.run(); // NEW!
return super.application(application, didFinishLaunchingWithOptions: launchOptions);
}
}
Then, in a ViewController class somewhere in your app, call these three lines of code to present the Flutter module's UI:
let flutterEngine = (UIApplication.shared.delegate as! AppDelegate).flutterEngine
let flutterViewController =
FlutterViewController(engine: flutterEngine, nibName: nil, bundle: nil)
present(flutterViewController, animated: true, completion: nil)
For a demo, it's usually best to call these in response to a UI event like a button press. Once they're executed, the net promoter score UI will appear in your app!
Objective-C
In AppDelegate.h, add this import:
@import Flutter;
and this property to the AppDelegate interface:
@property (nonatomic,strong) FlutterEngine *flutterEngine;
Next, in AppDelegate.m, add these two lines to didFinishLaunchingWithOptions:
self.flutterEngine = [[FlutterEngine alloc] initWithName:@"my flutter engine"];
[self.flutterEngine run];
Then, somewhere in a UIViewController class in your app, @import Flutter and call these lines of code:
FlutterEngine *flutterEngine =
((AppDelegate *)UIApplication.sharedApplication.delegate).flutterEngine;
FlutterViewController *flutterViewController =
[[FlutterViewController alloc] initWithEngine:flutterEngine nibName:nil bundle:nil];
[self presentViewController:flutterViewController animated:YES completion:nil];
For a demo, it's usually best to call these in response to a UI event like a button press. Once they're executed, the net promoter score UI will appear in your app!
Full instructions for adding a module to an existing Android app are available in the add-to-app documentation at flutter.dev, but you can find the short version for both Kotlin and Java below.
First, download a recent aar build of the Flutter module from this repo. Then, create a directory in the root of your project called flutter and unzip the archive into that directory.
Next, add the following entries to the repositories and dependencies sections of your app/build.gradle file:
repositories {
// Add these two maven entries.
maven {
url '../flutter'
}
maven {
url 'https://storage.googleapis.com/download.flutter.io'
}
}
dependencies {
// Add these three entries.
debugImplementation 'com.example.flutter_module:flutter_debug:1.0'
profileImplementation 'com.example.flutter_module:flutter_profile:1.0'
releaseImplementation 'com.example.flutter_module:flutter_release:1.0'
}
Once the Flutter module is linked into your application, fire up an instance of the Flutter engine and present the Flutter Activity.
Kotlin
In your app's Application class, add a property for a Flutter engine:
lateinit var flutterEngine : FlutterEngine
In onCreate, instantiate and cache a running Flutter engine with this code:
// Instantiate a FlutterEngine.
flutterEngine = FlutterEngine(this)
// Start executing Dart code to pre-warm the FlutterEngine.
flutterEngine.dartExecutor.executeDartEntrypoint(
DartExecutor.DartEntrypoint.createDefault()
)
// Cache the FlutterEngine to be used by FlutterActivity.
FlutterEngineCache
.getInstance()
.put("my_engine_id", flutterEngine)
Then, in an Activity class somewhere in your app, call these four lines of code to launch the Flutter module's UI:
startActivity(
FlutterActivity
.withCachedEngine("my_engine_id")
.build(this)
)
For a demo, it's usually best to call these in response to a UI event like a button press. Once they're executed, the net promoter score UI will appear in your app!
Java
In your app's Application class, add a property for a Flutter engine:
public FlutterEngine flutterEngine;
In onCreate, instantiate and cache a running Flutter engine with this code:
// Instantiate a FlutterEngine.
flutterEngine = new FlutterEngine(this);
// Start executing Dart code to pre-warm the FlutterEngine.
flutterEngine.getDartExecutor().executeDartEntrypoint(
DartEntrypoint.createDefault()
);
// Cache the FlutterEngine to be used by FlutterActivity.
FlutterEngineCache
.getInstance()
.put("my_engine_id", flutterEngine);
Then, in an Activity class somewhere in your app, call these four lines of code to launch the Flutter module's UI:
startActivity(
FlutterActivity
.withCachedEngine("my_engine_id")
.build(currentActivity)
);
For a demo, it's usually best to call these in response to a UI event like a button press. Once they're executed, the net promoter score UI will appear in your app!
There are nearly as many ways to build a website as there are published websites, so to a certain extent the "right" way to integrate the Flutter module into your site will depend on the particular client-side technologies you're using (Angular, React, Vue, etc.) and the server you're using to host your content (Firebase Hosting, nginx, etc.).
It's not possible to cover all of the possibilities here, but the basic approach is roughly the same for any of them:
Download a recent web build of the Flutter module from this repo.
Unzip the archive into a folder somewhere within your project's source tree where it will be picked up and served by the server technology you're using (in the "public" folder of a Firebase Hosting project, for example).
Add an iframe to one of the pages in your site, and set its src URL to point to the index.html file inside the web-project-flutter folder created when unzipping the archive in the previous step.
Add client-side code to the same page to display the iframe in response to a convenient UI event, such as a button press.
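As a rough sketch of what that client-side code might look like (not part of the repo; the element IDs are placeholders, and the iframe is assumed to start hidden with display: none):
// Show the Flutter NPS iframe when the user presses a button.
const feedbackButton = document.getElementById('feedback-button')
const flutterFrame = document.getElementById('flutter-nps-frame')
feedbackButton.addEventListener('click', () => {
  flutterFrame.style.display = 'block'
})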
The sample web application in this repo is built with Angular, and can serve as a model for web integrations. If you're also running Angular, follow the steps below to integrate the model into your project.
First, download a recent web build of the Flutter module and unzip it into the src directory of your project.
Next, change <base href="/"> to <base href="./"> in src/web-project-flutter/index.html.
Then update angular.json to include the new files:
"assets": [
"src/favicon.ico",
"src/assets",
"src/web-project-flutter"
],
Add an iframe to the page where you want to display the Flutter module:
<iframe src="./web-project-flutter/index.html"> </iframe>
Move to your Angular project directory and run npm install (or npm install --legacy-peer-deps, depending on your npm dependencies).
Finally, run ng serve to start the application.
Download Details:
Author: flutter
Source Code: https://github.com/flutter/put-flutter-to-work
#flutter #dart #programming
1656664449
Want to write a fuzz test? In this video, we are joined by Saba, a Software Engineering Manager at Google, who shows how to write a fuzz test for a simple reverse string function, run the go test command, and understand the log. You will need Go 1.18 to write a fuzz test!
Go 1.18 installation instructions → https://goo.gle/3AbDsoA
Chapters:
0:00 - Intro
0:19 - Prerequisites for Fuzzing
0:56 - Demo
6:16 - Wrap up
#go #golang #programming
1656661330
Kickstart your Web Development career or take it to the next level with this JavaScript tutorial. This JavaScript course is perfect for Beginners as well as Intermediate Developers.
⏳Timestamps:
00:00:00 - Set up Your Environment as a JavaScript Developer
00:07:01 - JavaScript 101
00:22:39 - Build Your First App with JavaScript (Beginner Weather App)
00:40:44 - Build a Basic Tip Calculator App (For Beginners)
01:11:22 - What are Arrays in JavaScript? (Understanding Arrays | Arrays 101)
01:25:10 - What are Objects in JavaScript? (Understanding Objects | Objects 101)
01:45:04 - Understanding For Loops in JavaScript (Getting Started with Loops)
02:04:21 - Practice Arrays and Objects with these Exercises (For Beginners)
03:01:02 - Array Methods (mapping, filtering, reducing | Understanding Array Methods)
03:42:02 - Understanding the DOM (DOM Manipulation for Beginners)
04:22:31 - Build the Advanced Tip Calculator Project (For Beginners & Intermediate)
05:06:06 - Build a Rock Paper Scissors App (Part 1)
05:40:31 - Building the Rock Paper Scissors App with JavaScript (Portfolio Project!)
06:12:18 - What are APIs in JavaScript? (Understanding APIs)
06:33:51 - Build the Superhero App (For Beginners & Intermediate)
07:39:58 - Understanding Async Programming (Promises, Async, Await, Fetch, Then)
08:26:28 - Building the Weather App with JavaScript (Portfolio Project for Entry Level Developers)
08:54:59 - What are Classes in JavaScript? (Object Oriented Programming | OOP)
10:14:56 - Advanced Web Development (Loops, Listeners, Audio | For Beginners & Intermediate)
10:32:03 - Building the Fighting Game with JavaScript (Portfolio Project!)
11:49:52 - Building the Netflix Clone with JavaScript (Portfolio Project!)
#web3 #frontend #javascript #programming #developer #webdev #blockchain
1656659535
How To Make a Circular Progress Bar with HTML, CSS, and JavaScript, step by step from scratch.
A JavaScript circular progress bar is used on many websites to show things like skills, education, and experience. I made this circular progress bar with the help of HTML, CSS, and JavaScript. I have also designed another progress bar with the help of Bootstrap.
The design is quite simple and fully responsive. There are three bars: the first displays an HTML percentage, the second CSS, and the third JavaScript.
First of all, I gave the web page the background color #0d0c2d. Then I made a box on that page, keeping the color of the box the same as the background color of the page.
A shadow around the box indicates its size. The box holds three circular progress bars, each with a percentage and a label. I used the color #36e617 to show progress here; you can use any color you want.
It is made in a very simple and easy way, and you can set the percentages to whatever you need.
Below is a demo section that will help you better understand how it works, along with the required source code, which you can copy and use in your own work.
If you are a beginner and want to know how to create a circular progress bar, follow the tutorial below.
I used two CDN links to make this: the first is jQuery and the second is easyPieChart. Both links are given below; copy them and add them to the head section of your HTML file.
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/easy-pie-chart/2.1.6/jquery.easypiechart.min.js"></script>
First of all, I designed the background of this web page and made a box in it; all of the progress bars go inside this box. I set the page background to the dark blue #0d0c2d and used a height of 100vh.
<div class="container">
</div>
body {
margin: 0;
padding: 0;
justify-content: center;
height: 100vh;
color:white;
background: #0d0c2d;
font-family: sans-serif;
display: flex;
}
.container {
background: #0d0c2d;
padding: 60px;
display: grid;
grid-template-columns: repeat(1, 160px);
grid-gap: 80px;
margin: auto 0;
box-shadow: -5px -5px 8px rgba(94, 104, 121, 0.288),
4px 4px 6px rgba(94, 104, 121, 0.288);
}
Now I have added all the elements of this JavaScript circular progress bar using HTML. Here data-percent="" determines the value of each progress bar: 90% for HTML, 72% for CSS, and 81% for JavaScript. You can change these values as needed.
Each bar also has a label so you know which bar is for which skill. The circular progress bar itself is driven by jQuery code, which is why the jQuery CDN link was added first.
<div class="box">
<div class="chart" data-percent="90" >90%</div>
<h2>HTML</h2>
</div>
<div class="box">
<div class="chart" data-percent="72" >72%</div>
<h2>CSS</h2>
</div>
<div class="box">
<div class="chart" data-percent="81" >81%</div>
<h2>JAVASCRIPT</h2>
</div>
➤ First, I used size: 160, which determines the size of the circle.
➤ barColor: "#36e617" determines the progress color. I used green here; you can use any other color if you want.
➤ lineWidth: 15 determines the thickness of the colored line in the bar.
➤ trackColor: "#525151" sets the background color of the circular track.
➤ animate: 2000 means the animation takes 2000 milliseconds (2 seconds), so when you open the page it takes two seconds for the bar to go from zero to the value you set.
$(function() {
$('.chart').easyPieChart({
size: 160,
barColor: "#36e617",
scaleLength: 0,
lineWidth: 15,
trackColor: "#525151",
lineCap: "circle",
animate: 2000,
});
});
I designed the titles using the following CSS codes.
.container .box {
width: 100%;
}
.container .box h2 {
display: block;
text-align: center;
color: #fff;
}
I designed and positioned the percentage text using the CSS below. As you can see in the demo, the text is placed in the middle of each progress bar.
I used text-align: center and position: relative for this, along with font-size: 40px and the color white to make the text a little larger and easier to read.
.container .box .chart {
position: relative;
width: 100%;
height: 100%;
text-align: center;
font-size: 40px;
line-height: 160px;
height: 160px;
color: #fff;
}
I have specified the position of this circular progress bar using the following code. For this, I have used position: absolute with left and top set to zero.
.container .box canvas {
position: absolute;
top: 0;
left: 0;
width: 100%;
}
Now, this design is ready to use in any of your websites or projects. However, I have made it responsive for all devices using some CSS code below. I have used CSS's @media for this, and then specified how it will look at each screen size.
@media (min-width: 420px) and (max-width: 659px) {
.container {
grid-template-columns: repeat(2, 160px);
}
}
@media (min-width: 660px) {
.container {
grid-template-columns: repeat(3, 160px);
}
}
Hopefully from this tutorial, you have learned step by step how I created this circular progress bar using HTML, CSS, and JavaScript.
#html #css #javascript #programming #webdev
1656656195
In this tutorial, we will talk about JavaScript's main thread as well as how the call stack works.
A call stack is a mechanism for an interpreter (like the JavaScript interpreter in a web browser) to keep track of its place in a script that calls multiple functions — what function is currently being run and what functions are called from within that function, etc.
function greeting() {
// [1] Some code here
sayHi();
// [2] Some code here
}
function sayHi() {
return "Hi!";
}
// Invoke the `greeting` function
greeting();
// [3] Some code here
The code above would be executed like this:
1. Ignore all functions until it reaches the greeting() function invocation.
2. Add the greeting() function to the call stack list. Note: Call stack list: - greeting
3. Execute all lines of code inside the greeting() function.
4. Get to the sayHi() function invocation.
5. Add the sayHi() function to the call stack list. Note: Call stack list: - sayHi - greeting
6. Execute all lines of code inside the sayHi() function, until it reaches its end.
7. Return to the line that invoked sayHi() and continue executing the rest of the greeting() function.
8. Delete the sayHi() function from our call stack list. Note: Call stack list: - greeting
9. When everything inside the greeting() function has been executed, return to its invoking line to continue executing the rest of the JS code.
10. Delete the greeting() function from the call stack list. Note: Call stack list: EMPTY
In summary, then, we start with an empty Call Stack. Whenever we invoke a function, it is automatically added to the Call Stack. Once the function has executed all of its code, it is automatically removed from the Call Stack. Ultimately, the Stack is empty again.
In computer science, threading is the ability to run multiple tasks or parts of a program at the same time. Each unit capable of executing code is called a thread.
The main thread is the one used by the browser to handle user events, render and paint the display, and to run the majority of the code that comprises a typical web page or app. Because these things are all happening in one thread, a slow website or app script slows down the entire browser; worse, if a site or app script enters an infinite loop, the entire browser will hang. This results in a frustrating, sluggish (or worse) user experience.
However, modern JavaScript offers ways to create additional threads, each executing independently while possibly communicating between one another. This is done using technologies such as web workers, which can be used to spin off a sub-program which runs concurrently with the main thread in a thread of its own. This allows slow, complex, or long-running tasks to be executed independently of the main thread, preserving the overall performance of the site or app—as well as that of the browser overall. This also allows individuals to take advantage of modern multi-core processors.
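As a rough sketch of the idea (the file name and the work being done are placeholders, not from this article), a worker can be created and handed data like this:
// main.js -- the main thread stays responsive while the worker computes.
const worker = new Worker('worker.js')
worker.onmessage = (event) => {
  console.log('Result from worker:', event.data)
}
worker.postMessage(Array.from({ length: 1_000_000 }, (_, i) => i))

// worker.js -- runs in its own thread:
// self.onmessage = (event) => {
//   const sum = event.data.reduce((total, n) => total + n, 0)
//   self.postMessage(sum)
// }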
A special type of worker, called a service worker, can be created which can be left behind by a site—with the user's permission—to run even when the user isn't currently using that site. This is used to create sites capable of notifying the user when things happen while they're not actively engaged with a site. Such as notifying a user they have received new email even though they're not currently logged into their mail service.
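Registration itself is short; here is a minimal sketch, assuming a worker script is served at the placeholder path /sw.js:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js') // placeholder path to your service worker script
    .then((registration) => console.log('Service worker scope:', registration.scope))
    .catch((error) => console.error('Service worker registration failed:', error))
}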
Overall, these threads are extremely helpful. They help minimize context-switching time, enable more efficient communication, and allow further use of multiprocessor architectures.
#javascript #programming
1656648650
In this tutorial, we will talk about execution context and hoisting
Timestamps
0:00 - Intro
0:44 - What Is Execution Context?
1:39 - The 2 Phases
3:32 - Step By Step Examination
6:12 - Examine Creation In Browser
7:32 - Step-Through Execution
8:52 - Hoisting
11:26 - var vs let & const
JavaScript is an easy-to-learn programming language compared to many of its counterparts. However, a few basic concepts need a bit more attention if you want to understand, debug, and write better code.
In this article, we will learn about two such concepts,
As a beginner to JavaScript, understanding these concepts will help you understand the this keyword, scope, and closure much more comfortably. So enjoy, and keep reading.
Execution Context in JavaScript
In general, a JavaScript source file will have multiple lines of code. As developers, we organize the code into variables, functions, data structures like objects and arrays, and more.
A Lexical Environment determines how and where we write our code physically. Take a look at the code below:
function doSomething() {
var age= 7;
// Some more code
}
In the above code, the variable age is lexically inside the function doSomething.
Please note that our code does not run as-is. It has to be translated by the compiler into computer-understandable byte-code. So the compiler needs to map what is lexically placed where in a meaningful and valid way.
Usually, there will be more than one Lexical Environment in your code. However, not all the environments get executed at once.
The environment that helps the code get executed is called the Execution Context. It is the code that's currently running, and everything surrounding it that helps to run it.
There can be lots of Lexical Environments available, but only the one where code is currently running is managed by the Execution Context.
Check out the image below to understand the difference between a Lexical Environment and Execution Context:
Lexical Environment vs Execution Context
So what exactly happens in the Execution Context? The code gets parsed line-by-line, generates executable byte-code, allocates memory, and executes.
Let's take the same function we have seen above. What do you think may happen when the following line gets executed?
var age = 7;
There are many things happening behind the scenes. That piece of source code goes through the following phases before it finally gets executed:
1. Tokenizing: the source code string is broken into meaningful chunks called Tokens. For example, the code var age = 7; tokenizes into var, age, =, 7 and ;.
2. Parsing: the tokens are assembled into an AST (Abstract Syntax Tree).
3. Code generation: the AST is turned into executable byte-code.
The animated picture below shows the transition of the source code to executable byte-code.
Source Code to Executable Byte-Code
All these things happen in an Execution Context. So the execution context is the environment where a specific portion of the code executes.
There are two types of execution contexts: the Global Execution Context and the Function Execution Context.
And each of the execution contexts has two phases: the creation phase and the execution phase.
Let's take a detailed look at each of them and understand them a bit better.
Whenever we execute JavaScript code, it creates a Global Execution Context (also known as the Base Execution Context). The global execution context has two phases.
In the creation phase, two unique things get created:
1. A global object called window (for client-side JavaScript).
2. A global variable called this.
If there are any variables declared in the code, memory gets allocated for them, and each variable gets initialized with a unique value called undefined. If there is a function in the code, it gets placed directly into the memory. We will learn more about this part in the Hoisting section later.
The code execution starts in this phase. Here, the value assignment of the global variables takes place. Please note that no function gets invoked here as it happens in the Function Execution Context. We will see that in a while.
Let's understand both the phases with a couple of examples.
Create an empty JavaScript file with the name index.js. Now create an HTML file with the following content:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script src='./index.js'></script>
</head>
<body>
I'm loading an empty script
</body>
</html>
Note that we are importing the empty script file into the HTML file using the <script> tag.
Load the HTML file in the browser and open Chrome DevTools (usually with the F12 key) or the equivalent for other browsers. Browse to the console tab, type window, and press enter. You should see the value as the browser's Window object.
The Window object
Now, type the word this and press enter. You should see the same Window object value printed in the browser console.
Value of 'this'
Great, now try to check if window is equal to this. Yes, it is.
window is equal to 'this'
Alright, so what have we learned?
1. A Global Execution Context gets created with the window object and the this variable.
2. At the global level, the window object and this are equal.
Let's now see an example with some code in the JavaScript file. We'll add a variable (blog) with a value assigned to it. We'll also define a function with the name logBlog.
var blog = 'freeCodeCamp';
function logBlog() {
console.log(this.blog);
}
In the creation phase:
window
and the variable this
get created.blog
and the function logBlog
.blog
gets initialized by a special value undefined
. The function logBlog
gets placed in the memory directly.In the execution phase:
freeCodeCamp
is assigned to the variable blog
.When we invoke a function, a Function Execution Context gets created. Let's extend the same example we used above, but this time we will call the function.
var blog = 'freeCodeCamp';
function logBlog() {
console.log(this.blog);
}
// Let us call the function
logBlog();
The function execution context goes through the same phases, creation and execution.
The function execution phase has access to a special value called arguments: the arguments passed to the function. In our example, there are no arguments passed.
Please note that the window object and the this variable created in the Global Execution Context are still accessible in this context.
When a function invokes another function, a new function execution context gets created for the new function call. Each of the function execution contexts determines the scope of the variables used in the respective functions.
Hoisting in JavaScript
I hope you enjoyed learning about Execution Context. Let's move over to another fundamental concept called Hoisting. When I first heard about hoisting, it took me some time to realize that there is something seriously wrong with the name Hoisting.
In the English language, hoisting means raising something using ropes and pulleys. The name may mislead you to think that the JavaScript engine pulls the variables and functions up at a specific code execution phase. Well, this isn't what happens.
So let's understand Hoisting using the concept of the Execution Context.
Please have a look at the example below and guess the output:
console.log(name);
var name;
I'm sure you guessed it already. It's the following:
undefined
However, the question is why? Suppose we used similar code in some other programming language. In that case, we might get an error saying the variable name is not declared and we are trying to access it well before that. The answer lies in the execution context.
In the creation phase:
1. Memory gets allocated for the variable name, and
2. the special value undefined is assigned to the variable.
In the execution phase:
1. The console.log(name) statement will execute.
This mechanism of allocating memory for variables and initializing them with the value undefined at the execution context's creation phase is called Variable Hoisting.
The special value
undefined
means that a variable is declared but no value is assigned.
If we assign the variable a value like this:
name = 'freeCodeCamp';
The execution phase will assign this value to the variable.
Now let's talk about Function Hoisting
. It follows the same pattern as Variable Hoisting
.
The creation phase of the execution context puts the function declaration into the memory, and the execution phase executes it. Please have a look at the example below:
// Invoke the function functionA
functionA();
// Declare the function functionA
function functionA() {
console.log('Function A');
// Invoke the function FunctionB
functionB();
}
// Declare the function FunctionB
function functionB() {
console.log('Function B');
}
The output is the following:
Function A
Function B
functionA
in it.functionB
as well.Putting the entire function declaration ahead into the memory at the creation phase is called Function Hoisting
Since we understand the concept of Hoisting now, let's understand a few ground rules. For example, what happens when we invoke a function expression before it is initialized?
logMe();
var logMe = function() {
console.log('Logging...');
}
The code execution will break, because with a function expression the variable logMe gets hoisted as a variable, not as a function. With variable hoisting, memory gets allocated and initialized with undefined, so the call fails. That's the reason we will get the error:
Error in hoisting a function initialization
Suppose we try to access a variable ahead of its declaration and use the let and const keywords to declare it later. In that case, they will be hoisted but not assigned the default undefined. Accessing such variables will result in a ReferenceError. Here is an example:
console.log(name);
let name;
It will throw the error:
Error with hoisting variable declared with let and const keywords
The same code will run without a problem if we use var instead of let and const. This error is a safeguard mechanism of the JavaScript language: as we have discussed already, accidental hoisting may cause unnecessary trouble.
#javascript #programming
1656645860
A cache object that deletes the least-recently-used items.
Specify a max number of the most recently used items that you want to keep, and this cache will keep that many of the most recently accessed items.
This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no preemptive pruning of expired items by default, but you may set a TTL on the cache or on a single set. If you do so, it will treat expired items as missing, and delete them when fetched. If you are more interested in TTL caching than LRU caching, check out @isaacs/ttlcache.
As of version 7, this is one of the most performant LRU implementations available in JavaScript, and supports a wide diversity of use cases. However, note that using some of the features will necessarily impact performance, by causing the cache to have to do more work. See the "Performance" section below.
npm install lru-cache --save
const LRU = require('lru-cache')
// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does. Most of them can be
// overridden for specific items in get()/set()
const options = {
max: 500,
// for use with tracking overall storage size
maxSize: 5000,
sizeCalculation: (value, key) => {
return 1
},
// for use when you need to clean up something when objects
// are evicted from the cache
dispose: (value, key) => {
freeFromMemoryOrWhatever(value)
},
// how long to live in ms
ttl: 1000 * 60 * 5,
// return stale items before removing from cache?
allowStale: false,
updateAgeOnGet: false,
updateAgeOnHas: false,
// async method to use for cache.fetch(), for
// stale-while-revalidate type of behavior
fetchMethod: async (key, staleValue, { options, signal }) => {}
}
const cache = new LRU(options)
cache.set("key", "value")
cache.get("key") // "value"
// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)
cache.clear() // empty the cache
If you put more stuff in it, then items will fall out.
max
The maximum number (or size) of items that remain in the cache (assuming no TTL pruning or explicit deletions). Note that fewer items may be stored if size calculation is used, and maxSize is exceeded. This must be a positive finite integer.
At least one of max, maxSize, or TTL is required. This must be a positive integer if set.
It is strongly recommended to set a max to prevent unbounded growth of the cache. See "Storage Bounds Safety" below.
maxSize
Set to a positive integer to track the sizes of items added to the cache, and automatically evict items in order to stay below this size. Note that this may result in fewer than max items being stored.
Optional, must be a positive integer if provided. Required if other size tracking features are used.
At least one of max, maxSize, or TTL is required. This must be a positive integer if set.
Even if size tracking is enabled, it is strongly recommended to set a max to prevent unbounded growth of the cache. See "Storage Bounds Safety" below.
sizeCalculation
Function used to calculate the size of stored items. If you're storing strings or buffers, then you probably want to do something like n => n.length. The item is passed as the first argument, and the key is passed as the second argument.
This may be overridden by passing an options object to cache.set().
Requires maxSize to be set.
Deprecated alias: length
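For example (a sketch, not from the README), a cache bounded by total string length rather than item count might look like this:
const LRU = require('lru-cache')
const cache = new LRU({
  maxSize: 50, // total size budget across all entries
  sizeCalculation: (value, key) => value.length, // each entry's size is its string length
})
cache.set('greeting', 'hello world') // size 11
cache.set('page', 'x'.repeat(45)) // size 45, so older entries are evicted to fit
console.log(cache.calculatedSize) // tracked total, never more than 50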
fetchMethod
Function that is used to make background asynchronous fetches. Called with fetchMethod(key, staleValue, { signal, options, context }). May return a Promise.
If fetchMethod is not provided, then cache.fetch(key) is equivalent to Promise.resolve(cache.get(key)).
The signal object is an AbortSignal if that's available in the global object, otherwise it's a pretty close polyfill.
If at any time signal.aborted is set to true, or if the signal.onabort method is called, or if it emits an 'abort' event which you can listen to with addEventListener, then that means that the fetch should be abandoned. This may be passed along to async functions aware of AbortController/AbortSignal behavior.
The options object is a union of the options that may be provided to set() and get(). If they are modified, then that will result in modifying the settings to cache.set() when the value is resolved. For example, a DNS cache may update the TTL based on the value returned from a remote DNS server by changing options.ttl in the fetchMethod.
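As an illustrative sketch (the URL and the use of the global fetch available in browsers and Node 18+ are assumptions, not part of the library), a stale-while-revalidate style cache might look like:
const LRU = require('lru-cache')
const users = new LRU({
  max: 100,
  ttl: 1000 * 60, // entries go stale after one minute
  allowStale: true, // serve the stale value while a refresh is in flight
  fetchMethod: async (key, staleValue, { signal }) => {
    const res = await fetch(`https://example.com/api/users/${key}`, { signal })
    return res.json()
  },
})
async function demo () {
  // Both callers share a single in-flight fetchMethod call for the same key.
  const [a, b] = await Promise.all([users.fetch('123'), users.fetch('123')])
  console.log(a, b) // the same resolved value
}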
fetchContext
Arbitrary data that can be passed to the fetchMethod as the context option.
Note that this will only be relevant when the cache.fetch() call needs to call fetchMethod(). Thus, any data which will meaningfully vary the fetch response needs to be present in the key. This is primarily intended for including x-request-id headers and the like for debugging purposes, which do not affect the fetchMethod() response.
noDeleteOnFetchRejection
If a fetchMethod throws an error or returns a rejected promise, then by default, any existing stale value will be removed from the cache.
If noDeleteOnFetchRejection is set to true, then this behavior is suppressed, and the stale value remains in the cache in the case of a rejected fetchMethod.
This is important in cases where a fetchMethod is only called as a background update while the stale value is returned, when allowStale is used.
This may be set in calls to fetch(), or defaulted on the constructor.
dispose
Function that is called on items when they are dropped from the cache, as this.dispose(value, key, reason).
This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer stored in the cache.
NOTE: It is called before the item has been fully removed from the cache, so if you want to put it right back in, you need to wait until the next tick. If you try to add it back in during the dispose() function call, it will break things in subtle and weird ways.
Unlike several other options, this may not be overridden by passing an option to set(), for performance reasons. If disposal functions may vary between cache entries, then the entire list must be scanned on every cache swap, even if no disposal function is in use.
The reason will be one of the following strings, corresponding to the reason for the item's deletion:
- evict: Item was evicted to make space for a new addition.
- set: Item was overwritten by a new value.
- delete: Item was removed by an explicit cache.delete(key) or by calling cache.clear(), which deletes everything.
The dispose() method is not called for canceled calls to fetchMethod(). If you wish to handle evictions, overwrites, and deletes of in-flight asynchronous fetches, you must use the AbortSignal provided.
Optional, must be a function.
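For instance (a sketch assuming a hypothetical file-descriptor cache), dispose can release resources as entries leave the cache:
const LRU = require('lru-cache')
const fs = require('fs')
const fdCache = new LRU({
  max: 10,
  // value is a file descriptor opened below; reason is 'evict', 'set', or 'delete'
  dispose: (value, key, reason) => fs.closeSync(value),
})
// Hypothetical helper that caches open descriptors by path.
function openCached (path) {
  let fd = fdCache.get(path)
  if (fd === undefined) {
    fd = fs.openSync(path, 'r')
    fdCache.set(path, fd)
  }
  return fd
}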
disposeAfter
The same as dispose, but called after the entry is completely removed and the cache is once again in a clean state.
It is safe to add an item right back into the cache at this point. However, note that it is very easy to inadvertently create infinite recursion in this way.
The disposeAfter() method is not called for canceled calls to fetchMethod(). If you wish to handle evictions, overwrites, and deletes of in-flight asynchronous fetches, you must use the AbortSignal provided.
noDisposeOnSet
Set to true to suppress calling the dispose() function if the entry key is still accessible within the cache.
This may be overridden by passing an options object to cache.set().
Boolean, default false. Only relevant if dispose or disposeAfter options are set.
ttl
Max time to live for items before they are considered stale. Note that stale items are NOT preemptively removed by default, and MAY live in the cache, contributing to its LRU max, long after they have expired.
Also, as this cache is optimized for LRU/MRU operations, some of the staleness/TTL checks will reduce performance, as they will incur overhead by deleting from Map objects rather than simply throwing old Map objects away.
This is not primarily a TTL cache, and does not make strong TTL guarantees. There is no pre-emptive pruning of expired items, but you may set a TTL on the cache, and it will treat expired items as missing when they are fetched, and delete them.
Optional, but must be a positive integer in ms if specified.
This may be overridden by passing an options object to cache.set().
At least one of max, maxSize, or TTL is required. This must be a positive integer if set.
Even if ttl tracking is enabled, it is strongly recommended to set a max to prevent unbounded growth of the cache. See "Storage Bounds Safety" below.
If ttl tracking is enabled, and max and maxSize are not set, and ttlAutopurge is not set, then a warning will be emitted cautioning about the potential for unbounded memory consumption.
Deprecated alias: maxAge
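A small usage sketch (the keys and values are placeholders): a cache-wide TTL with a shorter TTL on a single entry.
const LRU = require('lru-cache')
const sessions = new LRU({
  max: 1000,
  ttl: 1000 * 60 * 5, // five minutes by default
})
sessions.set('token-abc', { user: 'someone' })
// A single entry can carry its own, shorter TTL:
sessions.set('token-tmp', { user: 'guest' }, { ttl: 1000 * 30 })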
noUpdateTTL
Boolean flag to tell the cache to not update the TTL when setting a new value for an existing key (ie, when updating a value rather than inserting a new value). Note that the TTL value is always set (if provided) when adding a new entry into the cache.
This may be passed as an option to cache.set().
Boolean, default false.
ttlResolution
Minimum amount of time in ms in which to check for staleness. Defaults to 1, which means that the current time is checked at most once per millisecond.
Set to 0 to check the current time every time staleness is tested.
Note that setting this to a higher value will improve performance somewhat while using ttl tracking, albeit at the expense of keeping stale items around a bit longer than intended.
ttlAutopurge
Preemptively remove stale items from the cache.
Note that this may significantly degrade performance, especially if the cache is storing a large number of items. It is almost always best to just leave the stale items in the cache, and let them fall out as new items are added.
Note that this means that allowStale is a bit pointless, as stale items will be deleted almost as soon as they expire.
Use with caution!
Boolean, default false
allowStale
By default, if you set ttl, it'll only delete stale items from the cache when you get(key). That is, it's not preemptively pruning items.
If you set allowStale:true, it'll return the stale value as well as deleting it. If you don't set this, then it'll return undefined when you try to get a stale entry.
Note that when a stale entry is fetched, even if it is returned due to allowStale being set, it is removed from the cache immediately. You can immediately put it back in the cache if you wish, thus resetting the TTL.
This may be overridden by passing an options object to cache.get(). The cache.has() method will always return false for stale items.
Boolean, default false, only relevant if ttl is set.
Deprecated alias: stale
noDeleteOnStaleGet
When using time-expiring entries with ttl, by default stale items will be removed from the cache when the key is accessed with cache.get().
Setting noDeleteOnStaleGet to true will cause stale items to remain in the cache until they are explicitly deleted with cache.delete(key), or retrieved with noDeleteOnStaleGet set to false.
This may be overridden by passing an options object to cache.get().
Boolean, default false, only relevant if ttl is set.
updateAgeOnGet
When using time-expiring entries with ttl, setting this to true will make each item's age reset to 0 whenever it is retrieved from cache with get(), causing it to not expire. (It can still fall out of cache based on recency of use, of course.)
This may be overridden by passing an options object to cache.get().
Boolean, default false, only relevant if ttl is set.
updateAgeOnHas
When using time-expiring entries with ttl, setting this to true will make each item's age reset to 0 whenever its presence in the cache is checked with has(), causing it to not expire. (It can still fall out of cache based on recency of use, of course.)
This may be overridden by passing an options object to cache.has().
Boolean, default false, only relevant if ttl is set.
new LRUCache(options)
Create a new LRUCache. All options are documented above, and are on the cache as public members.
cache.max, cache.maxSize, cache.allowStale, cache.noDisposeOnSet, cache.sizeCalculation, cache.dispose, cache.ttl, cache.updateAgeOnGet, cache.updateAgeOnHas
All option names are exposed as public members on the cache object.
These are intended for read access only. Changing them during program operation can cause undefined behavior.
cache.size
The total number of items held in the cache at the current moment.
cache.calculatedSize
The total size of items in cache when using size tracking.
set(key, value, [{ size, sizeCalculation, ttl, noDisposeOnSet, start }])
Add a value to the cache.
Optional options object may contain ttl and sizeCalculation as described above, which default to the settings on the cache object.
If start is provided, then that will set the effective start time for the TTL calculation. Note that this must be a previous value of performance.now() if supported, or a previous value of Date.now() if not.
Options object may also include size, which will prevent calling the sizeCalculation function and just use the specified number if it is a positive integer, and noDisposeOnSet, which will prevent calling a dispose function in the case of overwrites.
Will update the recency of the entry.
Returns the cache object.
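For example (a sketch assuming a size-tracked cache), the per-call options can skip the sizeCalculation function by passing an explicit size:
const LRU = require('lru-cache')
const cache = new LRU({ max: 100, maxSize: 10000, sizeCalculation: () => 1 })
const bigBuffer = Buffer.alloc(1024)
// Use the buffer's byte length as its size and give this entry its own TTL.
cache.set('report', bigBuffer, { size: bigBuffer.length, ttl: 1000 * 60 })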
get(key, { updateAgeOnGet, allowStale } = {}) => value
Return a value from the cache.
Will update the recency of the cache entry found.
If the key is not found, get() will return undefined. This can be confusing when setting values specifically to undefined, as in cache.set(key, undefined). Use cache.has() to determine whether a key is present in the cache at all.
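A quick sketch of the undefined case described above:
const LRU = require('lru-cache')
const cache = new LRU({ max: 10 })
cache.set('maybe', undefined)
cache.get('maybe') // undefined -- but a miss also returns undefined
cache.has('maybe') // true, the key really is present
cache.has('nope') // false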
fetch(key, { …, sizeCalculation, ttl, noDisposeOnSet } = {}) => Promise
If the value is in the cache and not stale, then the returned Promise resolves to the value.
If not in the cache, or beyond its TTL staleness, then fetchMethod(key, staleValue, options) is called, and the value returned will be added to the cache once resolved.
If called with allowStale, and an asynchronous fetch is currently in progress to reload a stale value, then the former stale value will be returned.
Multiple fetches for the same key will only call fetchMethod a single time, and all will be resolved when the value is resolved, even if different options are used.
If fetchMethod is not specified, then this is effectively an alias for Promise.resolve(cache.get(key)).
When the fetch method resolves to a value, if the fetch has not been aborted due to deletion, eviction, or being overwritten, then it is added to the cache using the options provided.
peek(key, { allowStale } = {}) => value
Like get(), but doesn't update recency or delete stale items.
Returns undefined if the item is stale, unless allowStale is set either on the cache or in the options object.
has(key, { updateAgeOnHas } = {}) => Boolean
Check if a key is in the cache, without updating the recency of use. Age is updated if updateAgeOnHas is set to true in either the options or the constructor.
Will return false if the item is stale, even though it is technically in the cache.
delete(key)
Deletes a key out of the cache.
Returns true if the key was deleted, false otherwise.
clear()
Clear the cache entirely, throwing away all values.
Deprecated alias: reset()
keys()
Return a generator yielding the keys in the cache, in order from most recently used to least recently used.
rkeys()
Return a generator yielding the keys in the cache, in order from least recently used to most recently used.
values()
Return a generator yielding the values in the cache, in order from most recently used to least recently used.
rvalues()
Return a generator yielding the values in the cache, in order from least recently used to most recently used.
entries()
Return a generator yielding [key, value] pairs, in order from most recently used to least recently used.
rentries()
Return a generator yielding [key, value] pairs, in order from least recently used to most recently used.
find(fn, [getOptions])
Find a value for which the supplied fn method returns a truthy value, similar to Array.find().
fn is called as fn(value, key, cache).
The optional getOptions are applied to the resulting get() of the item found.
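For example (a sketch with made-up values):
const LRU = require('lru-cache')
const cache = new LRU({ max: 10 })
cache.set('a', { id: 1, active: false })
cache.set('b', { id: 2, active: true })
// Returns the value of the most recently used entry matching the predicate.
const hit = cache.find((value, key) => value.active)
console.log(hit) // { id: 2, active: true }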
dump()
Return an array of [key, entry] objects which can be passed to cache.load().
The start fields are calculated relative to a portable Date.now() timestamp, even if performance.now() is available.
Stale entries are always included in the dump, even if allowStale is false.
Note: this returns an actual array, not a generator, so it can be more easily passed around.
load(entries)
Reset the cache and load in the items in entries in the order listed. Note that the shape of the resulting cache may be different if the same options are not used in both caches.
The start fields are assumed to be calculated relative to a portable Date.now() timestamp, even if performance.now() is available.
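Together, dump() and load() let a cache be serialized and rebuilt elsewhere; a minimal sketch:
const LRU = require('lru-cache')
const original = new LRU({ max: 10 })
original.set('a', 1)
original.set('b', 2)
const snapshot = original.dump() // e.g. persist this as JSON
const restored = new LRU({ max: 10 }) // using the same options keeps the same shape
restored.load(snapshot)
console.log(restored.get('a')) // 1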
purgeStale()
Delete any stale entries. Returns true if anything was removed, false otherwise.
Deprecated alias: prune
getRemainingTTL(key)
Return the number of ms left in the item's TTL. If the item is not in the cache, returns 0. Returns Infinity if the item is in the cache without a defined TTL.
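A short sketch of the possible return values:
const LRU = require('lru-cache')
const withTtl = new LRU({ max: 10, ttl: 1000 })
withTtl.set('k', 'v')
console.log(withTtl.getRemainingTTL('k')) // roughly 1000, counting down
console.log(withTtl.getRemainingTTL('missing')) // 0
const noTtl = new LRU({ max: 10 })
noTtl.set('k', 'v')
console.log(noTtl.getRemainingTTL('k')) // Infinity (no TTL defined)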
forEach(fn, [thisp])
Call the fn function with each set of fn(value, key, cache) in the LRU cache, from most recent to least recently used.
Does not affect recency of use.
If thisp is provided, the function will be called in the this-context of the provided object.
rforEach(fn, [thisp])
Same as cache.forEach(fn, thisp), but in order from least recently used to most recently used.
pop()
Evict the least recently used item, returning its value.
Returns undefined if the cache is empty.
In order to optimize performance as much as possible, "private" members and methods are exposed on the object as normal properties, rather than being accessed via Symbols, private members, or closure variables.
Do not use or rely on these. They will change or be removed without notice. They will cause undefined behavior if used inappropriately. There is no need or reason to ever call them directly.
This documentation is here so that it is especially clear that this not "undocumented" because someone forgot; it is documented, and the documentation is telling you not to do it.
Do not report bugs that stem from using these properties. They will be ignored.
- initializeTTLTracking() Set up the cache for tracking TTLs
- updateItemAge(index) Called when an item age is updated, by internal ID
- setItemTTL(index) Called when an item ttl is updated, by internal ID
- isStale(index) Called to check an item's staleness, by internal ID
- initializeSizeTracking() Set up the cache for tracking item size. Called automatically when a size is specified.
- removeItemSize(index) Updates the internal size calculation when an item is removed or modified, by internal ID
- addItemSize(index) Updates the internal size calculation when an item is added or modified, by internal ID
- indexes() An iterator over the non-stale internal IDs, from most recently to least recently used.
- rindexes() An iterator over the non-stale internal IDs, from least recently to most recently used.
- newIndex() Create a new internal ID, either reusing a deleted ID, evicting the least recently used ID, or walking to the end of the allotted space.
- evict() Evict the least recently used internal ID, returning its ID. Does not do any bounds checking.
- connect(p, n) Connect the p and n internal IDs in the linked list.
- moveToTail(index) Move the specified internal ID to the most recently used position.
- keyMap Map of keys to internal IDs
- keyList List of keys by internal ID
- valList List of values by internal ID
- sizes List of calculated sizes by internal ID
- ttls List of TTL values by internal ID
- starts List of start time values by internal ID
- next Array of "next" pointers by internal ID
- prev Array of "previous" pointers by internal ID
- head Internal ID of least recently used item
- tail Internal ID of most recently used item
- free Stack of deleted internal IDs
This implementation aims to be as flexible as possible, within the limits of safe memory consumption and optimal performance.
At initial object creation, storage is allocated for max
items. If max
is set to zero, then some performance is lost, and item count is unbounded. Either maxSize
or ttl
must be set if max
is not specified.
If maxSize
is set, then this creates a safe limit on the maximum storage consumed, but without the performance benefits of pre-allocation. When maxSize
is set, every item must provide a size, either via the sizeCalculation
method provided to the constructor, or via a size
or sizeCalculation
option provided to cache.set()
. The size of every item must be a positive integer.
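As a rough sketch of what such a configuration might look like (the numbers and the size calculation are arbitrary, and the lru-cache import name is assumed):
const LRU = require('lru-cache')

const sized = new LRU({
  max: 500,      // storage is pre-allocated for up to 500 items
  maxSize: 5000, // total size budget across all cached items
  sizeCalculation: (value, key) => {
    // must return a positive integer for every item
    return JSON.stringify(value).length
  },
})

// a per-item size can also be passed directly to set()
sized.set('big-entry', { lots: 'of data' }, { size: 100 })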
If neither max
nor maxSize
are set, then ttl
tracking must be enabled. Note that, even when tracking item ttl
, items are not preemptively deleted when they become stale, unless ttlAutopurge
is enabled. Instead, they are only purged the next time the key is requested. Thus, if ttlAutopurge
, max
, and maxSize
are all not set, then the cache will potentially grow unbounded.
In this case, a warning is printed to standard error. Future versions may require the use of ttlAutopurge
if max
and maxSize
are not specified.
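For illustration, a TTL-focused configuration might look like the sketch below (the five-minute TTL is arbitrary, and the same assumed lru-cache import is used):
const LRU = require('lru-cache')

// without ttlAutopurge (or a max/maxSize bound), stale items linger until
// their keys are requested again, so the cache can grow without limit
const ttlCache = new LRU({
  ttl: 1000 * 60 * 5, // five minutes
  ttlAutopurge: true, // proactively delete entries as they expire
})

ttlCache.set('session:123', { user: 'someone' })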
If you truly wish to use a cache that is bound only by TTL expiration, consider using a Map
object, and calling setTimeout
to delete entries when they expire. It will perform much better than an LRU cache.
Here is an implementation you may use, under the same license as this package:
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(k, setTimeout(() => cache.delete(k), ttl))
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  }
}
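A quick usage sketch of the snippet above (the key, value, and TTL are made up):
cache.set('greeting', 'hello', 5000) // keep for five seconds
cache.get('greeting')                // 'hello' while the timer is pending
cache.has('greeting')                // true
// once the timeout fires, the entry is deleted automatically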
If that isn't to your liking, check out @isaacs/ttlcache.
As of January 2022, version 7 of this library is one of the most performant LRU cache implementations in JavaScript.
Benchmarks can be extremely difficult to get right. In particular, the performance of set/get/delete operations on objects will vary wildly depending on the type of key used. V8 is highly optimized for objects with keys that are short strings, especially integer numeric strings. Thus any benchmark which tests solely using numbers as keys will tend to find that an object-based approach performs the best.
Note that coercing anything to strings to use as object keys is unsafe, unless you can be 100% certain that no other type of value will be used. For example:
const myCache = {}
const set = (k, v) => myCache[k] = v
const get = (k) => myCache[k]
set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
Also beware of "Just So" stories regarding performance. Garbage collection of large (especially: deep) object graphs can be incredibly costly, with several "tipping points" where it increases exponentially. As a result, putting that off until later can make it much worse, and less predictable. If a library performs well, but only in a scenario where the object graph is kept shallow, then that won't help you if you are using large objects as keys.
In general, when attempting to use a library to improve performance (such as a cache like this one), it's best to choose an option that will perform well in the sorts of scenarios where you'll actually use it.
This library is optimized for repeated gets and minimizing eviction time, since that is the expected need of a LRU. Set operations are somewhat slower on average than a few other options, in part because of that optimization. It is assumed that you'll be caching some costly operation, ideally as rarely as possible, so optimizing set over get would be unwise.
If performance matters to you:
If your keys will be null, objects, or some mix of types, or if you aren't sure, then this library will work well for you.
Do not use a dispose function, size tracking, or especially ttl behavior unless absolutely needed. These features are convenient, and necessary in some use cases, and every attempt has been made to make the performance impact minimal, but it isn't nothing.
This library changed to a different algorithm and internal data structure in version 7, yielding significantly better performance, albeit with some subtle changes as a result.
If you were relying on the internals of LRUCache in version 6 or before, it probably will not work in version 7 and above.
For more info, see the change log.
Download Details:
Author: isaacs
Source Code: https://github.com/isaacs/node-lru-cache
License: ISC license
#node #javascript #programming
1656645026
A Laravel package to help track user onboarding steps
This package lets you set up an onboarding flow for your application's users.
Here's an example of how it's set up:
use App\User;
use Spatie\Onboard\Facades\Onboard;
Onboard::addStep('Complete Profile')
->link('/profile')
->cta('Complete')
->completeIf(function (User $user) {
return $user->profile->isComplete();
});
Onboard::addStep('Create Your First Post')
->link('/post/create')
->cta('Create Post')
->completeIf(function (User $user) {
return $user->posts->count() > 0;
});
You can then render this onboarding flow however you want in your templates:
@if (auth()->user()->onboarding()->inProgress())
<div>
@foreach (auth()->user()->onboarding()->steps as $step)
<span>
@if($step->complete())
<i class="fa fa-check-square-o fa-fw"></i>
<s>{{ $loop->iteration }}. {{ $step->title }}</s>
@else
<i class="fa fa-square-o fa-fw"></i>
{{ $loop->iteration }}. {{ $step->title }}
@endif
</span>
<a href="{{ $step->link }}" {{ $step->complete() ? 'disabled' : '' }}>
{{ $step->cta }}
</a>
@endforeach
</div>
@endif
You can install the package via composer:
composer require spatie/laravel-onboard
Add the Spatie\Onboard\Concerns\GetsOnboarded
trait and Spatie\Onboard\Concerns\Onboardable
interface to any model or class in your app, for example the User
model:
class User extends Model implements \Spatie\Onboard\Concerns\Onboardable
{
use \Spatie\Onboard\Concerns\GetsOnboarded;
...
Configure your steps in your App\Providers\AppServiceProvider.php file:
use App\User;
use Spatie\Onboard\Facades\Onboard;
class AppServiceProvider extends ServiceProvider
{
// ...
public function boot()
{
Onboard::addStep('Complete Profile')
->link('/profile')
->cta('Complete')
/**
* The completeIf will pass the class that you've added the
* interface & trait to. You can use Laravel's dependency
* injection here to inject anything else as well.
*/
->completeIf(function (User $model) {
return $model->profile->isComplete();
});
Onboard::addStep('Create Your First Post')
->link('/post/create')
->cta('Create Post')
->completeIf(function (User $model) {
return $model->posts->count() > 0;
});
The variable name passed to the completeIf
callback must be $model
.
Now you can access these steps along with their state wherever you like. Here is an example blade template:
@if (auth()->user()->onboarding()->inProgress())
<div>
@foreach (auth()->user()->onboarding()->steps as $step)
<span>
@if($step->complete())
<i class="fa fa-check-square-o fa-fw"></i>
<s>{{ $loop->iteration }}. {{ $step->title }}</s>
@else
<i class="fa fa-square-o fa-fw"></i>
{{ $loop->iteration }}. {{ $step->title }}
@endif
</span>
<a href="{{ $step->link }}" {{ $step->complete() ? 'disabled' : '' }}>
{{ $step->cta }}
</a>
@endforeach
</div>
@endif
Check out all the available features below:
/** @var \Spatie\Onboard\OnboardingManager $onboarding **/
$onboarding = Auth::user()->onboarding();
$onboarding->inProgress();
$onboarding->percentageCompleted();
$onboarding->finished();
$onboarding->steps()->each(function($step) {
$step->title;
$step->cta;
$step->link;
$step->complete();
$step->incomplete();
});
Excluding steps based on a condition:
Onboard::addStep('Excluded Step')
->excludeIf(function (User $model) {
return $model->isAdmin();
});
Defining custom attributes and accessing them:
// Defining the attributes
Onboard::addStep('Step w/ custom attributes')
->attributes([
'name' => 'Waldo',
'shirt_color' => 'Red & White',
]);
// Accessing them
$step->name;
$step->shirt_color;
If you want to ensure that your User is redirected to the next unfinished onboarding step, whenever they access your web application, you can use the following middleware as a starting point:
<?php
namespace App\Http\Middleware;
use Auth;
use Closure;
class RedirectToUnfinishedOnboardingStep
{
    public function handle($request, Closure $next)
    {
        if (auth()->user()->onboarding()->inProgress()) {
            return redirect()->to(
                auth()->user()->onboarding()->nextUnfinishedStep()->link
            );
        }
        return $next($request);
    }
}
Quick tip: Don't add this middleware to routes that update the state of the onboarding steps; otherwise, your users will not be able to progress because they will keep being redirected back to the onboarding step.
You can run the tests with:
composer test
Download Details:
Author: spatie
Source Code: https://github.com/spatie/laravel-onboard
License: MIT
#laravel #php #programming
1656643500
How to create and use classes, as well as some common problems that you may encounter when working with them.
JavaScript is a powerful language that offers many features for developers. One of these features is classes. Classes allow you to create objects that inherit properties from other objects. In this blog post, we will cover how to create and use classes, as well as some common problems that you may encounter when working with them.
Read more at https://javascript.plainenglish.io
#javascript #programming
1656643272
Vite is the new front-end tooling for Laravel. Let's see how we can move a given Laravel project from Webpack to Vite.
Vite is the Next Generation Frontend Tooling,
which is Laravel's default from now on.
The Laravel documentation contains an entire section explaining how it works and how to use it. But most of us are more interested in using it in an existing project.
So that's what this post is for.
Note: This article concentrates on migrating a basic Laravel application. There will be differences if you use different tools like React or Vue.
Before switching to a new tool, it is a good idea to think about why
you want to do that. The fact that Vite is now Laravel's default front-end bundler is reason enough for me, but let's also talk about some details.
The main benefit is the overall improved performance.
Vite is faster in starting a new dev server, bundling assets, and updating them than other tools like webpack.
Vite is leveraging new advancements in the ecosystem, like the availability of native ES modules in the browser and the rise of JavaScript tools written in compile-to-native languages. There is a detailed explanation in the Why Vite section of the official docs.
Make sure to have the latest version of Laravel, which today is 9.19
, to use the new Vite tooling. Then we need to install two new dependencies:
npm install --save-dev vite laravel-vite-plugin
Also, the scripts
section of our package.json
file will change due to the new Vite scripts.
"scripts": {
"dev": "vite",
"build": "vite build"
},
That's all we need for the scripts.
Since Vite is a replacement for webpack, we can remove the laravel-mix
dependency and delete the webpack.mix.js
file from our application.
npm remove laravel-mix && rm webpack.mix.js
Your package.json
file will now look something like this:
"private": true,
"scripts": {
"dev": "vite",
"build": "vite build"
},
"devDependencies": {
"axios": "^0.25",
"laravel-vite-plugin": "^0.2.1",
"lodash": "^4.17.19",
"postcss": "^8.4.14",
"postcss-import": "^14.0.1",
"vite": "^2.9.11",
}
}
Now we need to set up Vite. Therefore create a new vite.config.js
file in the root of your Laravel application.
import laravel from 'laravel-vite-plugin'
import {defineConfig} from 'vite'
export default defineConfig({
  plugins: [
    laravel([
      'resources/css/app.css',
      'resources/js/app.js',
    ]),
  ],
});
This is where we use the vite
and laravel-vite-plugin
packages, and we also define our asset paths.
In the head
of your template files, we have to load the assets through the new @vite
blade directive.
@vite(['resources/css/app.css', 'resources/js/app.js'])
You no longer need to use the mix()
helper or load the assets manually.
To run Vite, use the npm run dev
script we defined, which simply runs vite.
It will compile your assets lightning fast! To make your assets production-ready, you can use the new npm run build
script to version and bundle your assets.
In the background, Vite compiles the new assets to the public/build
directory. This means we can delete the old asset folders, public/css
and public/js,
in my case.
Vite only supports ES modules, so require
doesn't work anymore, and you need to import
modules now in your scripts.
Example - not working with Vite anymore:
require('my-package');
Example - working with Vite:
import myPackage from 'my-package';
If you use Tailwind CSS in your Laravel project, your styles won't work at first. That's because we need PostCSS
for Tailwind CSS.
Create a postcss.config.js
file, if you haven't already, and define two plugins there:
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};
You also need to have those packages installed, but if you already use Tailwind CSS, you should have them already.
Vite will look for the PostCSS configuration and automatically apply it if given. That's already it. Your Tailwind CSS styles should now work too.
You can expose environment variables to your JavaScript through your .env
file by prefixing them with VITE_.
For example, the MIX_-prefixed Laravel Pusher
variables are no longer of any use to you:
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
If you'd like to use them in JavaScript, rename them:
VITE_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
VITE_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
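Once renamed, the values can be read in your JavaScript via import.meta.env; a minimal sketch (the bootstrap.js path is just an example):
// e.g. in resources/js/bootstrap.js
const pusherKey = import.meta.env.VITE_PUSHER_APP_KEY
const pusherCluster = import.meta.env.VITE_PUSHER_APP_CLUSTER

console.log(pusherKey, pusherCluster)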
Laravel Shift is an excellent service known for migrating your Laravel apps to a newer version with just one click.
A new Shift was also released today that migrates Laravel Mix to Vite.
One of Vite's most prominent features is Hot Module Replacement
for Vue.js and React.
But it's also great for refreshing the browser after file changes. By default, this does not work with Blade
files, but Freek and the Spatie team came up with a working solution.
Just add a custom plugin named blade
to your Vite configuration:
import laravel from 'laravel-vite-plugin'
import {defineConfig} from 'vite'
export default defineConfig({
  plugins: [
    laravel([
      'resources/css/app.css',
      'resources/js/app.js',
    ]),
    {
      name: 'blade',
      handleHotUpdate({ file, server }) {
        if (file.endsWith('.blade.php')) {
          server.ws.send({
            type: 'full-reload',
            path: '*',
          });
        }
      },
    }
  ],
});
This is enough to refresh your browser after a Blade file changes. And again, this happens lightning fast!
Some browsers, like Brave, block Vite's requests by default. You will see those errors in your browser console. This can prevent Vite from compiling your assets, so make sure to check your browser console for blocked-request errors.
You may also run into an issue if you are running Laravel Valet
locally with HTTPS-secured sites. But, again, Freek has already found a solution.
It may also help to see how others have already successfully moved their Laravel projects to Vite.
I like Vite a lot, and I hope this guide can help you switch from an existing Laravel project faster.
Original article source at https://christoph-rumpel.com
#laravel #webpack #vite #php #programming
1656641081
How to create and use classes, as well as some common problems that you may encounter when working with them.
JavaScript is a powerful language that offers many features for developers. One of these features is classes. Classes allow you to create objects that inherit properties from other objects. In this blog post, we will cover how to create and use classes, as well as some common problems that you may encounter when working with them.
Before ES6, JavaScript had no class-based inheritance model. If you wanted to create an object that inherited properties from another object, you had to use prototype-based inheritance. Prototype-based inheritance is a bit more complex than class-based inheritance and can be hard to understand at first.
To make object-oriented programming easier for developers, ES6 introduced classes. Classes are syntactic sugar on top of prototype-based inheritance, and they make creating and working with objects much easier. Although you don't strictly need to understand prototype-based inheritance to use classes, it is important to recognize that class syntax is simply syntactic sugar abstracting away the underlying prototype-based inheritance model.
If you have experience with an object-oriented programming language such as Java or C++, creating classes in JavaScript will feel familiar. The class syntax was created not only to abstract away some of the more complex ideas of prototype-based inheritance, but also to make JavaScript feel more like other object-oriented languages.
Creating a class is simple. You use the class keyword, followed by the name of the class. By convention, class names are capitalized. Then we can add our constructor. The constructor is a special function that is called when an instance of a class is created. It is used to initialize the properties of our instance.
class Book {
  constructor(title, author) {
    this.title = title;
    this.author = author;
  }
}
In the example above, we created a Book class. Our Book class has two properties: title and author. We initialize these properties in our constructor. When we create an instance of our Book class, the constructor is called, and the title and author properties are set to the values we pass in.
In our constructor, the this
keyword refers to the newly created instance.
Now that we have created our Book class, let's create an instance of it.
const harryPotterBook = new Book("Harry Potter and the Sorcerer's Stone", "J.K. Rowling");
In the example above, we created an instance of our Book class. To create an instance of a class, we use the new keyword. We passed two arguments to our constructor: "Harry Potter and the Sorcerer's Stone" for the title and "J.K. Rowling" for the author. These arguments are passed to our constructor and used to initialize the title and author properties of our book instance.
console.log(harryPotterBook)
/* Book {
  title: "Harry Potter and the Sorcerer's Stone",
  author: 'J.K. Rowling'
}
*/
In addition to properties, classes can also have methods. Methods are functions associated with an instance of the class. They can be used to perform actions or compute values related to the instance.
Let's add a method to our Book class that outputs a description of the book.
class Book {
  constructor(title, author) {
    this.title = title;
    this.author = author;
  }

  getDescription() {
    return `${this.title} was written by ${this.author}.`;
  }
}
The this
keyword refers to the instance the method is being called on. In this case, this
refers to our harryPotterBook instance. Now we can call our getDescription method on our book instance.
const harryPotterBook = new Book("Harry Potter and the Sorcerer's Stone", "J.K. Rowling");
console.log(harryPotterBook.getDescription())
// Harry Potter and the Sorcerer's Stone was written by J.K. Rowling.
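Classes can also inherit from other classes. As a brief illustrative sketch (the Ebook subclass is made up for this example), subclassing uses the extends and super keywords:
class Book {
  constructor(title, author) {
    this.title = title;
    this.author = author;
  }

  getDescription() {
    return `${this.title} was written by ${this.author}.`;
  }
}

// Ebook inherits title, author, and getDescription() from Book
class Ebook extends Book {
  constructor(title, author, fileSizeMb) {
    super(title, author); // call the parent constructor first
    this.fileSizeMb = fileSizeMb;
  }
}

const digitalCopy = new Ebook("Harry Potter and the Sorcerer's Stone", "J.K. Rowling", 2);
console.log(digitalCopy.getDescription());
// Harry Potter and the Sorcerer's Stone was written by J.K. Rowling.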
Classes are a powerful tool that you can use in your JavaScript programs. They allow developers to create reusable code, a key component of object-oriented programming. Classes in JavaScript make it easy to create and work with objects, and they abstract away some of the more complex ideas of prototype-based inheritance.
I hope you learned something from this article! Good luck with your coding interviews!
Original article source at https://javascript.plainenglish.io
#javascript #programming
1656597180
Soss
Soss is a library for probabilistic programming.
Let's look at an example. First we'll load things:
using MeasureTheory
using Soss
MeasureTheory.jl is designed specifically with PPLs like Soss in mind, though you can also use Distributions.jl.
Now for a model. Here's a linear regression:
m = @model x begin
α ~ Lebesgue(ℝ)
β ~ Normal()
σ ~ Exponential()
y ~ For(x) do xj
Normal(α + β * xj, σ)
end
return y
end
Next we'll generate some fake data to work with. For x
-values, let's use
x = randn(20)
Now loosely speaking, Lebesgue(ℝ)
is uniform over the real numbers, so we can't really sample from it. Instead, let's transform the model and make α
an argument:
julia> predα = predictive(m, :α)
@model (x, α) begin
σ ~ Exponential()
β ~ Normal()
y ~ For(x) do xj
Normal(α + β * xj, σ)
end
return y
end
Now we can do
julia> y = rand(predα(x=x,α=10.0))
20-element Vector{Float64}:
10.554133456468438
9.378065258831002
12.873667041657287
8.940799408080496
10.737189595204965
9.500536439014208
11.327606120726893
10.899892855024445
10.18488773139243
10.386969795947177
10.382195272387214
8.358407507910297
10.727173015711768
10.452311211064654
11.076232496702387
11.362009520020141
9.539433052406448
10.61851691333643
11.586170856832645
9.197496058151618
Now for inference! Let's use DynamicHMC
, which we have wrapped in SampleChainsDynamicHMC
.
julia> using SampleChainsDynamicHMC
[ Info: Precompiling SampleChainsDynamicHMC [6d9fd711-e8b2-4778-9c70-c1dfb499d4c4]
julia> post = sample(m(x=x) | (y=y,), dynamichmc())
4000-element MultiChain with 4 chains and schema (σ = Float64, β = Float64, α = Float64)
(σ = 1.0±0.15, β = 0.503±0.26, α = 10.2±0.25)
First, a fine point: When people say "the Turing PPL" they usually mean what's technically called "DynamicPPL".
Soss and DynamicPPL are both maturing and becoming more complete, so the above will change over time. It's also worth noting that we (the Turing team and I) hope to move toward a natural way of using these systems together to arrive at the best of both.
I'm glad you asked! Lots of things:
For more details, please see the documentation.
Author: cscherrer
Source Code: https://github.com/cscherrer/Soss.jl
License: MIT license
1656571868
The next-gen web framework.
Fresh is a next generation web framework, built for speed, reliability, and simplicity, with a number of standout features.
The documentation is available on fresh.deno.dev.
You can scaffold a new project by running the Fresh init script. To scaffold a project in the my-project
folder, run the following:
deno run -A -r https://fresh.deno.dev my-project
To now start the project, use deno task
:
deno task start
To deploy the script to Deno Deploy, push your project to GitHub, create a Deno Deploy project, and link it to the main.ts
file in the root of the created repository.
For a more in-depth getting started guide, visit the Getting Started page in the Fresh docs.
Download Details:
Author: denoland
Source Code: https://github.com/denoland/fresh
License: MIT
#deno #webdev #programming