
Why You Don't Need MongoJS Library for NodeJS Application

Learn the proper way to connect to a MongoDB database using NodeJS.

The mongojs npm package is a NodeJS module that wraps the MongoDB native driver. Its purpose is to help JavaScript developers connect their NodeJS application to their MongoDB instance.

But with the release of the official MongoDB driver for NodeJS, the mongojs package is now obsolete, and it’s recommended that you use the mongodb package instead.

The mongodb NodeJS driver comes with asynchronous JavaScript API functions that you can use to connect to and manipulate your MongoDB data.

You need to install the mongodb package to your project first:

npm install mongodb
# or
yarn add mongodb

Once installed, you need to import the MongoClient class from the library as follows:

const { MongoClient } = require("mongodb");

Then, you can connect to your MongoDB database by creating a new instance of the MongoClient class and passing the URI as its argument:

const uri = "mongodb+srv://<Your MongoDB atlas cluster scheme>";

const client = new MongoClient(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

If you’re installing MongoDB locally, then you can connect by passing the MongoDB localhost URI scheme as shown below:

const client = new MongoClient("mongodb://localhost:27017", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

Then, connect to the database using the client.connect() method as shown below:

client.connect(async (err) => {
  if (err) {
    console.log(err);
    return; // stop here if the connection failed
  }
  const collection = client.db("test").collection("animals");
  // perform actions on the collection object
});

Almost all MongoDB API methods will return a Promise object when you don’t pass a callback function to the methods.

This means you can use either the callback pattern, the promise chain pattern, or the async/await pattern when manipulating your collection’s data.

See also: Wait for JavaScript functions to finish using callbacks and promises
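If you prefer the promise chain pattern, here’s a minimal sketch of the same connection flow, reusing the client instance created above:

client
  .connect()
  .then(() => {
    // connected; grab the collection and work with it
    const collection = client.db("test").collection("animals");
    return collection.findOne({});
  })
  .then((doc) => console.log("First document =>", doc))
  .catch((err) => console.log(`connect error: ${err}`));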

Here’s how to use the connect() method using the async/await pattern. Don’t forget to wrap the code below inside an async function:

try {
  await client.connect();
  console.log(`connect OK`);

  const collection = client.db("test").collection("animals");
  // perform actions on the collection object
} catch (error) {
  console.log(`connect error: ${error}`);
}

Once connected, you can choose the database you want to work with by calling the client.db() method and specifying the database name as its argument:

const db = client.db("test");

You can also choose the collection you want to manipulate by calling the db.collection() method and specifying the collection name as its argument:

const db = client.db("test");
const collection = db.collection("animals");

The db() and collection() method calls can be chained for brevity.
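For example, the two calls above collapse into a single line:

const collection = client.db("test").collection("animals");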

Once a connection has been established to the MongoDB instance, you can manipulate the database using methods provided by MongoDB library.

Inserting documents to a MongoDB collection

To insert a document into your MongoDB collection, you can use either the insertOne() or the insertMany() method.

The insertOne() method is used to insert a single document into your collection object. Here’s an example of the method in action:

await collection.insertOne({
  name: "Barnes",
  species: "Horse",
  age: 5,
});

Or when using callback pattern:

collection.insertOne(
  {
    name: "Barnes",
    species: "Horse",
    age: 5,
  },
  (err, result) => {
    // handle error/ result...
  }
);

When using the insertMany() method, you need to wrap all your document objects in a single array:

await collection.insertMany([
  {
    name: "Barnes",
    species: "Horse",
    age: 5,
  },
  {
    name: "Jack",
    species: "Dog",
    age: 4,
  },
]);

Next, let’s look at finding documents in your collection.

Finding documents in a MongoDB collection

To find documents inside your MongoDB collection, the library provides you with the findOne() and find() methods.

Here’s how to use them:

const doc = await collection.findOne({ name: "Barnes" });
console.log('The document with { name: "Barnes" } =>', doc);

// or

const filteredDocs = await collection.find({ species: "Horse" }).toArray();
console.log("Found documents =>", filteredDocs);

To retrieve all documents available in your collection, you can pass an empty object {} as the parameter to your find() method call:

const docs = await collection.find({}).toArray();
console.log("All documents =>", docs);

To query by comparison, you need to use the built-in Comparison Query Operators provided by MongoDB.

For example, the following find() call only retrieves documents with an age value greater than 2:

const filteredDocs = await collection.find({ age: { $gt: 2 } }).toArray();
console.log("Found documents =>", filteredDocs);

Next, let’s look at how you can update the documents.

Updating documents in a MongoDB collection

You can update documents inside your collection by using the updateOne() or updateMany() method.

The update query starts with the $set operator as shown below:

{ $set: { <field1>: <value1>, <field2>: <value2>, ... } }

For example, the code below updates a document with name:"Barnes" to have age: 10:

const updatedDoc = await collection.updateOne(
  { name: "Barnes" },
  { $set: { age: 10 } }
);
console.log("Updated document =>", updatedDoc);

When there is more than one document that matches the query, the updateOne() method only updates the first document returned by the query (commonly the first document inserted into the collection).

To update all documents that match the query, you need to use the updateMany() method:

const updatedDocs = await collection.updateMany(
  { species: "Dog" },
  { $set: { age: 3 } }
);
console.log("Set all Dog age to 3 =>", updatedDocs);

Now let’s see how you can delete documents from the collection.

Deleting documents in a MongoDB collection

The deleteOne() and deleteMany() methods are used to remove document(s) from your collection.

You need to pass the query for the delete methods as shown below:

const deletedDoc = await collection.deleteOne({ name: "Barnes" });
console.log('Deleted document with {name: "Barnes"} =>', deletedDoc);

// or

const deletedDocs = await collection.deleteMany({ species: "Horse" })
console.log("Deleted horse documents =>", deletedDocs);

Closing your MongoDB connection

Finally, when you’re done manipulating the database entries, you need to close the database connection properly to free up computing resources and prevent connection leaks.

Just call the client.close() method as shown below:

client.close();
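In an async/await flow, a common pattern is to close the client in a finally block so the connection is released even when an operation throws. Here’s a minimal sketch reusing the client from earlier:

try {
  await client.connect();
  const collection = client.db("test").collection("animals");
  // perform actions on the collection object
} catch (error) {
  console.log(`error: ${error}`);
} finally {
  // runs whether the operations above succeeded or failed
  await client.close();
}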

And that will be all for this tutorial.

For more information, visit the official MongoDB NodeJS driver documentation and select the API documentation for your current version from the right side menu:

MongoDB API documentation

Thank you for reading 😉

Original article source at: https://sebhastian.com/


Build a Cross-platform App with the React Native Paper Library

Learn how to build a cross-platform app with Material Design elements from the React Native Paper library. Use React Native Paper to set up a starter app with some of the most prominent and recognizable Material Design features.

If you’re building a cross-platform mobile app, it’s a good idea to base your app’s UI and UX on Material Design, Google’s own design language, which it uses in all its mobile apps.

Many of the most popular mobile apps use Material Design concepts heavily, including WhatsApp, Uber, Lyft, Google Maps, and more. Therefore, your users are already familiar with the look and feel of Material Design, and they will quickly and easily understand how to use your app if you adhere to the same design language.

React Native Paper is the heavy hitter of Material Design component libraries for React Native. In this article, we’ll focus on using React Native Paper to set up a starter app with some of the most prominent and recognizable Material Design features, including a hamburger menu, floating action button (FAB), contextual action bar, and drawer navigation. Let’s get started!

  • React Native demo app
  • Setting up React Native
  • Initial screens
  • Hamburger menu and drawer navigation
  • Floating action button
  • Contextual action bar
  • Theming with Material Design

React Native demo app

We’ll build the starter app in the gif below. As you read through this guide, you can reference the full code for this demo in the material-ui-in-react-native GitHub repo:

[Gif] MUI in React Native starter app demo

Setting up React Native

First, I’ll initialize my React Native app using Expo. Run the following command in your terminal:

npx create-expo-app material-ui-in-react-native --template expo-template-blank-typescript
cd material-ui-in-react-native

To install the React Native Paper package, run the following command in your terminal:

#npm
npm install react-native-paper
#yarn
yarn add react-native-paper 

To enable tree shaking and reduce the bundle size of React Native Paper, follow these additional installation instructions.
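At the time of writing, those instructions amount to adding the react-native-paper/babel plugin to the production environment of your babel.config.js. Here’s a sketch for the Expo template used in this guide (the babel-preset-expo preset comes from that template):

// babel.config.js
module.exports = {
  presets: ['babel-preset-expo'],
  env: {
    production: {
      // strips unused react-native-paper modules from production builds
      plugins: ['react-native-paper/babel'],
    },
  },
};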

I’m also adding React Navigation to this project, and I recommend that you use it as well. React Navigation is the most popular navigation library for React Native, and there is more support for running it alongside React Native Paper compared to other navigation libraries.

You should follow the installation instructions for React Navigation since they’re slightly different depending on whether you use Expo or plain React Native.
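For an Expo project like this one, the setup typically boils down to installing the peer dependencies below; treat this as a sketch and defer to the React Navigation docs for your exact version:

npx expo install react-native-screens react-native-safe-area-context
# the drawer navigator added later also relies on these
npx expo install react-native-gesture-handler react-native-reanimated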

Initial screens

Create two files in your app’s main directory called MyFriends.tsx and Profile.tsx. If you want to review the styles used, you can reference the GitHub repo:

import React from 'react';
import {View} from 'react-native';
import {Title} from 'react-native-paper';
import base from './styles/base';

interface IMyFriendsProps {}

const MyFriends: React.FunctionComponent<IMyFriendsProps> = (props) => {
  return (
    <View style={base.centered}>
      <Title>MyFriends</Title>
    </View>
  );
};
export default MyFriends;


import React from 'react';
import {View} from 'react-native';
import {Title} from 'react-native-paper';
import base from './styles/base';

interface IProfileProps {}

const Profile: React.FunctionComponent<IProfileProps> = (props) => {
  return (
    <View style={base.centered}>
      <Title>Profile</Title>
    </View>
  );
};
export default Profile;

In this guide, I’ll link these screens to each other using both a navigation drawer and a hamburger menu and add MUI components to each of them.

Hamburger menu and drawer navigation

Since Material Design promotes using a navigation drawer, I’ll use one to make the My Friends and Profile screens navigable to and from each other. First, I’ll add React Navigation’s drawer library:

yarn add @react-navigation/native @react-navigation/drawer

Now, I’ll add the following code into my App.tsx file to enable the drawer navigation:

import React from 'react';
import {createDrawerNavigator} from '@react-navigation/drawer';
import {NavigationContainer} from '@react-navigation/native';
import {StatusBar} from 'expo-status-bar';
import {SafeAreaProvider} from 'react-native-safe-area-context';
import MyFriends from './MyFriends';
import Profile from './Profile';

export default function App() {
  const Drawer = createDrawerNavigator();
  return (
    <SafeAreaProvider>
        <NavigationContainer>
          <Drawer.Navigator>
            <Drawer.Screen name='My Friends' component={MyFriends} />
            <Drawer.Screen name='Profile' component={Profile} />
          </Drawer.Navigator>
        </NavigationContainer>
      <StatusBar style='auto' />
    </SafeAreaProvider>
  );
}

This drawer also needs a button to open it. It should look like the classic hamburger icon, and it should open the navigation drawer when pressed. The code for your button might look like the code below in components/MenuIcon.tsx:

import React from 'react';
import {IconButton} from 'react-native-paper';
import {DrawerActions, useNavigation} from '@react-navigation/native';
import {useCallback} from 'react';

export default function MenuIcon() {
  const navigation = useNavigation();
  const openDrawer = useCallback(() => {
    navigation.dispatch(DrawerActions.openDrawer());
  }, []);

  return <IconButton icon='menu' size={24} onPress={openDrawer} />;
}

There are a few things to notice here. For one, we’ll use React Navigation’s useNavigation Hook to execute navigation actions, from changing screens to opening drawers.

React Native Paper’s <IconButton> supports all the Material Design icons by name and optionally supports any React node that you want to pass, meaning you can add in any desired icon from any third-party library.

Now, I’ll add <MenuIcon> to my navigation drawer. To do so, I’ll replace this code in App.tsx:

  <Drawer.Navigator>
    ...
  </Drawer.Navigator>

with the following:

import MenuIcon from './components/MenuIcon.tsx';
...
  <Drawer.Navigator
    screenOptions={{headerShown: true, headerLeft: () => <MenuIcon />}}
  >
    ...
  </Drawer.Navigator>

Lastly, I can customize my navigation drawer using the drawerContent prop of the same <Drawer.Navigator> component I just altered. I’ll show an example that adds a header image to the top of the drawer, but feel free to customize whatever you want to put in the drawer. Add the code below in components/MenuContent.tsx:

import React from 'react';
import {
  DrawerContentComponentProps,
  DrawerContentScrollView,
  DrawerItemList,
} from '@react-navigation/drawer';
import {Image} from 'react-native';
const MenuContent: React.FunctionComponent<DrawerContentComponentProps> = (
  props
) => {
  return (
    <DrawerContentScrollView {...props}>
      <Image
        resizeMode='cover'
        style={{width: '100%', height: 140}}
        source={require('../assets/drawerHeaderImage.jpg')}
      />
      <DrawerItemList {...props} />
    </DrawerContentScrollView>
  );
};
export default MenuContent;

Now, I’ll pass <MenuContent> into <Drawer.Navigator>. To do so, I’ll change this code in App.tsx:

import MenuIcon from './components/MenuIcon.tsx';
...
  <Drawer.Navigator
    screenOptions={{headerShown: true, headerLeft: () => <MenuIcon />}}
  >
    ...
  </Drawer.Navigator>

to the following:

import MenuIcon from './components/MenuIcon.tsx';
import MenuContent from './components/MenuContent.tsx';
...
  <Drawer.Navigator
    screenOptions={{headerShown: true, headerLeft: () => <MenuIcon />}}
    drawerContent={(props) => <MenuContent {...props} />}
  >
    ...
  </Drawer.Navigator>

Now, I have fully functioning drawer navigation with a custom image header. Below is the result:

[Gif] Drawer navigation with custom header image

Next, we’ll flesh out the main screens with more Material Design concepts.

Floating action button

One of the hallmarks of Material Design is the floating action button (FAB). The <FAB> and <FAB.Group> components provide a useful implementation of the floating action button according to Material Design principles. With minimal setup, I’ll add this to the My Friends screen.

First, I’ll need to add the <Provider> component from React Native Paper and wrap that component around the <NavigationContainer> in App.tsx as follows:

import {Provider} from 'react-native-paper';
...
  <Provider>
    <NavigationContainer>
      ...
    </NavigationContainer>
  </Provider>

Now, I’ll add my floating action button to the My Friends screen. To do so, I’ll need the following:

  • The <Portal> and <FAB.Group> components from React Native Paper
  • A state variable fabIsOpen to keep track of whether the FAB is open or closed
  • Some information about whether or not this screen is currently visible to the user, isScreenFocused

Without isScreenFocused, the FAB might end up visible on screens other than the My Friends screen.

With all that added in, the My Friends screen looks like the following code in MyFriends.tsx:

import {useIsFocused} from '@react-navigation/native';
import React, {useState} from 'react';
import {View} from 'react-native';
import {FAB, Portal, Title} from 'react-native-paper';
import base from './styles/base';

interface IMyFriendsProps {}

const MyFriends: React.FunctionComponent<IMyFriendsProps> = (props) => {
  const isScreenFocused = useIsFocused();
  const [fabIsOpen, setFabIsOpen] = useState(false);

  return (
    <View style={base.centered}>
      <Title>MyFriends</Title>
      <Portal>
        <FAB.Group
          visible={isScreenFocused}
          open={fabIsOpen}
          onStateChange={({open}) => setFabIsOpen(open)}
          icon={fabIsOpen ? 'close' : 'account-multiple'}
          actions={[
            {
              icon: 'plus',
              label: 'Add new friend',
              onPress: () => {},
            },
            {
              icon: 'file-export',
              label: 'Export friend list',
              onPress: () => {},
            },
          ]}
        />
      </Portal>
    </View>
  );
};
export default MyFriends;

Now, the My Friends screen behaves as follows:

[Gif] Floating action button on the My Friends screen

Next, I’ll add a contextual action bar, which you can activate by long pressing an item on any of the screens.

Contextual action bar

Apps like Gmail and Google Photos use a Material Design concept called the contextual action bar. In our current app, I’ll quickly implement a version of this.

First, I’ll build the ContextualActionBar component itself using the Appbar component from React Native Paper. To start with, it should look something like the following:

// ./components/ContextualActionBar.tsx
import React from 'react';
import {Appbar} from 'react-native-paper';

interface IContextualActionBarProps {}

const ContextualActionBar: React.FunctionComponent<IContextualActionBarProps> = (
  props
) => {
  return (
    <Appbar.Header {...props} style={{width: '100%'}}>
      <Appbar.Action icon='close' onPress={() => {}} />
      <Appbar.Content title='' />
      <Appbar.Action icon='delete' onPress={() => {}} />
      <Appbar.Action icon='content-copy' onPress={() => {}} />
      <Appbar.Action icon='magnify' onPress={() => {}} />
      <Appbar.Action icon='dots-vertical' onPress={() => {}} />
    </Appbar.Header>
  );
};
export default ContextualActionBar;

Whenever an item is long pressed, I want this component to render on top of the given screen’s header. I’ll render the contextual action bar over the screen’s header on the My Friends screen by adding the following code to MyFriends.tsx:

import {useNavigation} from '@react-navigation/native';
import ContextualActionBar from './components/ContextualActionBar';
...
  const [cabIsOpen, setCabIsOpen] = useState(false);
  const navigation = useNavigation();

  const openHeader = useCallback(() => {
    setCabIsOpen(!cabIsOpen);
  }, [cabIsOpen]);

  useEffect(() => {
    if (cabIsOpen) {
      navigation.setOptions({
        // have to use props: any since that's the type signature
        // from react-navigation...
        header: (props: any) => (<ContextualActionBar {...props} />),
      });
    } else {
      navigation.setOptions({header: undefined});
    }
  }, [cabIsOpen]);
...
  return (
    ...
    <List.Item
      title='Friend #1'
      description='Mar 18 | 3:31 PM'
      style={{width: '100%'}}
      onPress={() => {}}
      onLongPress={openHeader}
    />
    ...
  );

In the code above, I’m toggling a state boolean value cabIsOpen whenever a given item is long pressed. Based on that value, I either switch the React Navigation header to render the <ContextualActionBar> or switch back to render the default React Navigation header.

Now, when I long press the Friend #1 item, a contextual action bar should appear. However, the title is still empty, and I can’t do anything in any of the actions. The <ContextualActionBar> is unaware of the state of either the Friend #1 item or the larger My Friends screen as a whole.

Next, we’ll add a title into the <ContextualActionBar>, and we’ll also pass in a function to close the bar that will be triggered by one of the buttons in the bar. To do this, I’ll add another state variable to the My Friends screen:

const [selectedItemName, setSelectedItemName] = useState('');

I also need to create a function that will close the header and reset the state variable above:

  const closeHeader = useCallback(() => {
    setCabIsOpen(false);
    setSelectedItemName('');
  }, []);

Then, I need to pass both selectedItemName and closeHeader as props to <ContextualActionBar>:

  useEffect(() => {
    if (cabIsOpen) {
      navigation.setOptions({
        header: (props: any) => (
          <ContextualActionBar
            {...props}
            title={selectedItemName}
            close={closeHeader}
          />
      ),
      });
    } else {
      navigation.setOptions({header: undefined});
    }
  }, [cabIsOpen, selectedItemName]);

Lastly, I need to set selectedItemName to the title of the item that has been long pressed:

  ...
  const openHeader = useCallback((str: string) => {
    setSelectedItemName(str);
    setCabIsOpen(!cabIsOpen);
  }, [cabIsOpen]);
  ...
  return (
    ...
    <List.Item
      title='Friend #1'
      ...
      onLongPress={() => openHeader('Friend #1')}
    />
  );

Now, I can use the title and close props in <ContextualActionBar>. Add the code below to ./components/ContextualActionBar.tsx:

interface IContextualActionBarProps {
  title: string;
  close: () => void;
}
...
  return (
      ...
      <Appbar.Action icon='close' onPress={props.close} />
      <Appbar.Content title={props.title} />
      ...
  );

Now, I have a functional, Material Design-inspired contextual action bar that utilizes React Native Paper and React Navigation. It looks like the following:

[Image] Contextual action bar, activated when the user long presses an item

Theming with Material Design

Finally, I want to theme my app so I can change the primary color, secondary color, text colors, and more.

Theming is a little tricky because both React Navigation and React Native Paper have their own ThemeProvider components, which can easily conflict with each other. Fortunately, there’s a great guide available on theming an app that uses both React Native Paper and React Navigation.

I’ll add in a little extra help for those who use TypeScript and would run into esoteric errors trying to follow the guide above.

First, I’ll create a theme file, theme.ts, which looks like the following code:

import {
  DarkTheme as NavigationDarkTheme,
  DefaultTheme as NavigationDefaultTheme,
  Theme,
} from '@react-navigation/native';
import {ColorSchemeName} from 'react-native';
import {
  DarkTheme as PaperDarkTheme,
  DefaultTheme as PaperDefaultTheme,
} from 'react-native-paper';
declare global {
  namespace ReactNativePaper {
    interface ThemeColors {
      animationColor: string;
    }
    interface Theme {
      statusBar: 'light' | 'dark' | 'auto' | 'inverted' | undefined;
    }
  }
}
interface ReactNavigationTheme extends Theme {
  statusBar: 'light' | 'dark' | 'auto' | 'inverted' | undefined;
}
export function combineThemes(
  themeType: ColorSchemeName
): ReactNativePaper.Theme | ReactNavigationTheme {
  const CombinedDefaultTheme: ReactNativePaper.Theme = {
    ...NavigationDefaultTheme,
    ...PaperDefaultTheme,
    statusBar: 'dark',
    colors: {
      ...NavigationDefaultTheme.colors,
      ...PaperDefaultTheme.colors,
      animationColor: '#2922ff',
      primary: '#079c20',
      accent: '#2922ff',
    },
  };
  const CombinedDarkTheme: ReactNativePaper.Theme = {
    ...NavigationDarkTheme,
    ...PaperDarkTheme,
    mode: 'adaptive',
    statusBar: 'light',
    colors: {
      ...NavigationDarkTheme.colors,
      ...PaperDarkTheme.colors,
      animationColor: '#6262ff',
      primary: '#079c20',
      accent: '#2922ff',
    },
  };
  return themeType === 'dark' ? CombinedDarkTheme : CombinedDefaultTheme;
}

The combineThemes return type encompasses both ReactNavigationTheme and ReactNativePaper.Theme. I changed the primary and accent colors, which will affect the CAB and FAB, respectively. I added a new color to the theme called animationColor. If you don’t want to add a new color, you don’t need to declare the global namespace.

In App.tsx, I’ll add my theme to both the React Native Paper Provider component and the NavigationContainer component from React Navigation:

import {useColorScheme} from 'react-native';
import {NavigationContainer, Theme} from '@react-navigation/native';
import {combineThemes} from './theme';
...
  const colorScheme = useColorScheme() as 'light' | 'dark';
  const theme = combineThemes(colorScheme);
  ...
    <Provider theme={theme as ReactNativePaper.Theme}>
      <NavigationContainer theme={theme as Theme}>
      </NavigationContainer>
    </Provider>

I’m using Expo, so I also need to add the following line to app.json to enable dark mode; if you’re not using Expo, you may not need this step:

"userInterfaceStyle": "automatic",

Now, you have a custom themed, dark mode enabled, Material Design-inspired app:

[Image] Contextual action bar and floating action button with custom colors, light theme

[Image] Drawer open, showing header image, light mode

[Image] Contextual action bar and floating action button with custom colors, dark theme

[Image] Drawer open, showing header image, dark mode

Conclusion

At this point, you should have your own cross-platform app with Material Design elements from the React Native Paper library, like a drawer navigation with custom designs in the drawer menu, a floating action button, and a contextual action bar.

You should also have theming enabled, which works nicely with both the React Native Paper and React Navigation libraries. This setup should enable you to quickly and stylishly build out your next mobile app with ease.

Original article source at https://blog.logrocket.com


TSdx: Zero-config CLI for TypeScript Package Development

TSdx

Despite all the recent hype, setting up a new TypeScript (x React) library can be tough. Between Rollup, Jest, tsconfig, Yarn resolutions, ESLint, and getting VSCode to play nicely....there is just a whole lot of stuff to do (and things to screw up). TSDX is a zero-config CLI that helps you develop, test, and publish modern TypeScript packages with ease--so you can focus on your awesome new library and not waste another afternoon on the configuration.

Features

TSDX comes with the "battery-pack included" and is part of a complete TypeScript breakfast:

  • Bundles your code with Rollup and outputs multiple module formats (CJS & ESM by default, and also UMD if you want) plus development and production builds
  • Comes with treeshaking, ready-to-rock lodash optimizations, and minification/compression
  • Live reload / watch-mode
  • Works with React
  • Human readable error messages (and in VSCode-friendly format)
  • Bundle size snapshots
  • Opt-in to extract invariant error codes
  • Jest test runner setup with sensible defaults via tsdx test
  • ESLint with Prettier setup with sensible defaults via tsdx lint
  • Zero-config, single dependency
  • Escape hatches for customization via .babelrc.js, jest.config.js, .eslintrc.js, and tsdx.config.js

Quick Start

npx tsdx create mylib
cd mylib
yarn start

That's it. You don't need to worry about setting up TypeScript or Rollup or Jest or other plumbing. Just start editing src/index.ts and go!

Below is a list of commands you will probably find useful:

npm start or yarn start

Runs the project in development/watch mode. Your project will be rebuilt upon changes. TSDX has a special logger for your convenience. Error messages are pretty printed and formatted for compatibility with VS Code's Problems tab.

Your library will be rebuilt if you make edits.

npm run build or yarn build

Bundles the package to the dist folder. The package is optimized and bundled with Rollup into multiple formats (CommonJS, UMD, and ES Module).

npm test or yarn test

Runs your tests using Jest.

npm run lint or yarn lint

Runs ESLint with Prettier on .ts and .tsx files. If you want to customize ESLint, you can add an eslint block to your package.json, or you can run yarn lint --write-file and edit the generated .eslintrc.js file.

prepare script

Bundles and packages the module to the dist folder. Runs automatically when you run either npm publish or yarn publish. The prepare script will run the equivalent of npm run build or yarn build. It will also be run if your module is installed as a git dependency (i.e. "mymodule": "github:myuser/mymodule#some-branch") so it can be depended on without checking the transpiled code into git.
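In a package scaffolded with tsdx create, this is typically wired up through the scripts block of package.json; a sketch of what that usually looks like (your generated file may differ slightly):

{
  "scripts": {
    "start": "tsdx watch",
    "build": "tsdx build",
    "test": "tsdx test",
    "lint": "tsdx lint",
    "prepare": "tsdx build"
  }
}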

Optimizations

Aside from just bundling your module into different formats, TSDX comes with some optimizations for your convenience. They yield objectively better code and smaller bundle sizes.

After TSDX compiles your code with TypeScript, it processes your code with 3 Babel plugins:

Development-only Expressions + Treeshaking

babel-plugin-annotate-pure-calls and babel-plugin-dev-expressions work together to fully eliminate (i.e., treeshake) development-only checks from your production code as dead code. Let's look at an example to see how it works.

Imagine our source code is just this:

// ./src/index.ts
export const sum = (a: number, b: number) => {
  if (process.env.NODE_ENV !== 'production') {
    console.log('Helpful dev-only error message');
  }
  return a + b;
};

tsdx build will output an ES module file and 3 CommonJS files (dev, prod, and an entry file). If you want to specify a UMD build, you can do that as well. For brevity, let's examine the CommonJS output (comments added for emphasis):

// Entry File
// ./dist/index.js
'use strict';

// This determines which build to use based on the `NODE_ENV` of your end user.
if (process.env.NODE_ENV === 'production') {
  module.exports = require('./mylib.cjs.production.js');
} else {
  module.exports = require('./mylib.development.cjs');
}
// CommonJS Development Build
// ./dist/mylib.development.cjs
'use strict';

const sum = (a, b) => {
  {
    console.log('Helpful dev-only error message');
  }

  return a + b;
};

exports.sum = sum;
//# sourceMappingURL=mylib.development.cjs.map
// CommonJS Production Build
// ./dist/mylib.cjs.production.js
'use strict';
exports.sum = (s, t) => s + t;
//# sourceMappingURL=test-react-tsdx.cjs.production.js.map

As you can see, TSDX stripped out the development check from the production code. This allows you to safely add development-only behavior (like more useful error messages) without any production bundle size impact.

For the ESM build, it's up to the end user to produce an environment-specific build by replacing NODE_ENV (Webpack 4 does this automatically).
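If your bundler doesn't handle that replacement for you, it can be done manually, for example with webpack's DefinePlugin; a sketch assuming webpack:

// webpack.config.js
const webpack = require('webpack');

module.exports = {
  plugins: [
    // replaces process.env.NODE_ENV at build time so the dev-only branches become dead code
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
  ],
};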

Rollup Treeshaking

TSDX's Rollup config removes getters and setters on objects so that property access has no side effects, so don't rely on side effects in property access.

Advanced babel-plugin-dev-expressions

TSDX will use babel-plugin-dev-expressions to make the following replacements before treeshaking.

__DEV__

Replaces

if (__DEV__) {
  console.log('foo');
}

with

if (process.env.NODE_ENV !== 'production') {
  console.log('foo');
}

IMPORTANT: To use __DEV__ in TypeScript, you need to add declare var __DEV__: boolean somewhere in your project's type path (e.g. ./types/index.d.ts).

// ./types/index.d.ts
declare var __DEV__: boolean;

Note: The dev-expression transform does not run when NODE_ENV is test. As such, if you use __DEV__, you will need to define it as a global constant in your test environment.
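One way to do that with the Jest setup TSDX provides is through the globals option in a local jest.config.js, which TSDX shallow-merges with its own config (a sketch):

// jest.config.js
module.exports = {
  globals: {
    // make the __DEV__ constant available in tests
    __DEV__: true,
  },
};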

invariant

Replaces

invariant(condition, 'error message here');

with

if (!condition) {
  if ('production' !== process.env.NODE_ENV) {
    invariant(false, 'error message here');
  } else {
    invariant(false);
  }
}

Note: TSDX doesn't supply an invariant function for you, you need to import one yourself. We recommend https://github.com/alexreardon/tiny-invariant.

To extract and minify invariant error codes in production into a static codes.json file, specify the --extractErrors flag in command line. For more details see Error extraction docs.

warning

Replaces

warning(condition, 'dev warning here');

with

if ('production' !== process.env.NODE_ENV) {
  warning(condition, 'dev warning here');
}

Note: TSDX doesn't supply a warning function for you, you need to import one yourself. We recommend https://github.com/alexreardon/tiny-warning.

Using lodash

If you want to use a lodash function in your package, TSDX will help you do it the right way so that your library does not get fat shamed on Twitter. However, before you continue, seriously consider rolling whatever function you are about to use on your own. Anyways, here is how to do it right.

First, install lodash and lodash-es as dependencies

yarn add lodash lodash-es

Now install @types/lodash to your development dependencies.

yarn add @types/lodash --dev

Import your lodash method however you want; TSDX will optimize it like so.

// ./src/index.ts
import kebabCase from 'lodash/kebabCase';

export const KebabLogger = (msg: string) => {
  console.log(kebabCase(msg));
};

For brevity let's look at the ES module output.

import o from"lodash-es/kebabCase";const e=e=>{console.log(o(e))};export{e as KebabLogger};
//# sourceMappingURL=test-react-tsdx.esm.production.js.map

TSDX will rewrite your import kebabCase from 'lodash/kebabCase' to import o from 'lodash-es/kebabCase'. This allows your library to be treeshakable to end consumers while allowing you to use @types/lodash for free.

Note: TSDX will also transform destructured imports. For example, import { kebabCase } from 'lodash' would have also been transformed to import o from "lodash-es/kebabCase".

Error extraction

After running --extractErrors, you will have a ./errors/codes.json file with all your extracted invariant error codes. This process scans your production code and swaps out your invariant error message strings for a corresponding error code (just like React!). This extraction only works if your error checking/warning is done by a function called invariant.

Note: We don't provide this function for you, it is up to you how you want it to behave. For example, you can use either tiny-invariant or tiny-warning, but you must then import the module as a variable called invariant and it should have the same type signature.

⚠️Don't forget: you will need to host the decoder somewhere. Once you have a URL, look at ./errors/ErrorProd.js and replace the reactjs.org URL with yours.

Known issue: our transformErrorMessages babel plugin currently doesn't have sourcemap support, so you will see "Sourcemap is likely to be incorrect" warnings. We would love your help on this.

TODO: Simple guide to host error codes to be completed

Customization

Rollup

❗⚠️❗ Warning
These modifications will override the default behavior and configuration of TSDX. As such they can invalidate internal guarantees and assumptions. These types of changes can break internal behavior and can be very fragile against updates. Use with discretion!

TSDX uses Rollup under the hood. The defaults are solid for most packages (Formik uses the defaults!). However, if you do wish to alter the rollup configuration, you can do so by creating a file called tsdx.config.js at the root of your project like so:

// Not transpiled with TypeScript or Babel, so use plain ES6/Node.js!
module.exports = {
  // This function will run for each entry/format/env combination
  rollup(config, options) {
    return config; // always return a config.
  },
};

The options object contains the following:

export interface TsdxOptions {
  // path to file
  input: string;
  // Name of package
  name: string;
  // JS target
  target: 'node' | 'browser';
  // Module format
  format: 'cjs' | 'umd' | 'esm' | 'system';
  // Environment
  env: 'development' | 'production';
  // Path to tsconfig file
  tsconfig?: string;
  // Is error extraction running?
  extractErrors?: boolean;
  // Is minifying?
  minify?: boolean;
  // Is this the very first rollup config (and thus should one-off metadata be extracted)?
  writeMeta?: boolean;
  // Only transpile, do not type check (makes compilation faster)
  transpileOnly?: boolean;
}

Example: Adding Postcss

const postcss = require('rollup-plugin-postcss');
const autoprefixer = require('autoprefixer');
const cssnano = require('cssnano');

module.exports = {
  rollup(config, options) {
    config.plugins.push(
      postcss({
        plugins: [
          autoprefixer(),
          cssnano({
            preset: 'default',
          }),
        ],
        inject: false,
        // only write out CSS for the first bundle (avoids pointless extra files):
        extract: !!options.writeMeta,
      })
    );
    return config;
  },
};

Babel

You can add your own .babelrc to the root of your project and TSDX will merge it with its own Babel transforms (which are mostly for optimization), putting any new presets and plugins at the end of its list.

Jest

You can add your own jest.config.js to the root of your project and TSDX will shallow merge it with its own Jest config.

ESLint

You can add your own .eslintrc.js to the root of your project and TSDX will deep merge it with its own ESLint config.

patch-package

If you still need more customizations, we recommend using patch-package so you don't need to fork. Keep in mind that these types of changes may be quite fragile against version updates.

Inspiration

TSDX was originally ripped out of Formik's build tooling. TSDX has several similarities to @developit/microbundle, but that is because Formik's Rollup configuration and Microbundle's internals had converged around similar plugins.

Comparison with Microbundle

Some key differences include:

  • TSDX includes out-of-the-box test running via Jest
  • TSDX includes out-of-the-box linting and formatting via ESLint and Prettier
  • TSDX includes a bootstrap command with a few package templates
  • TSDX allows for some lightweight customization
  • TSDX is TypeScript focused, but also supports plain JavaScript
  • TSDX outputs distinct development and production builds (like React does) for CJS and UMD builds. This means you can include rich error messages and other dev-friendly goodies without sacrificing final bundle size.

API Reference

tsdx watch

Description
  Rebuilds on any change

Usage
  $ tsdx watch [options]

Options
  -i, --entry           Entry module
  --target              Specify your target environment  (default web)
  --name                Specify name exposed in UMD builds
  --format              Specify module format(s)  (default cjs,esm)
  --tsconfig            Specify your custom tsconfig path (default <root-folder>/tsconfig.json)
  --verbose             Keep outdated console output in watch mode instead of clearing the screen
  --onFirstSuccess      Run a command on the first successful build
  --onSuccess           Run a command on a successful build
  --onFailure           Run a command on a failed build
  --noClean             Don't clean the dist folder
  --transpileOnly       Skip type checking
  -h, --help            Displays this message

Examples
  $ tsdx watch --entry src/foo.tsx
  $ tsdx watch --target node
  $ tsdx watch --name Foo
  $ tsdx watch --format cjs,esm,umd
  $ tsdx watch --tsconfig ./tsconfig.foo.json
  $ tsdx watch --noClean
  $ tsdx watch --onFirstSuccess "echo The first successful build!"
  $ tsdx watch --onSuccess "echo Successful build!"
  $ tsdx watch --onFailure "echo The build failed!"
  $ tsdx watch --transpileOnly

tsdx build

Description
  Build your project once and exit

Usage
  $ tsdx build [options]

Options
  -i, --entry           Entry module
  --target              Specify your target environment  (default web)
  --name                Specify name exposed in UMD builds
  --format              Specify module format(s)  (default cjs,esm)
  --extractErrors       Opt-in to extracting invariant error codes
  --tsconfig            Specify your custom tsconfig path (default <root-folder>/tsconfig.json)
  --transpileOnly       Skip type checking
  -h, --help            Displays this message

Examples
  $ tsdx build --entry src/foo.tsx
  $ tsdx build --target node
  $ tsdx build --name Foo
  $ tsdx build --format cjs,esm,umd
  $ tsdx build --extractErrors
  $ tsdx build --tsconfig ./tsconfig.foo.json
  $ tsdx build --transpileOnly

tsdx test

This runs Jest, forwarding all CLI flags to it. See https://jestjs.io for options. For example, if you would like to run in watch mode, you can run tsdx test --watch. So you could set up your package.json scripts like:

{
  "scripts": {
    "test": "tsdx test",
    "test:watch": "tsdx test --watch",
    "test:coverage": "tsdx test --coverage"
  }
}

tsdx lint

Description
  Run eslint with Prettier

Usage
  $ tsdx lint [options]

Options
  --fix               Fixes fixable errors and warnings
  --ignore-pattern    Ignore a pattern
  --max-warnings      Exits with non-zero error code if number of warnings exceed this number  (default Infinity)
  --write-file        Write the config file locally
  --report-file       Write JSON report to file locally
  -h, --help          Displays this message

Examples
  $ tsdx lint src
  $ tsdx lint src --fix
  $ tsdx lint src test --ignore-pattern test/foo.ts
  $ tsdx lint src test --max-warnings 10
  $ tsdx lint src --write-file
  $ tsdx lint src --report-file report.json

Contributing

Please see the Contributing Guidelines.

Download Details:

Author: jaredpalmer
Source Code: https://github.com/jaredpalmer/tsdx 
License: MIT license


Angular FilePond | A handy FilePond Adapter Component for Angular

Angular FilePond is a handy adapter component for FilePond, a JavaScript library that can upload anything you throw at it, optimizes images for faster uploads, and offers a great, accessible, silky smooth user experience.

Core Features

  • Accepts directories, files, blobs, local URLs, remote URLs and Data URIs.
  • Drop files, select on filesystem, copy and paste files, or add files using the API.
  • Async uploading with AJAX, or encode files as base64 data and send along form post.
  • Accessible, tested with AT software like VoiceOver and JAWS, navigable by Keyboard.
  • Image optimization, automatic image resizing, cropping, and fixes EXIF orientation.
  • Responsive, automatically scales to available space, is functional on both mobile and desktop devices.

Learn more about FilePond

 


Also need Image Editing?

Pintura the modern JavaScript Image Editor is what you're looking for. Pintura supports setting crop aspect ratios, resizing, rotating, cropping, and flipping images. Above all, it integrates beautifully with FilePond.

Learn more about Pintura

Installation

Install FilePond component from npm.

npm install filepond ngx-filepond --save

Import FilePondModule and, if needed, register any plugins. Please note that plugins need to be installed from npm separately.
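For example, the file type validation plugin registered in the snippet below ships as its own package:

npm install filepond-plugin-file-validate-type --save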

Add the FilePond styles path ./node_modules/filepond/dist/filepond.min.css to the build.options.styles property in angular.json.
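Trimmed down, the relevant part of angular.json looks roughly like this (the project name and any existing styles entries are placeholders):

{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "styles": [
              "src/styles.css",
              "./node_modules/filepond/dist/filepond.min.css"
            ]
          }
        }
      }
    }
  }
}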

Setting up FilePond with Angular 13

// app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';

// import filepond module
import { FilePondModule, registerPlugin } from 'ngx-filepond';

// import and register filepond file type validation plugin
import * as FilePondPluginFileValidateType from 'filepond-plugin-file-validate-type';
registerPlugin(FilePondPluginFileValidateType);

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FilePondModule // add filepond module here
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

<!-- app.component.html -->
<file-pond #myPond 
    [options]="pondOptions" 
    [files]="pondFiles"
    (oninit)="pondHandleInit()"
    (onaddfile)="pondHandleAddFile($event)"
    (onactivatefile)="pondHandleActivateFile($event)">
</file-pond>

// app.component.ts
import { Component } from '@angular/core';
import { FilePondOptions } from 'filepond';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})

export class AppComponent {

  pondOptions: FilePondOptions = {
    allowMultiple: true,
    labelIdle: 'Drop files here...'
  }

  pondFiles: FilePondOptions["files"] = [
    {
      source: 'assets/photo.jpeg',
      options: {
        type: 'local'
      }
    }
  ]

  pondHandleInit() {
    console.log('FilePond has initialised');
  }

  pondHandleAddFile(event: any) {
    console.log('A file was added', event);
  }

  pondHandleActivateFile(event: any) {
    console.log('A file was activated', event)
  }

}

Read the docs for more information

Download Details:
 

Author: pqina
Official Website: https://github.com/pqina/ngx-filepond 
License: MIT license


How to host React application on Netlify ☁ for free!

https://youtu.be/OG71ARNRPT4

Hey there👋,

If you have a portfolio that you use to showcase your projects, it's better to host each project and share its link in the portfolio rather than using only images of your work. With this approach, clients can easily check out your projects from your portfolio, which will also help you land more clients.

In this tutorial, you will learn:
▶️ How to deploy a React application on Netlify
▶️ How to use env variables on Netlify
▶️ How to configure Netlify when you are using react-router (see the sketch below)
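For that last point, the usual approach (independent of this video) is a single rewrite rule so Netlify serves index.html for every client-side route. A common sketch: add a file named _redirects to your public/ folder with this one line:

/*  /index.html  200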

I chose Netlify to host the React application because Netlify's starter plan is enough to host your projects without paying anything. Netlify also provides many other cool features.

Here is the tutorial on how to host a React application on Netlify ☁ for free!


5 Best Angular Radio Button Libraries

In this article, we take a look at the 5 best Angular radio button libraries.

Angular is one of the most prominent frameworks allowing you to build front-end web applications much faster and more efficiently. Developed by Google and based on TypeScript, a popular programming language developed and maintained by Microsoft, Angular is the first choice of many professional developers for building web applications. In a very short time, this open-source web application framework won the hearts of many web developers and customers across the globe. In fact, according to BuiltWith, over 300,000 websites are built on the Angular framework. The vast range of reusable and accessible component libraries that can help speed up the web development process makes Angular a preferred choice.

Let’s look at some popular Angular component libraries that can help you create robust web apps and systems.

1 - NP-ui-lib: Native Angular UI Components and Design Framework

npm command to add the package to your application:

npm i np-ui-lib --save

List of components:

      
Input Text, Textarea, Date Picker, Time Picker, Color Picker, Switch, Dropdown, Auto Complete, Tags, Number Box, File Upload, Slider, Checkbox, Radio Button, Rich Text, Menubar, Panel Menubar, Breadcrumb, Toolbar, Data Grid, Paginator, Calendar, Timeline, Alert, Notification, Tooltip, Popover, Panel, Accordion, Tabs, Steps, Card, Progress, Loader, BlockUi, Carousel, Modal, Dialog, Tree View, List, Virtual Scroll, Sidepanel, Masking, Defer, i18N, Themes and Framework CSS, Grid Layout, Padding and Margin, Button and Button Group, Badges, Form

Modules to be pre-installed

"@angular/common": "^14.0.0",
"@angular/core": "^14.0.0",
"@angular/cdk": "^14.0.0",
"@angular/forms": "^14.0.0",
"@angular/router": "^14.0.0",
"rxjs": "^7.5.0"

Import below css to your application

@import "~@angular/cdk/overlay-prebuilt.css";
@import "np-ui-lib/styles/default-theme.scss";

How to Run this project?

Run the commands below in sequence:
$ ng build np-ui-lib --watch
$ npm run copy-assets
$ ng serve

View Github

2 - NGX-pretty-checkbox: Quickly integrate pretty checkbox components with Angular 

Installation

  • Step 1

Install the pretty-checkbox from npm or yarn package manager

> npm install pretty-checkbox // or
> yarn add pretty-checkbox

Alternatively, you can also use CDN link

https://cdn.jsdelivr.net/npm/pretty-checkbox@3.0/dist/pretty-checkbox.min.css
  • Step 2

Download pretty-checkbox angular module from npm package manager

> npm install ngx-pretty-checkbox
  • Step 3

Add the dist/pretty-checkbox.min.css file from the pretty-checkbox node_modules folder to your HTML, or import the src/pretty-checkbox.scss file in your SCSS file:

@import '~pretty-checkbox/src/pretty-checkbox.scss';
  • Step 4

Add ngx-pretty-checkbox in your AppModule

import { NgxPrettyCheckboxModule } from 'ngx-pretty-checkbox';

@NgModule({
  declarations: [ ... ],
  imports: [
    ...,
    NgxPrettyCheckboxModule
  ],
  providers: [ ... ],
  bootstrap: [ ... ]
})
export class AppModule { 
  ...
}
  • Step 5

Use it in your angular application

<p-checkbox>
  Default
</p-checkbox>

More demos and documentation

There are more features like Radio buttons, Toggle, States, Animations, Borderless, Lock, Scale, and SCSS Settings.

Please refer to the documentation to learn about them.
 

Browser support

Works in all modern browsers.

Chrome >= 26 Firefox >= 16 Safari >= 6.1 Opera >= 15 IE >= 9

Dependencies

Latest version available for each version of Angular

  • ngx-pretty-checkbox 12.0.0: Angular 12.x
  • ngx-pretty-checkbox 11.0.0: Angular 11.x
  • ngx-pretty-checkbox 1.2.0: Angular >10.x (ivy)
  • ngx-pretty-checkbox 1.1.0: Angular >8.x
  • ngx-pretty-checkbox 1.0.4: Angular 6.x, 7.x

View Github

3 - Angular-form-controls: A set of form control components for Angular 1

Installation

You can install this package using yarn or npm:

#yarn
yarn add @meanie/angular-form-controls

#npm
npm install @meanie/angular-form-controls --save

Include the script node_modules/@meanie/angular-form-controls/release/angular-form-controls.js in your build process, or add it via a <script> tag to your index.html:

<script src="node_modules/@meanie/angular-form-controls/release/angular-form-controls.js"></script>

Add FormControls.Component as a dependency for your app.

Styling

These form controls don’t come with styling included, so you will have to supply your own styling. An example is included in the source code.

Usage

The form controls which take an array as options, support both simple options and complex (object) options. The examples below use the following option arrays:

//Simple options, an array with string or numeric values
let simple = ['Option 1', 'Option 2', 'Option 3'];

//Complex options, an array with objects
let complex = [
  {
    id: 1,
    name: 'Option 1'
  },
  {
    id: 2,
    name: 'Option 2'
  },
  {
    id: 3,
    name: 'Option 3'
  }
];

Radio buttons

Simple options, tracked by option value (e.g. Option 1, Option 2, ...):

<radio-buttons
  options="simple"
  ng-model="model"
  on-change="updateModel(value)"
></radio-buttons>

Simple options, tracked by index of option in array (e.g. 0, 1, ...):

<radio-buttons
  options="simple"
  ng-model="model"
  track-by="$index"
  on-change="updateModel(value)"
></radio-buttons>

Simple options, disabled:

<radio-buttons
  options="simple"
  ng-model="model"
  ng-disabled="true"
  on-change="updateModel(value)"
></radio-buttons>

Simple options, required:

<radio-buttons
  name="fieldName"
  options="simple"
  ng-model="model"
  ng-required="true"
  on-change="updateModel(value)"
></radio-buttons>

Complex options, tracked by index of option in array (e.g. 0, 1, ...):

<radio-buttons
  options="complex"
  ng-model="model"
  track-by="$index"
  label-by="name"
  on-change="updateModel(value)"
></radio-buttons>

Complex options, tracked by ID property value (e.g. 1, 2, ...):

<radio-buttons
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  on-change="updateModel(value)"
></radio-buttons>

Complex options, tracked by ID property value, but returned as object:

<radio-buttons
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  as-object="true"
  on-change="updateModel(value)"
></radio-buttons>

Complex options, tracked by ID property value, nullable with specified null value and label:

<radio-buttons
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  is-nullable="true"
  null-label="'None'"
  null-value="0"
  on-change="updateModel(value)"
></radio-buttons>

Select box

Simple options, tracked by option value (e.g. Option 1, Option 2, ...):

<select-box
  options="simple"
  ng-model="model"
  on-change="updateModel(value)"
></select-box>

Simple options, tracked by index of option in array (e.g. 0, 1, ...):

<select-box
  options="simple"
  ng-model="model"
  track-by="$index"
  on-change="updateModel(value)"
></select-box>

Simple options, disabled:

<select-box
  options="simple"
  ng-model="model"
  ng-disabled="true"
  on-change="updateModel(value)"
></select-box>

Simple options, required:

<select-box
  name="fieldName"
  options="simple"
  ng-model="model"
  ng-required="true"
  on-change="updateModel(value)"
></select-box>

Complex options, tracked by index of option in array (e.g. 0, 1, ...):

<select-box
  options="complex"
  ng-model="model"
  track-by="$index"
  label-by="name"
  on-change="updateModel(value)"
></select-box>

Complex options, tracked by ID property value (e.g. 1, 2, ...):

<select-box
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  on-change="updateModel(value)"
></select-box>

Complex options, tracked by ID property value, but returned as object:

<select-box
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  as-object="true"
  on-change="updateModel(value)"
></select-box>

Complex options, tracked by ID property value, nullable with specified null value and label:

<select-box
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  is-nullable="true"
  null-label="'None'"
  null-value="0"
  on-change="updateModel(value)"
></select-box>

Check boxes

For check boxes, the value passed to the on-change handler is always an array of checked values. The model value is also expected to be an array.
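On the controller side, that means both the model and the on-change handler work with arrays; a minimal AngularJS sketch (the module, controller, and scope names here are placeholders, not part of the library):

// example controller backing the markup below
angular.module('app').controller('ExampleCtrl', function($scope) {
  $scope.simple = ['Option 1', 'Option 2', 'Option 3'];

  // the model starts out as an array of checked values
  $scope.model = [];

  // `value` is always an array of the currently checked values
  $scope.updateModel = function(value) {
    $scope.model = value;
  };
});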

Simple options, tracked by option value (e.g. Option 1, Option 2, ...):

<check-boxes
  options="simple"
  ng-model="model"
  on-change="updateModel(value)"
></check-boxes>

Simple options, tracked by index of option in array (e.g. 0, 1, ...):

<check-boxes
  options="simple"
  ng-model="model"
  track-by="$index"
  on-change="updateModel(value)"
></check-boxes>

Simple options, disabled:

<check-boxes
  options="simple"
  ng-model="model"
  ng-disabled="true"
  on-change="updateModel(value)"
></check-boxes>

Simple options, required:

<check-boxes
  name="fieldName"
  options="simple"
  ng-model="model"
  ng-required="true"
  on-change="updateModel(value)"
></check-boxes>

Complex options, tracked by index of option in array (e.g. 0, 1, ...):

<check-boxes
  options="complex"
  ng-model="model"
  track-by="$index"
  label-by="name"
  on-change="updateModel(value)"
></check-boxes>

Complex options, tracked by ID property value (e.g. 1, 2, ...):

<check-boxes
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  on-change="updateModel(value)"
></check-boxes>

Complex options, tracked by ID property value, but returned as object:

<check-boxes
  options="complex"
  ng-model="model"
  track-by="id"
  label-by="name"
  as-object="true"
  on-change="updateModel(value)"
></check-boxes>

Check box (for boolean values)

Standard:

<check-box
  ng-model="model"
  on-change="updateModel(value)"
>Label for checkbox</check-box>

Inverse (e.g. appears checked when model value is false):

<check-box
  ng-model="model"
  is-inverse="true"
  on-change="updateModel(value)"
>Label for checkbox</check-box>

Disabled:

<check-box
  ng-model="model"
  ng-disabled="true"
  on-change="updateModel(value)"
>Label for checkbox</check-box>

Required:

<check-box
  ng-model="model"
  ng-required="true"
  on-change="updateModel(value)"
>Label for checkbox</check-box>
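
All of the snippets above assume a controller scope that exposes the option arrays, the model and the change handler. A minimal AngularJS sketch of such a controller is shown here; the module and controller names and the option values are hypothetical, and only simple, complex, model and updateModel are taken from the examples:

// A hypothetical controller backing the examples above; only `simple`,
// `complex`, `model` and `updateModel` are names used by the snippets.
// (The library's own modules must also be registered as dependencies
// of your application module; that wiring is omitted here.)
angular.module('myApp').controller('FormExampleCtrl', function ($scope) {

  // Simple options are plain values; complex options are objects
  // with an `id` to track by and a `name` to label by.
  $scope.simple = ['Option 1', 'Option 2', 'Option 3'];
  $scope.complex = [
    {id: 1, name: 'First option'},
    {id: 2, name: 'Second option'},
  ];

  // A single value for radio buttons and select boxes,
  // an array of checked values for <check-boxes>.
  $scope.model = null;

  // Called by the components via on-change="updateModel(value)".
  $scope.updateModel = function (value) {
    $scope.model = value;
  };
});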

Issues & feature requests

Please report any bugs, issues, suggestions and feature requests in the @meanie/angular-form-controls issue tracker.

Contributing

Pull requests are welcome! If you would like to contribute to Meanie, please check out the Meanie contributing guidelines.

4 - NgGenericRadio

NgGenericRadio gives frontend developers a way to simulate the behaviour of a radio button with arbitrary markup.

To use NgGenericRadio, wrap ng-generic-radio-option elements inside an ng-generic-radio-group component and provide HTML for both the selected and the unselected state of each option.

NgGenericRadio is fully compatible with Angular reactive and template-driven forms.

Installation

npm install --save ng-generic-radio

For Angular 10 use version 2.0.x

Usage

Import NgGenericRadioModule in the module of the component where you want to use the generic radio.

// Imports assume the default Angular CLI project layout.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { NgGenericRadioModule } from 'ng-generic-radio';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent,
  ],
  imports: [
    BrowserModule,
    NgGenericRadioModule,
    FormsModule,
    ReactiveFormsModule,
  ],
  providers: [],
  bootstrap: [AppComponent],
})
export class AppModule { }

Use the ng-generic-radio-group tag in your HTML and provide both the selected and the unselected state HTML inside each ng-generic-radio-option tag. The *selectedState directive is used for the selected state and the *unSelectedState directive is used for the unselected state.

<ng-generic-radio-group [(ngModel)]="selectedModel">
  <ng-generic-radio-option value="option-1">
    <p *selectedState>selected option 1</p>
    <p *unSelectedState>not selected option 1</p>
  </ng-generic-radio-option><br>
  <ng-generic-radio-option value="option-2">
    <p *selectedState>selected option 2</p>
    <p *unSelectedState>not selected option 2</p>
  </ng-generic-radio-option><br>
  <ng-generic-radio-option value="option-3">
    <p *selectedState>selected option 3</p>
    <p *unSelectedState>not selected option 3</p>
  </ng-generic-radio-option>
</ng-generic-radio-group>

With a reactive form:

<ng-generic-radio-group [formControl]="demoFormControl">
  <ng-generic-radio-option value="option-1">
    <p *selectedState>selected option 1</p>
    <p *unSelectedState>not selected option 1</p>
  </ng-generic-radio-option><br>
  <ng-generic-radio-option value="option-2">
    <p *selectedState>selected option 2</p>
    <p *unSelectedState>not selected option 2</p>
  </ng-generic-radio-option><br>
  <ng-generic-radio-option value="option-3">
    <p *selectedState>selected option 3</p>
    <p *unSelectedState>not selected option 3</p>
  </ng-generic-radio-option>
</ng-generic-radio-group>

Example

Here is the Demo

5 - ng2-RadioBoxList: Angular 2 radiobox list component

Getting Started

npm install ng2-radioboxlist --save

Checking before using

This component assumes it runs inside an Angular (2+) application, so it has these implicit dependencies:

    "@angular/common": "^4.4.0-RC.0",
    "@angular/core": "4.4.0-RC.0",
    "rxjs": "5.4.3"

Simple Use

import

//app.module.ts file
....
import { RadioBoxList } from 'ng2-radioboxlist/radioboxlist.js';
@NgModule({
  declarations: [
    AppComponent,
    RadioBoxList
  ],
  ....

//app.component.ts file
 export class AppComponent {
  title = 'app';
  listItemToPass:any[] = [
    {id:1, color:"white"}, 
    {id:2, color:"red"}, 
    {id:3, color:"blue"},
    {id:4, color:"green"}
  ];
  checkboxStyles:string[] = ["container:greenClass, shadow", "title:whiteClass"];
  itemSelectedManager(event){
    console.log("selected item -> ", event);
  }
}

insert selector

<!-- app.component.html file -->
<radiobox-list
               [title]="'choose a color'"
               [list]="listItemToPass"
               [id]="'id'"
               [value]="'color'"
               [styles]="checkboxStyles"
               [preselected]="'1'"
               (selected)="itemSelectedManager($event)"
               ></radiobox-list>
<!-- [preselected] takes the id to preselect, as a string -->

Styling

You can style the component by applying classes to the container, title, input and label, passing a string or an array of strings to the [styles] input property. The string format is "<container|title|input|label>:<class>, <class>, ..., <class>"; the array format is simply an array of such strings. In the code sample above, checkboxStyles is declared like this:

    checkboxStyles:string[] = ["container:greenClass, shadow", "title:whiteClass"];

Theming

It's also possible to set a theme (dark or light) using the theme input property:

<!-- app.component.html file -->
<radiobox-list
               [title]="'choose a color'"
               [list]="listItemToPass"
               [id]="'id'"
               [value]="'color'"
               [theme]="'dark'"
               (selected)="itemSelectedManager($event)"
               ></radiobox-list>

To use the CSS theme files, you have to add them to the styles property in .angular-cli.json, like so:

      "styles": [
        "styles.css",
        "../node_modules/ng2-radioboxlist/theme/radioboxlist.dark.css",
        "../node_modules/ng2-radioboxlist/theme/radioboxlist.light.css"
      ],

[IMPORTANT] If ng serve is running, you have to stop and re-run it (.angular-cli.json isn't watched by the Angular compiler).

#angular  #typescript 

Lawrence Lesch

Smartbanner.js: Customisable Smart App Banners for iOS and Android

smartbanner.js

Customisable smart app banner for iOS and Android.

Features

  • Populating smartbanner is as easy as adding meta tags, no JavaScript knowledge required
  • Default Smart App Banner like design
  • Customisable info with i18n support and design by using
    • automatically generated smartbanner--<platform> class on wrapper
    • custom design modifier for externally defined styles or same design on all platforms
  • Close button that closes the banner and sets cookie to keep banner closed
  • Platform-specific app icon and View button
  • User Agent specific targeting
  • Pure JavaScript coming at 13 KB in minified size, no jQuery required
  • Events emitted for API implementations
  • ECMAScript 6 source

(Screenshots: smartbanner.js on iOS and Android.)

Basic Usage

smartbanner.js works automagically given the following meta tags:

<!-- Start SmartBanner configuration -->
<meta name="smartbanner:title" content="Smart Application">
<meta name="smartbanner:author" content="SmartBanner Contributors">
<meta name="smartbanner:price" content="FREE">
<meta name="smartbanner:price-suffix-apple" content=" - On the App Store">
<meta name="smartbanner:price-suffix-google" content=" - In Google Play">
<meta name="smartbanner:icon-apple" content="https://url/to/apple-store-icon.png">
<meta name="smartbanner:icon-google" content="https://url/to/google-play-icon.png">
<meta name="smartbanner:button" content="VIEW">
<meta name="smartbanner:button-url-apple" content="https://ios/application-url">
<meta name="smartbanner:button-url-google" content="https://android/application-url">
<meta name="smartbanner:enabled-platforms" content="android,ios">
<meta name="smartbanner:close-label" content="Close">
<!-- End SmartBanner configuration -->

Additionally, JavaScript and CSS has to be included:

<link rel="stylesheet" href="node_modules/smartbanner.js/dist/smartbanner.min.css">
<script src="node_modules/smartbanner.js/dist/smartbanner.min.js"></script>

Advanced usage

Hide the smartbanner for certain User Agents

There are cases where you do not want to show the smart app banner on all Android and/or all iOS devices. For example:

  • your app is available only for some Android/iOS versions
  • your app is only available on iPhone, but not iPad
  • your app is a web app which also shows this website, but of course should not show the smart app banner.

In this case you can define a regular expression, which matches all user agent strings that should be excluded. Just add another meta tag like the following:

<meta name="smartbanner:exclude-user-agent-regex" content="^.*My Example Webapp$">

This regular expression would match any user agent string, that ends with My Example Webapp.

Show the smartbanner for certain User Agents

In addition to blacklisting certain user agents using the regex explained in the previous section, you can also whitelist certain user agents:

<meta name="smartbanner:include-user-agent-regex" content="iPhone 7">

Note: You can define enabled-platforms, exclude-user-agent-regex and include-user-agent-regex at the same time. enabled-platforms has the lowest priority, exclude-user-agent-regex the highest priority.

Disable Positioning

If you want to position smart app banner yourself (e.g. in CSS), you can disable built-in positioning with following option:

<meta name="smartbanner:disable-positioning" content="true">

Hide the smartbanner completely

If you want to prevent smartbanner rendering in some html pages, you can add optional meta tag:

<meta name="smartbanner:enabled-platforms" content="none">

Time-limited close

By default, smartbanner will not reappear once closed. This can be changed with the hide-ttl option. The following example would keep smartbanner closed for 10 seconds (10000 ms):

<meta name="smartbanner:hide-ttl" content="10000">

Path-designated close

Once closed, smartbanner will reappear when the site path changes. This is the default behaviour.

The following example would keep smartbanner closed site-wide (but only after the user has actually closed it):

<meta name="smartbanner:hide-path" content="/">

Custom design modifier

smartbanner uses built-in platform-specific styles (e.g. smartbanner--ios or smartbanner--android), but this behaviour can be altered by adding a custom design modifier, which allows the use of:

externally defined styles, e.g.:

<meta name="smartbanner:custom-design-modifier" content="mysite.com">

which would add smartbanner--mysite.com class on wrapper.

forced platform-specific styles on all platforms, e.g.:

<meta name="smartbanner:custom-design-modifier" content="ios">

which would add smartbanner--ios class on wrapper regardless of actual platform.

smartbanner API use

By default smartbanner is added to the DOM automatically. You can disable it with

<meta name="smartbanner:api" content="true">

and add smartbanner to DOM manually:

smartbanner.publish();
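
As a sketch, manual publishing can then be tied to your own page logic (this assumes the smartbanner:api meta tag above is set and smartbanner.min.js has been loaded, so the global smartbanner object used in this section is available; the pathname check is purely illustrative):

document.addEventListener('DOMContentLoaded', function () {
  // Publish the banner only on pages where it makes sense for your site
  // (the pathname check below is a made-up example of such a condition).
  if (window.location.pathname !== '/checkout') {
    smartbanner.publish();
  }
});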

Events

The following events are dispatched:

  • smartbanner.init - dispatched when smartbanner has been initialised
  • smartbanner.view - dispatched when smartbanner is added to display
  • smartbanner.clickout - dispatched when smartbanner is clicked to navigate to the app store
  • smartbanner.exit - dispatched when smartbanner is closed

Example handler (closes smartbanner when user clicks to navigate to app store):

document.addEventListener('smartbanner.clickout', smartbanner.exit);

Contributing

See CONTRIBUTING.md.

Download Details:

Author: Ain
Source Code: https://github.com/ain/smartbanner.js 
License: GPL-3.0 license

#javascript #node #android #npm #ios 

Dexter Goodwin

Sinopia: Private NPM Repository Server

sinopia - a private/caching npm repository server

It allows you to have a local npm registry with zero configuration. You don't have to install and replicate an entire CouchDB database. Sinopia keeps its own small database and, if a package doesn't exist there, it asks npmjs.org for it, keeping only those packages you use.

Use cases

Use private packages.

Use this if you want all the benefits of the npm package system in your company without sending all your code to the public, and you want to use your private packages just as easily as public ones.

See using private packages section for details.

Cache npmjs.org registry.

If you have more than one server you want to install packages on, you might want to use this to decrease latency (presumably "slow" npmjs.org will be connected to only once per package/version) and provide limited failover (if npmjs.org is down, we might still find something useful in the cache).

See using public packages section for details.

Override public packages.

If you want to use a modified version of some 3rd-party package (for example, you found a bug, but the maintainer hasn't accepted your pull request yet), you can publish your version locally under the same name.

See override public packages section for details.

Installation

# installation and starting (application will create default
# config in config.yaml you can edit later)
$ npm install -g sinopia
$ sinopia

# npm configuration
$ npm set registry http://localhost:4873/

# if you use HTTPS, add an appropriate CA information
# ("null" means get CA list from OS)
$ npm set ca null

Now you can navigate to http://localhost:4873/ where your local packages will be listed and can be searched.

Docker

A Sinopia docker image is available

Chef

A Sinopia Chef cookbook is available at Opscode community source: https://github.com/BarthV/sinopia-cookbook

Puppet

A Sinopia puppet module is available at puppet forge source: https://github.com/saheba/puppet-sinopia

Configuration

When you start a server, it auto-creates a config file.

Adding a new user

npm adduser --registry http://localhost:4873/

This will prompt you for user credentials which will be saved on the Sinopia server.

Using private packages

You can add users and manage which users can access which packages.

It is recommended that you define a prefix for your private packages, for example "local", so all your private things will look like this: local-foo. This way you can clearly separate public packages from private ones.

Using public packages from npmjs.org

If some package doesn't exist in the storage, the server will try to fetch it from npmjs.org. If npmjs.org is down, it serves packages from the cache, pretending that no other packages exist. Sinopia will download only what's needed (i.e. requested by clients), and this information will be cached, so if a client asks for the same thing a second time, it can be served without asking npmjs.org for it.

Example: if you successfully request express@3.0.1 from this server once, you'll be able to do that again (with all its dependencies) anytime, even if npmjs.org is down. But, say, express@3.0.0 will not be downloaded until it's actually needed by somebody. And if npmjs.org is offline, this server would say that only express@3.0.1 (i.e. only what's in the cache) is published, and nothing else.

Override public packages

If you want to use a modified version of some public package foo, you can just publish it to your local server, so when you type npm install foo, it'll install your version.

There are two options here:

You want to create a separate fork and stop synchronizing with public version.

If you want to do that, you should modify your configuration file so sinopia won't make requests regarding this package to npmjs anymore. Add a separate entry for this package to config.yaml, remove npmjs from the proxy_access list, and restart the server.

When you publish your package locally, you should probably start with a version string higher than the existing one, so it won't conflict with the existing package in the cache.

You want to temporarily use your version, but return to public one as soon as it's updated.

In order to avoid version conflicts, you should use a custom pre-release suffix of the next patch version. For example, if a public package has version 0.1.2, you can upload 0.1.3-my-temp-fix. This way your package will be used until its original maintainer updates his public package to 0.1.3.

Compatibility

Sinopia aims to support all features of a standard npm client that make sense to support in a private repository. Unfortunately, it isn't always possible.

Basic features:

  • Installing packages (npm install, npm upgrade, etc.) - supported
  • Publishing packages (npm publish) - supported

Advanced package control:

  • Unpublishing packages (npm unpublish) - supported
  • Tagging (npm tag) - not yet supported, should be soon
  • Deprecation (npm deprecate) - not supported

User management:

  • Registering new users (npm adduser {newuser}) - supported
  • Transferring ownership (npm owner add {user} {pkg}) - not supported, sinopia uses its own acl management system

Misc stuff:

  • Searching (npm search) - supported in the browser client but not command line
  • Starring (npm star, npm unstar) - not supported, doesn't make sense in private registry

Storage

No CouchDB here. This application is supposed to work with zero configuration, so the filesystem is used as storage.

If you want to use a database instead, ask for it and we'll come up with some kind of plugin system.

Similar existing things

  • npm + git (I mean, using git+ssh:// dependencies) - most people seem to use this, but it's a terrible idea... npm update doesn't work, can't use git subdirectories this way, etc.
  • reggie - this looks very interesting indeed... I might borrow some code there.
  • shadow-npm, public service - it uses the same code as npmjs.org + service is dead
  • gemfury and others - those are closed-source cloud services, and I'm not in a mood to trust my private code to somebody (security through obscurity yeah!)
  • npm-registry-proxy, npm-delegate, npm-proxy - those are just proxies...
  • Is there something else?

Download Details:

Author: rlidwka
Source Code: https://github.com/rlidwka/sinopia 

#javascript #npm #server 

Dexter Goodwin

Tink: A Dependency Unwinder for Javascript

tink  

tink is an experimental package manager for JavaScript. Don't expect to be able to use this with any of your existing projects.

IN DEVELOPMENT

This package is still in development. Do not use it for production. It is missing major features and the interface should be considered extremely unstable.

If you're feeling adventurous, though, read ahead...

Usage

$ npx tink

Features

  • (mostly) npm-compatible project installation

Contributing

The tink team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.

Acknowledgements

Big thanks to Szymon Lisowiec for donating the tink package name on npm! This package was previously an error logger helper tool, but now it's a package manager runtime!

Commands

A Note About These Docs

The commands documented below are not normative, and may not reflect the current state of tink development. They are being written separately from the code itself, and may be entirely missing, or named something different, or behave completely different. tink is still under heavy development and you should expect everything to change without notice.

$ tink shell [options] [arguments]

  • Aliases: tink sh, tish

Starts an interactive tink shell. If -e or -p options are used, the string passed to them will be executed as a single line and the shell will exit immediately. If [arguments] is provided, it should be one or more executable JavaScript files, which will be loaded serially.

The interactive tink shell will automatically generate a .package-map.json describing all expected dependency files, and will fetch and make available any missing or corrupted data, as it's required. tink overrides most of Node's fs API to virtually load node_modules off a centralized cache without ever linking or extracting to node_modules itself.

By default, tink shell will automatically install and add any missing or corrupted dependencies that are found during the loading process. To disable this feature, use the --production or --offline options.

To get a physical node_modules/ directory to interact with, see tink unwind.

$ tink prepare [options] [package...]

  • Aliases: tink prep

Preloads declared dependencies. You can use this to make sure that by the time you use tink shell, all declared dependencies will already be cached and available, so there won't be any execution delay from inline fetching and repairing. If anything is missing or corrupted, it will be automatically re-fetched.

If one or more packages are passed in, they should be the names of packages already in package.json, and only the listed packages will be preloaded, instead of preloading all of them. If you want to add a new dependency, use tink add instead, which will also prepare the new dependencies for you (so tink prepare isn't necessary after a tink add).

$ tink exec [options] <pkg> [--] [args...]

  • Aliases: tink x, tx

Like npx, but for tink. Runs any binaries directly through tink.

$ tink unwind [options] [package...]

  • Aliases: tink extract, tink frog, tink unroll

Unwinds the project's dependencies into physical files in node_modules/, instead of using the fs overrides to load them. This "unwound" mode can be used to directly patch dependencies (for example, when debugging or preparing to fork), or to enable compatibility with non-tink-related tools.

If one or more [package...] arguments are provided, the unwinding process will only apply to those dependencies and their dependencies. In this case, package must be a direct dependency of your toplevel package. You cannot selectively unwind transitive dependencies, but you can make it so they're the only ones that stick around when you go back to tink mode. See tink wind for the corresponding command.

If --production, --only=<prod|dev>, or --also=<prod|dev> options are passed in, they can be used to limit which dependency types get unwound.

By default, this command will leave any files that were already in node_modules/ intact, so your patches won't be clobbered. To do a full reset, or a specific reset on a file, remove the specific file or all of node_modules/ manually before calling tink unwind.

$ tink wind [options] [package...]

  • Aliases: tink roll, tink rewind, tink knit

Removes physical files from node_modules/ and configures a project to use "tink mode" for development -- a mode where dependency files are virtually loaded through fs API overrides off a central cache. This mode can greatly speed up install and start times, as well as conserve large amounts of space by sharing files (securely) across multiple projects.

If one or more [package...] arguments are provided, the wind-up process will only move the listed packages and any non-shared dependencies into the global cache to be served from there. Note that only direct dependencies can be requested this way -- there is no way to target specific transitive dependencies in tink wind, much like in tink unwind.

Any individual files in node_modules which do not match up with their standard hashes from their original packages will be left in place, unless the --wind-all option is used. For example, if you use tink unwind, then patch one of your dependencies with some console.log() calls, and you then do tink rewind, then the files you added console.log() to will remain in node_modules/, and be prioritized by tink when loading your dependencies. Any other files, including those for the same package, will be moved into the global cache and loaded from there as usual.

$ tink add [options] [spec...]

Downloads and installs each spec, which must be a valid dependency specifier parseable by npm-package-arg, and adds the newly installed dependency or dependencies to both package.json and package-lock.json, as well as updating .package-map.json as needed.

$ tink rm [options] [package...]

Removes each package, which should be a package name currently specified in package.json, from the current project's dependencies, updating package.json, package-lock.json, and .package-map.json as needed.

$ tink update [options] [spec...]

  • Aliases: tink up

Runs an interactive dependency update/upgrade UI where individual package updates can be selected. If one or more package arguments are passed in, the update prompts will be limited to packages in the tree matching those specifiers. The specifiers support full npm-package-arg specs and are used for matching existing dependencies, not the target versions to upgrade to.

If run outside of a TTY environment or if the --auto option is passed in, all dependencies, optionally limited to each named package, are updated to their maximum semver-compatible version, effectively simulating a fresh install of the project with the current declared package.json dependencies and no node_modules or package-lock.json present.

$ tink audit [options]

  • Aliases: tink odd, tink audi

Executes a full security scan of the project's dependencies, using the configured registry's audit service. --production, --only, and --also can be used to filter which dependency types are checked. --level can be used to specify the minimum vulnerability level that will make the command exit with a non-zero exit code (an error).

$ tink check-lock [options]

  • Aliases: tink lock

Verifies that package.json and package-lock.json are in sync. If --auto is specified, the inconsistency will be automatically corrected, using package.json as the source of truth.

$ tink check-licenses [options] [spec...]

By default, verifies that the current project has a valid "license" field, and that all dependencies (and transitive dependencies) have valid licenses configured.

If one or more spec arguments are provided, this behavior changes such that only the packages specified by the specs get verified according to current settings.

A list of detected licenses will be printed out. Use --json to get the licenses in a parseable format.

Additionally, two package.json fields can be used to further configure the license-checking behavior:

  • "blacklist": [licenses...] - Any detected licenses listed here will trigger an error for tink check-licenses. This takes precedence over "whitelist"
  • "whitelist": [licenses...] - Any detected licenses NOT listed in here will trigger an error.

$ tink lint [options]

  • Aliases: tink typecheck, tink type

Executes the configured lint and typecheck script(s) (in that order), or a default baseline linter will be used to report obvious syntax errors in the codebase's JavaScript.

$ tink build [options]

Executes the configured build script, if present, or executes silently.

$ tink clean [options]

Removes .package-map.json and executes the clean run-script, which should remove any artifacts generated by tink build.

$ tink test [options]

Executes the configured test run-script. Exits with an error code if no test script is configured.

$ tink check

Executes all verification-related scripts in the following sequence, grouping the output together into one big report:

  1. tink check-lock - verify that the package-lock.json and package.json are in sync, and that .package-map.json is up to date.
  2. tink audit - runs a security audit of the project's dependencies.
  3. tink check-licenses - verifies that the current project has a license configured, and that all dependencies have valid licenses, and that none of those licenses are blacklisted (or, if using a whitelist, that they are all in said whitelist -- see the tink check-licenses docs for details).
  4. tink lint - runs the configured linter, or a general, default linter that statically scans for syntax errors.
  5. tink build - if a build script is configured, the build will be executed to make sure it completes successfully -- otherwise, this step is skipped.
  6. tink test - runs the configured test suite. skipped if no tests configured, but a warning will be emitted.

The final report includes potential action items related to each step. Use --verbose to see more detailed output for each report.

$ tink publish [options] [tarball...]

Publishes the current package to the configured registry. The package will be turned into a tarball using tink pack, and the tarball will then be uploaded. This command will also print out a summary of tarball details, including the files that were included and the hashes for the tarball.

If One-Time-Passwords are configured on the registry and the terminal is a TTY, this command will prompt for an OTP token if --otp <token> is not used. If this happens outside of a TTY, the command will fail with an EOTP error.

Unlike npm publish, tink publish requires that package.json include a "files":[] array specifying which files will be included in the publish, otherwise the publish will fail with an error. .npmignore is obeyed, but does not remove the requirement for "files".

If --dry-run is used, all steps will be done, except the final data upload to the registry. Because the upload never happens, --dry-run can't be used to verify that publish credentials work.

If one or more tarball arguments are passed, they will be treated as npm-package-arg specifiers, fetched, and re-published. This is most useful with git repositories and local tarballs that have already been packaged up by tink pack

$ tink pack [options] [spec...]

Collects the current package into a tarball and writes it to ./<pkgname>-<pkgversion>.tgz. Also prints out a summary of tarball details, including the files that were included and the hashes for the tarball.

Unlike npm pack, tink pack requires that package.json include a "files":[] array specifying which files will be included in the publish, otherwise the publish will fail with an error. .npmignore is obeyed, but does not remove the requirement for "files".

If one or more spec arguments are passed, they will be treated as npm-package-arg specifiers, fetched, and their tarballed packages written to the current directory. This is most useful for fetching the tarballs of registry-hosted dependencies. For example: $ tink pack react@1.2.3 will write the tarball to ./react-1.2.3.tgz.

$ tink login

Use this command to log in to the current npm registry. This command may open a browser window.

$ tink logout

Use this command to remove any auth tokens for the current registry from your configuration.

Download Details:

Author: npm
Source Code: https://github.com/npm/tink 
License: View license

#javascript #npm 

Boilerplate for Npm Modules with ES6 Features & All The Best Practices

NPM Module Boilerplate 

NOTE: This setup is pretty old and outdated for 2022. I need to update it to use Microbundle. In the meanwhile, do yourself a favour and setup your lib with Microbundle directly (it's pretty simple and straightforward) instead of using the boilerplate code.

Start developing your NPM module in seconds

Readymade boilerplate setup with all the best practices to kick start your npm/node module development.

Happy hacking =)

Features

  • ES6/ESNext - Write ES6 code and Babel will transpile it to ES5 for backwards compatibility
  • Test - Mocha with Istanbul coverage
  • Lint - Preconfigured ESlint with Airbnb config
  • CI - TravisCI configuration setup
  • Minify - Built code will be minified for performance

Commands

  • npm run clean - Remove lib/ directory
  • npm test - Run tests with linting and coverage results.
  • npm run test:only - Run tests without linting or coverage.
  • npm run test:watch - You can even re-run tests on file changes!
  • npm run test:prod - Run tests with minified code.
  • npm run test:examples - Test written examples on pure JS for better understanding module usage.
  • npm run lint - Run ESlint with airbnb-config
  • npm run cover - Get coverage report for your code.
  • npm run build - Babel will transpile ES6 => ES5 and minify the code.
  • npm run prepublish - Hook for npm. Do all the checks before publishing your module.

Installation

Just clone this repo and remove .git folder.

Download Details:

Author: flexdinesh
Source Code: https://github.com/flexdinesh/npm-module-boilerplate 
License: MIT license

#javascript #npm #modules #boilerplate 

Billy Chandler

NPM Package for Custom Draggable React Slider using React Spring & GSAP

In this tutorial, we'll look at react-draggable-slider, an NPM package for a custom draggable React slider built with React Spring and GSAP.

Installation

npm install react-draggable-slider --save-dev

Demo

https://sanderdebr.github.io/react-draggable-slider/

Usage

Add the <Slider /> component with a sliderSettings object; the only required setting is data, an array of slider items.

import { Slider } from "react-draggable-slider";
import { projectList } from "./data";

function App() {
  const sliderSettings = {
    data: projectList,
    speed: 3000,
    easing: "elastic",
    bgColor: "rgba(255, 255, 255, 0.05)",
    buttonHref: "https://www.google.com",
    buttonTarget: "_self",
    buttonText: "View project",
    showButton: true,
  };
  return <Slider sliderSettings={sliderSettings} />;
}

Use the following structure for your slider items:

export const projectList = [
  {
    title: "Cutting Edge Project",
    image: "https://source.unsplash.com/collection/347317/",
    description: "Praesent quis congue nisi...",
  },
  {
    title: "Featured Artist 3D",
    image: "https://source.unsplash.com/collection/3573299/",
    description: "Duis at tellus vitae velit aliquet varius...",
  },
];

Note: react-draggable-slider works with both function and class-based components. However, since it internally uses hooks, it requires React 16.8+.

Props

The sliderSettings object passed to the <Slider sliderSettings={sliderSettings} /> component accepts the following properties:

  • data (array) - array of slider items; see the projectList example above for the structure you may use. Default: []
  • speed (number) - speed of sliding to the next item when dragged, in milliseconds. Default: 3000 (3 seconds)
  • easing (string) - one of 4 available GSAP easings to animate the sliding: "power", "back", "elastic", "expo". Default: ease
  • bgColor (string) - background-color of the whole slider; accepts HEX and RGB(A). Default: rgba(255, 255, 255, 0.05)
  • buttonText (string) - text inside the button per item. Default: View case study
  • showButton (boolean) - whether a button should be shown for all items. Default: true

Using

  • React Spring
  • GSAP
  • Styled Components

Download Details: 
Author: sanderdebr
Source Code: https://github.com/sanderdebr/react-draggable-slider 
License: MIT
#react #reactjs #npm #webdev

Dexter Goodwin

init-package-json: A Node Module to Get Your Node Module Started

init-package-json

A node module to get your node module started.

Usage

var init = require('init-package-json')
var path = require('path')

// a path to a promzard module.  In the event that this file is
// not found, one will be provided for you.
var initFile = path.resolve(process.env.HOME, '.npm-init')

// the dir where we're doin stuff.
var dir = process.cwd()

// extra stuff that gets put into the PromZard module's context.
// In npm, this is the resolved config object.  Exposed as 'config'
// Optional.
var configData = { some: 'extra stuff' }

// Any existing stuff from the package.json file is also exposed in the
// PromZard module as the `package` object.  There will also be three
// vars for:
// * `filename` path to the package.json file
// * `basename` the tip of the package dir
// * `dirname` the parent of the package dir

init(dir, initFile, configData, function (er, data) {
  // the data's already been written to {dir}/package.json
  // now you can do stuff with it
})

Or from the command line:

$ npm-init

See PromZard for details about what can go in the config file.

Download Details:

Author: npm
Source Code: https://github.com/npm/init-package-json 
License: View license

#javascript #json #npm 

Dexter Goodwin

Promzard: A Prompting JSON Thingie

promzard

A prompting wizard for building files from specialized PromZard modules. Used by npm init.

A reimplementation of @SubStack's prompter, which does not use AST traversal.

From another point of view, it's a reimplementation of @Marak's wizard which doesn't use schemas.

The goal is a nice drop-in enhancement for npm init.

Usage

var promzard = require('promzard')
promzard(inputFile, optionalContextAdditions, function (er, data) {
  // .. you know what you doing ..
})

In the inputFile you can have something like this:

var fs = require('fs')
module.exports = {
  "greeting": prompt("Who shall you greet?", "world", function (who) {
    return "Hello, " + who
  }),
  "filename": __filename,
  "directory": function (cb) {
    fs.readdir(__dirname, cb)
  }
}

When run, promzard will display the prompts and resolve the async functions in order, and then either give you an error, or the resolved data, ready to be dropped into a JSON file or some other place.

promzard(inputFile, ctx, callback)

The inputFile is just a node module. You can require() things, set module.exports, etc. Whatever that module exports is the result, and it is walked over to call any functions as described below.

The only caveat is that you must give PromZard the full absolute path to the module (you can get this via Node's require.resolve.) Also, the prompt function is injected into the context object, so watch out.

Whatever you put in that ctx will of course also be available in the module. You can get quite fancy with this, passing in existing configs and so on.

Class: promzard.PromZard(file, ctx)

Just like the promzard function, but the EventEmitter that makes it all happen. Emits either a data event with the data, or an error event if it blows up.

If error is emitted, then data never will be.
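
A minimal sketch of driving the class directly, based on the description above; the input file name and the ctx contents are hypothetical:

var path = require('path')
var promzard = require('promzard')

// `init-input.js` is a hypothetical PromZard input module.
var file = path.resolve(__dirname, 'init-input.js')
var ctx = { some: 'extra context' }

var pz = new promzard.PromZard(file, ctx)
pz.on('data', function (data) {
  // same result you'd get from the promzard(file, ctx, cb) form
  console.log(data)
})
pz.on('error', function (er) {
  console.error(er)
})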

prompt(...)

In the promzard input module, you can call the prompt function. This prompts the user to input some data. The arguments are interpreted based on type:

  1. string The first string encountered is the prompt. The second is the default value.
  2. function A transformer function which receives the data and returns something else. More than meets the eye.
  3. object The prompt member is the prompt, the default member is the default value, and the transform is the transformer.

Whatever the final value is, that's what will be put on the resulting object.
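
As a sketch, the three argument styles could look like this inside a PromZard input module (the export names, prompts and defaults are made up):

// 1. strings: the first is the prompt, the second the default value
exports.name = prompt('package name', 'my-pkg')

// 2. with a transformer function that reshapes the answer
exports.upperName = prompt('package name', 'my-pkg', function (answer) {
  return answer.toUpperCase()
})

// 3. object form: prompt, default and transform members
exports.license = prompt({
  prompt: 'license',
  default: 'ISC',
  transform: function (answer) { return answer.trim() }
})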

Functions

If there are any functions on the promzard input module's exports, then promzard will call each of them with a callback. This way, your module can do asynchronous actions if necessary to validate or ascertain whatever needs verification.

The functions are called in the context of the ctx object, and are given a single argument, which is a callback that should be called with either an error, or the result to assign to that spot.

In the async function, you can also call prompt() and return the result of the prompt in the callback.

For example, this works fine in a promzard module:

exports.asyncPrompt = function (cb) {
  fs.stat(someFile, function (er, st) {
    // if there's an error, no prompt, just error
    // otherwise prompt and use the actual file size as the default
    cb(er, prompt('file size', st.size))
  })
}

You can also return other async functions in the async function callback. Though that's a bit silly, it could be a handy way to reuse functionality in some cases.

Sync vs Async

The prompt() function is not synchronous, though it appears that way. It just returns a token that is swapped out when the data object is walked over asynchronously later.

For that reason, prompt() calls whose results don't end up on the data object are never shown to the user. For example, this will only prompt once:

exports.promptThreeTimes = prompt('prompt me once', 'shame on you')
exports.promptThreeTimes = prompt('prompt me twice', 'um....')
exports.promptThreeTimes = prompt('you cant prompt me again')

Isn't this exactly the sort of 'looks sync' that you said was bad about other libraries?

Yeah, sorta. I wouldn't use promzard for anything more complicated than a wizard that spits out prompts to set up a config file or something. Maybe there are other use cases I haven't considered.

Download Details:

Author: npm
Source Code: https://github.com/npm/promzard 
License: ISC license

#javascript #npm 

Lawrence Lesch

Cacache: Npm's Content-addressable Cache

cacache

cacache is a Node.js library for managing local key and content address caches. It's really fast, really good at concurrency, and it will never give you corrupted data, even if cache files get corrupted or manipulated.

On systems that support user and group settings on files, cacache will match the uid and gid values to the folder where the cache lives, even when running as root.

It was written to be used as npm's local cache, but can just as easily be used on its own.

Install

$ npm install --save cacache

Example

const cacache = require('cacache')
const fs = require('fs')

const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'

// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but skip the key index.
cacache.get.byDigest(cachePath, integrityHash).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})

Features

  • Extraction by key or by content address (shasum, etc)
  • Subresource Integrity web standard support
  • Multi-hash support - safely host sha1, sha512, etc, in a single cache
  • Automatic content deduplication
  • Fault tolerance (immune to corruption, partial writes, process races, etc)
  • Consistency guarantees on read and write (full data verification)
  • Lockless, high-concurrency cache access
  • Streaming support
  • Promise support
  • Fast -- sub-millisecond reads and writes including verification
  • Arbitrary metadata storage
  • Garbage collection and additional offline verification
  • Thorough test coverage
  • There's probably a bloom filter in there somewhere. Those are cool, right? 🤔

Contributing

The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.

All participants and maintainers in this project are expected to follow Code of Conduct, and just generally be excellent to each other.

Please refer to the Changelog for project history details, too.

Happy hacking!

API

> cacache.ls(cache) -> Promise<Object>

Lists info for all entries currently in the cache as a single large object. Each entry in the object will be keyed by the unique index key, with corresponding get.info objects as the values.

Example

cacache.ls(cachePath).then(console.log)
// Output
{
  'my-thing': {
    key: 'my-thing',
    integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
    path: '.testcache/content/deadbeef', // joined with `cachePath`
    time: 12345698490,
    size: 4023948,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    integrity: 'sha1-ANothER+hasH=',
    path: '.testcache/content/bada55',
    time: 11992309289,
    size: 111112
  }
}

> cacache.ls.stream(cache) -> Readable

Lists info for all entries currently in the cache as a single large object.

This works just like ls, except get.info entries are returned as 'data' events on the returned stream.

Example

cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha512-BaSe64HaSh',
  path: '.testcache/content/deadbeef', // joined with `cachePath`
  time: 12345698490,
  size: 13423,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

{
  key: 'other-thing',
  integrity: 'whirlpool-WoWSoMuchSupport',
  path: '.testcache/content/bada55',
  time: 11992309289,
  size: 498023984029
}

{
  ...
}

> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})

Returns an object with the cached data, digest, and metadata identified by key. The data property of this object will be a Buffer instance that presumably holds some data that means something to you. I'm sure you know what to do with it! cacache just won't care.

integrity is a Subresource Integrity string. That is, a string that can be used to verify data, which looks like <hash-algorithm>-<base64-integrity-hash>.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, the promise will be rejected.

A sub-function, get.byDigest may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version of the function only returns data itself, without any wrapper.

See: options

Note

This function loads the entire cache entry into memory before returning it. If you're dealing with Very Large data, consider using get.stream instead.

Example

// Look up by key
cache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  integrity: 'sha512-BaSe64HaSh',
  data: Buffer#<deadbeef>,
  size: 9320
}

// Look up by digest
cache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>

> cacache.get.stream(cache, key, [opts]) -> Readable

Returns a Readable Stream of the cached data identified by key.

If there is no content identified by key, or if the locally-stored data does not pass the validity checksum, an error will be emitted.

metadata and integrity events will be emitted before the stream closes, if you need to collect that extra data about the cached entry.

A sub-function, get.stream.byDigest may be used for identical behavior, except lookup will happen by integrity hash, bypassing the index entirely. This version does not emit the metadata and integrity events at all.

See: options

Example

// Look up by key
cache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('integrity', integrity => {
  console.log('integrity:', integrity)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='

// Look up by digest
cache.get.stream.byDigest(
  cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
  fs.createWriteStream('./x.tgz')
)

> cacache.get.info(cache, key) -> Promise

Looks up key in the cache index, returning information about the entry if one exists.

Fields

  • key - Key the entry was looked up under. Matches the key argument.
  • integrity - Subresource Integrity hash for the content this entry refers to.
  • path - Filesystem path where content is stored, joined with cache argument.
  • time - Timestamp the entry was first added on.
  • metadata - User-assigned metadata associated with the entry/content.

Example

cacache.get.info(cachePath, 'my-thing').then(console.log)

// Output
{
  key: 'my-thing',
  integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  size: 849234,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

> cacache.get.hasContent(cache, integrity) -> Promise

Looks up a Subresource Integrity hash in the cache. If content exists for this integrity, it will return an object, with the specific single integrity hash that was found in sri key, and the size of the found content as size. If no content exists for this integrity, it will return false.

Example

cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)

// Output
{
  sri: {
    source: 'sha256-MUSTVERIFY+ALL/THINGS==',
    algorithm: 'sha256',
    digest: 'MUSTVERIFY+ALL/THINGS==',
    options: []
  },
  size: 9001
}

cacache.get.hasContent(cachePath, 'sha521-NOT+IN/CACHE==').then(console.log)

// Output
false

Options

opts.integrity

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an EINTEGRITY error.

opts.memoize

Default: null

If explicitly truthy, cacache will read from memory and memoize data on bulk read. If false, cacache will read from disk data. Reader functions by default read from in-memory cache.

opts.size

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an EBADSIZE error.
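
For example, a read that bypasses the in-memory cache might look like this (reusing cachePath and the 'my-thing' key from the examples above):

// Force a disk read for this lookup instead of using memoized data.
cacache.get(cachePath, 'my-thing', { memoize: false }).then(({ data, metadata, integrity }) => {
  console.log(`read ${data.length} bytes, integrity ${integrity}`)
})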

> cacache.put(cache, key, data, [opts]) -> Promise

Inserts data passed to it into the cache. The returned Promise resolves with a digest (generated according to opts.algorithms) after the cache entry has been successfully written.

See: options

Example

fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
  console.log('integrity hash is', integrity)
})

> cacache.put.stream(cache, key, [opts]) -> Writable

Returns a Writable Stream that inserts data written to it into the cache. Emits an integrity event with the digest of written contents when it succeeds.

See: options

Example

request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('integrity', d => console.log(`integrity digest is ${d}`))
)

Options

opts.metadata

Arbitrary metadata to be attached to the inserted key.

opts.size

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an EBADSIZE error.

opts.integrity

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an EINTEGRITY error.

algorithms has no effect if this option is present.

opts.integrityEmitter

Streaming only. If present, uses the provided event emitter as a source of truth for both integrity and size. This allows use cases where integrity is already being calculated outside of cacache to reuse that data instead of calculating it a second time.

The emitter must emit both the 'integrity' and 'size' events.

NOTE: If this option is provided, you must verify that you receive the correct integrity value yourself and emit an 'error' event if there is a mismatch. ssri Integrity Streams do this for you when given an expected integrity.

opts.algorithms

Default: ['sha512']

Hashing algorithms to use when calculating the subresource integrity digest for inserted data. Can use any algorithm listed in crypto.getHashes() or 'omakase'/'お任せします' to pick a random hash algorithm on each insertion. You may also use any anagram of 'modnar' to use this feature.

Currently only supports one algorithm at a time (i.e., an array length of exactly 1). Has no effect if opts.integrity is present.

opts.memoize

Default: null

If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy.

If opts.memoize is an object or a Map-like (that is, an object with get and set methods), it will be written to instead of the global memoization cache.

Reading from disk data can be forced by explicitly passing memoize: false to the reader functions, but their default will be to read from memory.
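
A small sketch of the Map-like form; someData is a placeholder for whatever you are inserting:

// Memoize this insertion into a private Map instead of the global in-memory cache.
const memo = new Map()

cacache.put(cachePath, 'my-thing', someData, { memoize: memo }).then(integrity => {
  console.log(`stored with integrity ${integrity}`)
  console.log(`private memoization cache now holds ${memo.size} entries`)
})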

opts.tmpPrefix

Default: null

Prefix to append on the temporary directory name inside the cache's tmp dir.

> cacache.rm.all(cache) -> Promise

Clears the entire cache. Mainly by blowing away the cache directory itself.

Example

cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})

> cacache.rm.entry(cache, key, [opts]) -> Promise

Alias: cacache.rm

Removes the index entry for key. Content will still be accessible if requested directly by content address (get.stream.byDigest).

By default, this appends a new entry to the index with an integrity of null. If opts.removeFully is set to true then the index file itself will be physically deleted rather than appending a null.

To remove the content itself (which might still be used by other entries), use rm.content. Or, to safely vacuum any unused content, use verify.

Example

cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
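
A sketch of the removeFully variant described above, which deletes the index file instead of appending a null entry:

cacache.rm.entry(cachePath, 'my-thing', { removeFully: true }).then(() => {
  console.log('index entry physically removed')
})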

> cacache.rm.content(cache, integrity) -> Promise

Removes the content identified by integrity. Any index entries referring to it will not be usable again until the content is re-added to the cache with an identical digest.

Example

cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
  console.log('data for my-thing is gone!')
})

> cacache.index.compact(cache, key, matchFn, [opts]) -> Promise

Uses matchFn, which must be a synchronous function that accepts two entries and returns a boolean indicating whether or not the two entries match, to deduplicate all entries in the cache for the given key.

If opts.validateEntry is provided, it will be called as a function with a single index entry as its only parameter. The function must return a Boolean: if it returns true, the entry is considered valid and will be kept in the index; if it returns false, the entry will be removed from the index.

If opts.validateEntry is not provided, however, every entry in the index will be deduplicated and kept until the first null integrity is reached, removing all entries that were written before the null.

The deduplicated list of entries is both written to the index, replacing the existing content, and returned in the Promise.
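
A hedged sketch of a compaction call; treating two entries as duplicates when they share an integrity value is just one possible matchFn:

// Deduplicate index entries for 'my-thing' that point at the same content.
cacache.index.compact(cachePath, 'my-thing', (a, b) => a.integrity === b.integrity, {
  // Keep only entries that still reference some content.
  validateEntry: (entry) => entry.integrity !== null
}).then(entries => {
  console.log(`index now holds ${entries.length} entries for 'my-thing'`)
})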

> cacache.index.insert(cache, key, integrity, opts) -> Promise

Writes an index entry to the cache for the given key without writing content.

It is assumed if you are using this method, you have already stored the content some other way and you only wish to add a new index to that content. The metadata and size properties are read from opts and used as part of the index entry.

Returns a Promise resolving to the newly added entry.
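
A sketch, assuming the content behind the integrity hash was already stored by some other means (the key, hash and metadata here are placeholders):

cacache.index.insert(cachePath, 'my-alias', 'sha512-BaSe64HaSh', {
  metadata: { addedBy: 'some-other-tool' }, // stored on the new index entry
  size: 13423
}).then(entry => {
  console.log(`indexed ${entry.key} -> ${entry.integrity}`)
})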

> cacache.clearMemoized()

Completely resets the in-memory entry cache.

> tmp.mkdir(cache, opts) -> Promise<Path>

Returns a unique temporary directory inside the cache's tmp dir. This directory will use the same safe user assignment that all the other stuff use.

Once the directory is made, it's the user's responsibility that all files within are given the appropriate gid/uid ownership settings to match the rest of the cache. If not, you can ask cacache to do it for you by calling tmp.fix(), which will fix all tmp directory permissions.

If you want automatic cleanup of this directory, use tmp.withTmp()

See: options

Example

cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})

> tmp.fix(cache) -> Promise

Sets the uid and gid properties on all files and folders within the tmp folder to match the rest of the cache.

Use this after manually writing files into tmp.mkdir or tmp.withTmp.

Example

cacache.tmp.mkdir(cache).then(dir => {
  writeFile(path.join(dir, 'file'), someData).then(() => {
    // make sure we didn't just put a root-owned file in the cache
    cacache.tmp.fix().then(() => {
      // all uids and gids match now
    })
  })
})

> tmp.withTmp(cache, opts, cb) -> Promise

Creates a temporary directory with tmp.mkdir() and calls cb with it. The created temporary directory will be automatically removed once the promise returned by cb() resolves.

The same caveats apply when it comes to managing permissions for the tmp dir's contents.

See: options

Example

cacache.tmp.withTmp(cache, dir => {
  return fs.promises.writeFile(path.join(dir, 'blablabla'), Buffer.from([1, 2, 3, 4]))
}).then(() => {
  // `dir` no longer exists
})

Options

opts.tmpPrefix

Default: null

Prefix to use for the temporary directory name inside the cache's tmp dir.
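
A minimal sketch, assuming you want scratch space that is easy to spot inside the cache's tmp dir (the exact directory naming is an implementation detail):

cacache.tmp.withTmp(cache, { tmpPrefix: 'my-build' }, dir => {
  // dir is created under the cache's tmp dir with 'my-build' in its name
  return fs.promises.writeFile(path.join(dir, 'scratch.bin'), Buffer.from([1, 2, 3, 4]))
})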

Subresource Integrity Digests

For content verification and addressing, cacache uses strings following the Subresource Integrity spec. That is, any time cacache expects an integrity argument or option, it should be in the format <hashAlgorithm>-<base64-hash>.

One deviation from the current spec is that cacache will support any hash algorithms supported by the underlying Node.js process. You can use crypto.getHashes() to see which ones you can use.

Generating Digests Yourself

If you have an existing content shasum, it is generally formatted as a hexadecimal string (a sha1, for example, would look like: 5f5513f8822fdbe5145af33b64d8d970dcf95c6e). In order to be compatible with cacache, you'll need to convert it to the equivalent subresource integrity string. For this example, the corresponding hash would be: sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=.
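
One way to do that conversion (a minimal sketch, not an official cacache helper) is to decode the hex digest and re-encode it as base64:

const hex = '5f5513f8822fdbe5145af33b64d8d970dcf95c6e'
const sri = 'sha1-' + Buffer.from(hex, 'hex').toString('base64')
// sri === 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4='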

If you want to generate an integrity string yourself for existing data, you can use something like this:

const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'

const integrity = (
  hashAlgorithm +
  '-' +
  crypto.createHash(hashAlgorithm).update(data).digest('base64')
)

You can also use ssri to have a richer set of functionality around SRI strings, including generation, parsing, and translating from existing hex-formatted strings.
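
For instance, a minimal sketch assuming the ssri package is installed:

const ssri = require('ssri')

// equivalent to the manual crypto example above
const integrity = ssri.fromData('foobarbaz', { algorithms: ['sha512'] }).toString()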

> cacache.verify(cache, opts) -> Promise

Checks out and fixes up your cache:

  • Cleans up corrupted or invalid index entries.
  • Custom entry filtering options.
  • Garbage collects any content entries not referenced by the index.
  • Checks integrity for all content entries and removes invalid content.
  • Fixes cache ownership.
  • Removes the tmp directory in the cache and all its contents.

When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc.

Options

opts.concurrency

Default: 20

Maximum number of files read concurrently from the filesystem during cleanup.

opts.filter

Receives a formatted entry. Return false to remove it. Note: might be called more than once on the same entry.
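
A minimal sketch, assuming entries expose the key they were written under and you only want to keep a given key prefix:

cacache.verify(cachePath, {
  // entries whose key does not start with the prefix are removed during verification
  filter: entry => entry.key.startsWith('registry.npmjs.org/')
})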

opts.log

Custom logger object; during verification it is called like this:

  log: { silly () {} }
  log.silly('verify', 'verifying cache at', cache)

Example

# In a shell, corrupt a content file first:
echo somegarbage >> $CACHEPATH/content/deadbeef
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})

> cacache.verify.lastRun(cache) -> Promise

Returns a Date representing the last time cacache.verify was run on cache.

Example

cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})

Download Details:

Author: npm
Source Code: https://github.com/npm/cacache 
License: View license

#javascript #node #npm 

Reid Rohan

1660598700

Semantic-release/npm: Semantic-release Plugin to Publish A NPM Package

@semantic-release/npm

semantic-release plugin to publish an npm package.

Step             | Description
verifyConditions | Verify the presence of the NPM_TOKEN environment variable, or an .npmrc file, and verify the authentication method is valid.
prepare          | Update the package.json version and create the npm package tarball.
addChannel       | Add a release to a dist-tag.
publish          | Publish the npm package to the registry.

Install

$ npm install @semantic-release/npm -D

Usage

The plugin can be configured in the semantic-release configuration file:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/npm",
  ]
}

Configuration

Npm registry authentication

The npm authentication configuration is required and can be set via environment variables.

Both the token and the legacy (username, password and email) authentication are supported. It is recommended to use the token authentication. The legacy authentication is supported because the alternative npm registries Artifactory and npm-registry-couchapp only support that form of authentication.

Notes:

  • Only the auth-only level of npm two-factor authentication is supported; semantic-release will not work with the default auth-and-writes level.
  • The presence of an .npmrc file will override any specified environment variables.

Environment variables

Variable              | Description
NPM_TOKEN             | Npm token created via npm token create
NPM_USERNAME          | Npm username created via npm adduser or on npmjs.com
NPM_PASSWORD          | Password of the npm user
NPM_EMAIL             | Email address associated with the npm user
NPM_CONFIG_USERCONFIG | Path to a non-default .npmrc file

Use either NPM_TOKEN for token authentication, or NPM_USERNAME, NPM_PASSWORD and NPM_EMAIL for legacy authentication.

Options

Options    | Description                                                                                                    | Default
npmPublish | Whether to publish the npm package to the registry. If false, the package.json version will still be updated. | false if the package.json private property is true, true otherwise.
pkgRoot    | Directory path to publish.                                                                                     | .
tarballDir | Directory path in which to write the package tarball. If false, the tarball is not kept on the file system.   | false

Note: The pkgRoot directory must contain a package.json. The version will be updated only in the package.json and npm-shrinkwrap.json within the pkgRoot directory.

Note: If you use a shareable configuration that defines one of these options you can set it to false in your semantic-release configuration in order to use the default value.

Npm configuration

The plugin uses the npm CLI which will read the configuration from .npmrc. See npm config for the option list.

The registry can be configured via the npm environment variable NPM_CONFIG_REGISTRY and will take precedence over the configuration in .npmrc.

The registry and dist-tag can be configured in the package.json and will take precedence over the configuration in .npmrc and NPM_CONFIG_REGISTRY:

{
  "publishConfig": {
    "registry": "https://registry.npmjs.org/",
    "tag": "latest"
  }
}

Examples

The npmPublish and tarballDir options can be used to skip publishing to the npm registry and instead release the package tarball with another plugin. For example, with the @semantic-release/github plugin:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    ["@semantic-release/npm", {
      "npmPublish": false,
      "tarballDir": "dist",
    }],
    ["@semantic-release/github", {
      "assets": "dist/*.tgz"
    }]
  ]
}

When publishing from a sub-directory with the pkgRoot option, the package.json and npm-shrinkwrap.json updated with the new version can be moved to another directory with a postversion script. For example, with the @semantic-release/git plugin:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    ["@semantic-release/npm", {
      "pkgRoot": "dist",
    }],
    ["@semantic-release/git", {
      "assets": ["package.json", "npm-shrinkwrap.json"]
    }]
  ]
}

{
  "scripts": {
    "postversion": "cp -r package.json .. && cp -r npm-shrinkwrap.json .."
  }
}

Download Details:

Author: Semantic-release
Source Code: https://github.com/semantic-release/npm 
License: MIT license

#javascript #npm #registry #version 
