In the previous post, we talked about the Linux iptables firewall, and some people asked about authentication. Today, we will talk about the powerful framework Linux uses for authentication: Linux-PAM. PAM (Pluggable Authentication Modules) is the management layer that sits between Linux applications and the native Linux authentication system.
There are many programs on your system that use PAM modules like su, passwd, ssh, login, and other services. We will discuss some of them.
PAM’s main focus is to authenticate your users.
Authentication in Linux is done by matching the hash of the entered password against the hashed password stored in the /etc/shadow file.
We have many services on our systems that require authentication, such as SSH, FTP, TELNET, and IMAP. Without a common framework, we would have many authentication files besides the /etc/shadow file to maintain, and any inconsistent data between these files could become a serious problem.
Here comes PAM. Linux-PAM offers a unified login system for your services.
To check if your program uses Linux-PAM or not:
$ ldd /bin/su
You should see the libpam.so library in the output.
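For example, on a typical x86_64 system the output includes a line like the following (the exact path and version vary by distribution):
libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x00007f...)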
The configuration of Linux-PAM is in the directory /etc/pam.d/.
Some PAM modules require their own configuration files in addition to the PAM configuration. You can find these files in /etc/security.
If you misconfigure PAM, this could lead to serious problems.
PAM services
Any application that requires authentication can register with PAM using a service name.
You can list the Linux services that use Linux-PAM:
$ ls /etc/pam.d/
If you open any service file, you will see that the file is divided into three columns. The first column is the management group, the second column is for control flags, and the third column is the module (so file) used.
$ cat /etc/pam.d/sshd
account required pam_nologin.so
Here account is the management group, required is the control flag, and pam_nologin.so is the module used.
You may find a fourth column, which is for module parameters.
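For example, a line like the following passes the nullok option, which tells pam_unix to accept empty passwords, as a module parameter (shown here only as an illustration):
auth required pam_unix.so nullok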
There are four management groups you will see in PAM service files: auth, account, password, and session.
We also have four control flags in service files: required, requisite, sufficient, and optional.
The order is important because each module depends on the previous module on the stack.
If you try a configuration like the following for login:
auth sufficient pam_unix.so
auth required pam_deny.so
login works correctly: pam_unix succeeds and the sufficient flag ends the stack before pam_deny is reached. But what happens if we reverse the order?
auth required pam_deny.so
auth sufficient pam_unix.so
Now pam_deny runs first, its failure as a required module makes the whole stack fail, and no one can log in. So the order matters.
There are built-in PAM modules on your system that you should know about so you can use them effectively.
The pam_succeed_if module allows access only when certain account attributes match. You can validate user accounts like this:
auth required pam_succeed_if.so gid in 1000:2000
The above line states that only users whose primary group ID is 1000 or 2000 are allowed to log in.
You can use uid (the user ID) instead:
auth requisite pam_succeed_if.so uid >= 1000
In this example, any user whose ID is greater than or equal to 1000 can log in.
You can also use it with ingroup parameter like this:
auth required pam_succeed_if.so user ingroup mygroup
Only people in the group named mygroup can log in.
The pam_nologin module restricts logins when the /etc/nologin file exists: only root can log in, and other users are refused and shown the contents of that file.
auth required pam_nologin.so
Add this line to the login service file and create the /etc/nologin file, and only root will be able to log in.
You can use it with the auth and account management groups.
The pam_access module works like pam_succeed_if, except that pam_access also checks logins from networked hosts, while pam_succeed_if does not.
account required pam_access.so accessfile=/etc/security/access.conf
You can type your rules in the file like this:
/etc/security/access.conf
+:mygroup:ALL
-:ALL:ALL
The above rules state that members of mygroup are allowed to log in from anywhere, while everyone else is denied. The plus sign means allow and the minus sign means deny.
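As a further illustration (hypothetical group name and subnet), pam_access rules can also restrict the origin of the login, for example allowing mygroup only from a local network:
+:mygroup:192.168.1.0/24
-:ALL:ALL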
This module is used with auth, account, session, password management groups.
The pam_deny module is used for restricting access: it always returns a failure result.
You can use it at the end of your module stack to protect yourself from any misconfiguration.
If you use it at the beginning of the module stack, your service will be disabled:
auth required pam_deny.so
auth required pam_unix.so
And you can use it with auth, account, session, password management groups.
You can use this module to check the user's credentials against the /etc/shadow file.
auth required pam_unix.so
You will see this module used in many services in your system.
And you can use it with the auth, account, session, and password management groups.
You can use this module to check if the user is in /etc/passwd.
account sufficient pam_localuser.so
And you can use it with auth, session, password, account management groups.
Instead of checking the user's credentials against /etc/shadow, you can use a MySQL database as a backend with the pam_mysql module.
You can use it like this:
auth sufficient pam_mysql.so user=myuser passwd=mypassword host=localhost db=mydb table=users usercolumn=username passwdcolumn=password
Here we validate the user with the parameters for pam_mysql.
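A common pattern (a sketch reusing the same hypothetical database parameters) is to try the MySQL backend first and fall back to the standard /etc/shadow check:
auth sufficient pam_mysql.so user=myuser passwd=mypassword host=localhost db=mydb table=users usercolumn=username passwdcolumn=password
auth required pam_unix.so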
You can install it if it is not on your system. On Debian-based distributions, for example:
$ sudo apt-get install libpam-mysql
We use this module with auth, session, password, account management groups.
Strong passwords are a must these days, and the pam_cracklib module ensures that your users choose them.
password required pam_cracklib.so retry=4 minlen=12 difok=6
This example ensures that:
the minimum password length is 12 characters (minlen=12);
the user gets four attempts to pick a strong password before the module gives up (retry=4);
the new password must contain at least six characters that are not in the old password (difok=6).
You can use this module with the password management group.
The pam_rootok module succeeds only if the user ID is 0, that is, the user is root.
auth sufficient pam_rootok.so
It is typically used, as in the su service, so that root can perform an action without being asked for a password, and you can use it with the auth management group.
You can use this module to set limits on the system resources. It affects even root users.
The limits configuration is in the /etc/security/limits.d/ directory.
session required pam_limits.so
You can use this module to protect your system resources, and you can use it with the session management group.
The limits in /etc/security/limits.conf file could be hard or soft.
Hard: the ceiling for a resource; a normal user cannot raise it, but root can.
Soft: the effective limit; a normal user can change it, but only up to the hard limit.
The limits could be fsize, cpu, nproc, data, and many other limits.
@mygroup hard nproc 50
myuser hard cpu 5000
The first limit applies to members of mygroup and caps the number of processes for each of them at 50.
The second limit applies to the user named myuser and restricts CPU time to 5000 minutes.
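To illustrate the soft/hard distinction mentioned above (hypothetical values), you could give a user a default open-file limit that they can raise themselves up to a hard ceiling:
myuser soft nofile 4096
myuser hard nofile 8192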
You can edit any PAM service file in /etc/pam.d/ and use the module you want to protect your services the way you want.
Original article source at: https://likegeeks.com/
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is to export or print the PCB artwork to PDF, run Pdf2Gerb on the resulting PDF files, and then review the generated Gerber and drill files before sending them to a PCB manufacturer.
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
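#Worked example of the pixel/inch arithmetic used above (illustrative only):
#  adjustment_inches = adjustment_pixels / dpi
#  e.g. HOLE_ADJUST = -0.004 * 600 = -2.4 pixels, which shrinks holes by .004" at 600 dpi.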
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1/72 * .12 = 1/600 inch per pixel, i.e. 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
Background Fetch is a very simple plugin which attempts to awaken an app in the background about every 15 minutes, providing a short period of background running-time. This plugin will execute your provided callbackFn whenever a background-fetch event occurs.
There is no way to increase the rate at which a fetch-event occurs, and this plugin sets the rate to the most frequent possible; you will never receive an event faster than every 15 minutes. The operating system will automatically throttle the rate of background-fetch events based upon usage patterns, e.g. if the user hasn't turned on their phone for a long period of time, fetch events will occur less frequently, and if an iOS user disables Background App Refresh, they may not happen at all.
:new: Background Fetch now provides a scheduleTask method for scheduling arbitrary "one-shot" or periodic tasks.
⚠️ iOS notes: scheduleTask seems only to fire when the device is plugged into power, and there is no stopOnTerminate: false for iOS. See also the @config enableHeadless option below for handling fetch events after app termination.
⚠️ If you have a previous version of react-native-background-fetch < 2.7.0 installed into react-native >= 0.60, you should first unlink your previous version, as react-native link is no longer required.
$ react-native unlink react-native-background-fetch
yarn:
$ yarn add react-native-background-fetch
npm:
$ npm install --save react-native-background-fetch
For react-native >= 0.60, auto-linking is used and react-native link is no longer required; see the iOS and Android setup instructions in the repo.
ℹ️ This repo contains its own Example App. See /example
import React from 'react';
import {
SafeAreaView,
StyleSheet,
ScrollView,
View,
Text,
FlatList,
StatusBar,
} from 'react-native';
import {
Header,
Colors
} from 'react-native/Libraries/NewAppScreen';
import BackgroundFetch from "react-native-background-fetch";
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
events: []
};
}
componentDidMount() {
// Initialize BackgroundFetch ONLY ONCE when component mounts.
this.initBackgroundFetch();
}
async initBackgroundFetch() {
// BackgroundFetch event handler.
const onEvent = async (taskId) => {
console.log('[BackgroundFetch] task: ', taskId);
// Do your background work...
await this.addEvent(taskId);
// IMPORTANT: You must signal to the OS that your task is complete.
BackgroundFetch.finish(taskId);
}
// Timeout callback is executed when your Task has exceeded its allowed running-time.
// You must stop what you're doing immediately BackgroundFetch.finish(taskId)
const onTimeout = async (taskId) => {
console.warn('[BackgroundFetch] TIMEOUT task: ', taskId);
BackgroundFetch.finish(taskId);
}
// Initialize BackgroundFetch only once when component mounts.
let status = await BackgroundFetch.configure({minimumFetchInterval: 15}, onEvent, onTimeout);
console.log('[BackgroundFetch] configure status: ', status);
}
// Add a BackgroundFetch event to <FlatList>
addEvent(taskId) {
// Simulate a possibly long-running asynchronous task with a Promise.
return new Promise((resolve, reject) => {
this.setState(state => ({
events: [...state.events, {
taskId: taskId,
timestamp: (new Date()).toString()
}]
}));
resolve();
});
}
render() {
return (
<>
<StatusBar barStyle="dark-content" />
<SafeAreaView>
<ScrollView
contentInsetAdjustmentBehavior="automatic"
style={styles.scrollView}>
<Header />
<View style={styles.body}>
<View style={styles.sectionContainer}>
<Text style={styles.sectionTitle}>BackgroundFetch Demo</Text>
</View>
</View>
</ScrollView>
<View style={styles.sectionContainer}>
<FlatList
data={this.state.events}
renderItem={({item}) => (<Text>[{item.taskId}]: {item.timestamp}</Text>)}
keyExtractor={item => item.timestamp}
/>
</View>
</SafeAreaView>
</>
);
}
}
const styles = StyleSheet.create({
scrollView: {
backgroundColor: Colors.lighter,
},
body: {
backgroundColor: Colors.white,
},
sectionContainer: {
marginTop: 32,
paddingHorizontal: 24,
},
sectionTitle: {
fontSize: 24,
fontWeight: '600',
color: Colors.black,
},
sectionDescription: {
marginTop: 8,
fontSize: 18,
fontWeight: '400',
color: Colors.dark,
},
});
export default App;
In addition to the default background-fetch task defined by BackgroundFetch.configure, you may also execute your own arbitrary "oneshot" or periodic tasks (iOS requires additional Setup Instructions). However, all events will be fired into the Callback provided to BackgroundFetch#configure.
⚠️ iOS notes:
scheduleTask on iOS seems only to run when the device is plugged into power.
scheduleTask on iOS is designed for low-priority tasks, such as purging cache files; it tends to be unreliable for mission-critical tasks, and will never run as frequently as you want.
The default fetch event is much more reliable and fires far more often.
scheduleTask on iOS stops when the user terminates the app. There is no such thing as stopOnTerminate: false for iOS.
// Step 1: Configure BackgroundFetch as usual.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15
}, async (taskId) => { // <-- Event callback
// This is the fetch-event callback.
console.log("[BackgroundFetch] taskId: ", taskId);
// Use a switch statement to route task-handling.
switch (taskId) {
case 'com.foo.customtask':
print("Received custom task");
break;
default:
print("Default fetch task");
}
// Finish, providing received taskId.
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
// Step 2: Schedule a custom "oneshot" task "com.foo.customtask" to execute 5000ms from now.
BackgroundFetch.scheduleTask({
taskId: "com.foo.customtask",
forceAlarmManager: true,
delay: 5000 // <-- milliseconds
});
API Documentation
@param {Integer} minimumFetchInterval [15]
The minimum interval in minutes at which to execute background fetch events. Defaults to 15 minutes. Note: background-fetch events will never occur at a frequency higher than every 15 minutes. Apple uses a secret algorithm to adjust the frequency of fetch events, presumably based upon usage patterns of the app. Fetch events can occur less often than your configured minimumFetchInterval.
@param {Integer} delay (milliseconds)
ℹ️ Valid only for BackgroundFetch.scheduleTask. The minimum number of milliseconds in the future at which the task should execute.
@param {Boolean} periodic [false]
ℹ️ Valid only for BackgroundFetch.scheduleTask. Defaults to false. Set true to execute the task repeatedly; when false, the task will execute just once.
@config {Boolean} stopOnTerminate [true]
Set false to continue background-fetch events after the user terminates the app. Defaults to true.
@config {Boolean} startOnBoot [false]
Set true to initiate background-fetch events when the device is rebooted. Defaults to false.
❗ NOTE: startOnBoot requires stopOnTerminate: false.
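For example, a minimal sketch (reusing the configure signature shown earlier in this document) that combines these two options so events keep firing after app termination and device reboot on Android:
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15,
  stopOnTerminate: false,  // continue firing events after the user terminates the app
  startOnBoot: true        // requires stopOnTerminate: false, per the note above
}, async (taskId) => {     // <-- Event callback
  console.log("[BackgroundFetch] taskId: ", taskId);
  BackgroundFetch.finish(taskId);
}, async (taskId) => {     // <-- Task timeout callback
  BackgroundFetch.finish(taskId);
});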
@config {Boolean} forceAlarmManager [false]
By default, the plugin will use Android's JobScheduler when possible. The JobScheduler API prioritizes battery-life, throttling task-execution based upon device usage and battery level.
Configuring forceAlarmManager: true will bypass JobScheduler and use Android's older AlarmManager API, resulting in more accurate task-execution at the cost of higher battery usage.
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15,
forceAlarmManager: true
}, async (taskId) => { // <-- Event callback
console.log("[BackgroundFetch] taskId: ", taskId);
BackgroundFetch.finish(taskId);
}, async (taskId) => { // <-- Task timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
BackgroundFetch.finish(taskId);
});
.
.
.
// And with #scheduleTask
BackgroundFetch.scheduleTask({
taskId: 'com.foo.customtask',
delay: 5000, // milliseconds
forceAlarmManager: true,
periodic: false
});
@config {Boolean} enableHeadless [false]
Set true to enable React Native's Headless JS mechanism, for handling fetch events after app termination.
Register your headless task in your index.js (it MUST be in index.js):
import BackgroundFetch from "react-native-background-fetch";
let MyHeadlessTask = async (event) => {
// Get task id from event {}:
let taskId = event.taskId;
let isTimeout = event.timeout; // <-- true when your background-time has expired.
if (isTimeout) {
// This task has exceeded its allowed running-time.
// You must stop what you're doing immediately finish(taskId)
console.log('[BackgroundFetch] Headless TIMEOUT:', taskId);
BackgroundFetch.finish(taskId);
return;
}
console.log('[BackgroundFetch HeadlessTask] start: ', taskId);
// Perform an example HTTP request.
// Important: await asynchronous tasks when using HeadlessJS.
let response = await fetch('https://reactnative.dev/movies.json');
let responseJson = await response.json();
console.log('[BackgroundFetch HeadlessTask] response: ', responseJson);
// Required: Signal to native code that your task is complete.
// If you don't do this, your app could be terminated and/or assigned
// battery-blame for consuming too much time in background.
BackgroundFetch.finish(taskId);
}
// Register your BackgroundFetch HeadlessTask
BackgroundFetch.registerHeadlessTask(MyHeadlessTask);
@config {integer} requiredNetworkType [BackgroundFetch.NETWORK_TYPE_NONE]
Set a basic description of the kind of network your job requires.
If your job doesn't need a network connection, you don't need to use this option, as the default value is BackgroundFetch.NETWORK_TYPE_NONE. The available values are listed in the table below, followed by a short usage sketch.
NetworkType | Description |
---|---|
BackgroundFetch.NETWORK_TYPE_NONE | This job doesn't care about network constraints, either any or none. |
BackgroundFetch.NETWORK_TYPE_ANY | This job requires network connectivity. |
BackgroundFetch.NETWORK_TYPE_CELLULAR | This job requires network connectivity that is a cellular network. |
BackgroundFetch.NETWORK_TYPE_UNMETERED | This job requires network connectivity that is unmetered. Most WiFi networks are unmetered, as in "you can upload as much as you like". |
BackgroundFetch.NETWORK_TYPE_NOT_ROAMING | This job requires network connectivity that is not roaming (being outside the country of origin) |
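For example, a sketch (reusing the configure call shown above) that only fires fetch events when some network connection is available:
let status = await BackgroundFetch.configure({
  minimumFetchInterval: 15,
  requiredNetworkType: BackgroundFetch.NETWORK_TYPE_ANY  // require connectivity
}, async (taskId) => {  // <-- Event callback
  BackgroundFetch.finish(taskId);
}, async (taskId) => {  // <-- Task timeout callback
  BackgroundFetch.finish(taskId);
});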
@config {Boolean} requiresBatteryNotLow [false]
Specify that to run this job, the device's battery level must not be low.
This defaults to false. If true, the job will only run when the battery level is not low, which is generally the point where the user is given a "low battery" warning.
@config {Boolean} requiresStorageNotLow [false]
Specify that to run this job, the device's available storage must not be low.
This defaults to false. If true, the job will only run when the device is not in a low storage state, which is generally the point where the user is given a "low storage" warning.
@config {Boolean} requiresCharging [false]
Specify that to run this job, the device must be charging (or be a non-battery-powered device connected to permanent power, such as Android TV devices). This defaults to false.
@config {Boolean} requiresDeviceIdle [false]
When set true, ensure that this job will not run if the device is in active use.
The default state is false: that is, by default the job is runnable even when someone is interacting with the device.
This state is a loose definition provided by the system. In general, it means that the device is not currently being used interactively, and has not been in use for some time. As such, it is a good time to perform resource heavy jobs. Bear in mind that battery usage will still be attributed to your application, and shown to the user in battery stats.
Method Name | Arguments | Returns | Notes |
---|---|---|---|
configure | {FetchConfig} , callbackFn , timeoutFn | Promise<BackgroundFetchStatus> | Configures the plugin's callbackFn and timeoutFn . This callback will fire each time a background-fetch event occurs in addition to events from #scheduleTask . The timeoutFn will be called when the OS reports your task is nearing the end of its allowed background-time. |
scheduleTask | {TaskConfig} | Promise<boolean> | Executes a custom task. The task will be executed in the same Callback function provided to #configure . |
status | callbackFn | Promise<BackgroundFetchStatus> | Your callback will be executed with the current status (Integer) 0: Restricted , 1: Denied , 2: Available . These constants are defined as BackgroundFetch.STATUS_RESTRICTED , BackgroundFetch.STATUS_DENIED , BackgroundFetch.STATUS_AVAILABLE (NOTE: Android will always return STATUS_AVAILABLE ) |
finish | String taskId | Void | You MUST call this method in your callbackFn provided to #configure in order to signal to the OS that your task is complete. iOS provides only 30s of background-time for a fetch-event -- if you exceed this 30s, iOS will kill your app. |
start | none | Promise<BackgroundFetchStatus> | Start the background-fetch API. Your callbackFn provided to #configure will be executed each time a background-fetch event occurs. NOTE the #configure method automatically calls #start . You do not have to call this method after you #configure the plugin |
stop | [taskId:String] | Promise<boolean> | Stop the background-fetch API and all #scheduleTask from firing events. Your callbackFn provided to #configure will no longer be executed. If you provide an optional taskId , only that #scheduleTask will be stopped. |
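As a small sketch of the status and stop methods from the table above (callback-style status query, per the table's callbackFn argument):
BackgroundFetch.status((status) => {
  switch (status) {
    case BackgroundFetch.STATUS_RESTRICTED:
      console.log('BackgroundFetch restricted');
      break;
    case BackgroundFetch.STATUS_DENIED:
      console.log('BackgroundFetch denied');
      break;
    case BackgroundFetch.STATUS_AVAILABLE:
      console.log('BackgroundFetch is enabled');
      break;
  }
});
// Stop the plugin (and all scheduled tasks) from firing further events:
BackgroundFetch.stop();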
Debugging
Simulating fetch events with the BGTaskScheduler API (iOS 13+): run the app, then click the [||] button in Xcode to initiate a breakpoint. At the (lldb) prompt, paste the following command (Note: use cursor up/down keys to cycle through previously run commands):
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateLaunchForTaskWithIdentifier:@"com.transistorsoft.fetch"]
Click the [ > ] button to continue. The task will execute and the Callback function provided to BackgroundFetch.configure will receive the event.
The BGTaskScheduler API also supports simulated task-timeout events. To simulate a task-timeout, your fetchCallback must not call BackgroundFetch.finish(taskId):
let status = await BackgroundFetch.configure({
minimumFetchInterval: 15
}, async (taskId) => { // <-- Event callback.
// This is the task callback.
console.log("[BackgroundFetch] taskId", taskId);
//BackgroundFetch.finish(taskId); // <-- Disable .finish(taskId) when simulating an iOS task timeout
}, async (taskId) => { // <-- Event timeout callback
// This task has exceeded its allowed running-time.
// You must stop what you're doing and immediately .finish(taskId)
print("[BackgroundFetch] TIMEOUT taskId:", taskId);
BackgroundFetch.finish(taskId);
});
Then trigger the simulated timeout with:
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateExpirationForTaskWithIdentifier:@"com.transistorsoft.fetch"]
With the older BackgroundFetch API, simulate a fetch event in Xcode via Debug->Simulate Background Fetch.
Android: observe the plugin's logs with $ adb logcat:
$ adb logcat *:S ReactNative:V ReactNativeJS:V TSBackgroundFetch:V
Simulate a background-fetch event on a device running API level 21+ (insert <your.application.id>):
$ adb shell cmd jobscheduler run -f <your.application.id> 999
For devices running API level <21, simulate a "Headless JS" event with (insert <your.application.id>):
$ adb shell am broadcast -a <your.application.id>.event.BACKGROUND_FETCH
Download Details:
Author: transistorsoft
Source Code: https://github.com/transistorsoft/react-native-background-fetch
License: MIT license
This project is based on the need for a private message system for ging / social_stream. Instead of creating our core message system heavily dependent on our development, we are trying to implement a generic and potent messaging gem.
After looking for a good gem to use we noticed the lack of messaging gems and functionality in them. Mailboxer tries to fill this void, delivering a powerful and flexible message system. It supports conversations with two or more participants, sending notifications to recipients (intended to be used as system notifications: "Your picture has new comments", "John Doe has updated his document", etc.), and emailing the messageable model (if configured to do so). It has a complete implementation of a Mailbox object for each messageable, with an inbox, sentbox and trash.
The gem is constantly growing and improving its functionality. As it is used alongside our parallel development, ging / social_stream, we are finding and fixing bugs continuously. If you want some functionality that is not supported yet or is marked as TODO, you can create an issue to ask for it. It will be great feedback for us, and we will know what you may find useful in the gem.
Mailboxer was born from the great, but outdated, code from lpsergi / acts_as_messageable.
We are now working on exhaustive documentation and some wiki pages to make it even easier to use the gem to its full potential. Please give us some time if you find something missing, or ask for it. You can also find us in the Gitter room for this repo. Join us there to talk.
Add to your Gemfile:
gem 'mailboxer'
Then run:
$ bundle install
Run install script:
$ rails g mailboxer:install
And don't forget to migrate your database:
$ rake db:migrate
You can also generate email views:
$ rails g mailboxer:views
If upgrading from 0.11.0 to 0.12.0, run the following generators:
$ rails generate mailboxer:namespacing_compatibility
$ rails generate mailboxer:install -s
Then, migrate your database:
$ rake db:migrate
We are now adding support for sending emails when a Notification or a Message is sent to one or more recipients. You should modify the Mailboxer initializer (config/initializers/mailboxer.rb) to edit these settings:
Mailboxer.setup do |config|
#Enables or disables email sending for Notifications and Messages
config.uses_emails = true
#Configures the default `from` address for the email sent for Messages and Notifications of Mailboxer
config.default_from = "no-reply@dit.upm.es"
...
end
You can change the way in which emails are delivered by specifying a custom implementation of notification and message mailers:
Mailboxer.setup do |config|
config.notification_mailer = CustomNotificationMailer
config.message_mailer = CustomMessageMailer
...
end
If you have subclassed the Mailboxer::Notification class, you can specify the mailers using a member method:
class NewDocumentNotification < Mailboxer::Notification
def mailer_class
NewDocumentNotificationMailer
end
end
class NewCommentNotification < Mailboxer::Notification
def mailer_class
NewCommentNotificationMailer
end
end
Otherwise, the mailer class will be determined by appending 'Mailer' to the mailable class name.
Users must have an identity defined by a name and an email. We must ensure that Messageable models have some specific methods. These methods are:
#Returning any kind of identification you want for the model
def name
return "You should add method :name in your Messageable model"
end
#Returning the email address of the model if an email should be sent for this object (Message or Notification).
#If no mail has to be sent, return nil.
def mailboxer_email(object)
#Check if an email should be sent for that object
#if true
return "define_email@on_your.model"
#if false
#return nil
end
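As a minimal sketch, a hypothetical User model (assuming username and email columns) implementing both methods might look like the following; the acts_as_messageable call is explained in the next section:
class User < ActiveRecord::Base
  acts_as_messageable

  # Identification shown to other messageables (assumes a username column)
  def name
    username
  end

  # Return the address to email, or nil to skip emails (assumes an email column)
  def mailboxer_email(object)
    email
  end
end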
These names are explicit enough to avoid colliding with other methods, but if you need to change them you can do so in the Mailboxer initializer (config/initializers/mailboxer.rb). Just add or uncomment the following lines:
Mailboxer.setup do |config|
# ...
#Configures the methods needed by mailboxer
config.email_method = :mailboxer_email
config.name_method = :name
config.notify_method = :notify
# ...
end
You may change whatever you want or need. For example:
config.email_method = :notification_email
config.name_method = :display_name
config.notify_method = :notify_mailboxer
This will use the method notification_email(object) instead of mailboxer_email(object), display_name instead of name, and notify_mailboxer instead of notify.
Using default or custom method names, if your model doesn't implement them, Mailboxer will use dummy methods so as to notify you of missing methods rather than crashing.
In your model:
class User < ActiveRecord::Base
acts_as_messageable
end
You are not limited to the User model. You can use Mailboxer in any other model and use it in several different models. If you have ducks and cylons in your application and you want to exchange messages as if they were the same, just add acts_as_messageable
to each one and you will be able to send duck-duck, duck-cylon, cylon-duck and cylon-cylon messages. Of course, you can extend it for as many classes as you need.
Example:
class Duck < ActiveRecord::Base
acts_as_messageable
end
class Cylon < ActiveRecord::Base
acts_as_messageable
end
Version 0.8.0 renames Messageable#read and Messageable#unread to mark_as_(un)read, and Receipt#read and Receipt#unread to is_(un)read. This may break existing applications, but read is a reserved name for Active Record, and the best practice in this case is simply to avoid using it.
#alfa wants to send a message to beta
alfa.send_message(beta, "Body", "subject")
As a messageable, what you receive are receipts, which are associated with the message itself. You should retrieve your receipts for the conversation and get the message associated with them.
This is done this way because receipts save the information about the relation between messageable and the messages: is it read?, is it trashed?, etc.
#alfa gets the last conversation (chronologically, the first in the inbox)
conversation = alfa.mailbox.inbox.first
#alfa gets its receipts, chronologically ordered.
receipts = conversation.receipts_for alfa
#using the receipts (i.e. in the view)
receipts.each do |receipt|
...
message = receipt.message
unread = receipt.is_unread? #or message.is_unread?(alfa)
...
end
#alfa wants to reply to all in a conversation
#using a receipt
alfa.reply_to_all(receipt, "Reply body")
#using a conversation
alfa.reply_to_conversation(conversation, "Reply body")
#alfa wants to reply to the sender of a message (and ONLY the sender)
#using a receipt
alfa.reply_to_sender(receipt, "Reply body")
#delete conversations forever for one receipt (still in database)
receipt.mark_as_deleted
#you can mark conversation as deleted for one participant
conversation.mark_as_deleted participant
#Mark the object as deleted for messageable
#Object can be:
#* A Receipt
#* A Conversation
#* A Notification
#* A Message
#* An array with any of them
alfa.mark_as_deleted conversation
# get available messages for a specific user
conversation.messages_for(alfa)
#alfa wants to retrieve all his conversations
alfa.mailbox.conversations
#alfa wants to retrieve his inbox
alfa.mailbox.inbox
#alfa wants to retrieve his sent conversations
alfa.mailbox.sentbox
#alfa wants to retrieve his trashed conversations
alfa.mailbox.trash
You can use Kaminari to paginate the conversations as normal. Please make sure you use the latest version, as Mailboxer uses select('DISTINCT conversations.*'), which was not respected before Kaminari 0.12.4 according to its changelog. It works correctly on Kaminari 0.13.0.
#Paginating all conversations using :page parameter and 9 per page
conversations = alfa.mailbox.conversations.page(params[:page]).per(9)
#Paginating received conversations using :page parameter and 9 per page
conversations = alfa.mailbox.inbox.page(params[:page]).per(9)
#Paginating sent conversations using :page parameter and 9 per page
conversations = alfa.mailbox.sentbox.page(params[:page]).per(9)
#Paginating trashed conversations using :page parameter and 9 per page
conversations = alfa.mailbox.trash.page(params[:page]).per(9)
You can take a look at the full documentation for Mailboxer in rubydoc.info.
Thanks to Roman Kushnir (@RKushnir) you can test Mailboxer with this sample app.
If you need a GUI you should take a look at these links:
Author: mailboxer
Source code: https://github.com/mailboxer/mailboxer
License: MIT license
In this tutorial guide on removing or deleting directories and files in Linux, you will learn how to remove empty and non-empty directories from the command line, as well as how to remove files.
This guide shows you how to use the rm, unlink, and rmdir commands to remove or delete files and directories in Linux, with and without confirmation.
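For example (standard commands, shown here only as a quick illustration with hypothetical file and directory names):
$ rm file.txt        # remove a file
$ rm -i file.txt     # prompt for confirmation before removing
$ unlink file.txt    # remove a single file
$ rmdir emptydir     # remove an empty directory
$ rm -r mydir        # remove a directory and its contents recursively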
https://www.tutsmake.com/how-to-remove-directories-and-files-using-linux-command-line/