FreshRSS: A Free, Self-hostable Aggregator…

FreshRSS

FreshRSS is a self-hosted RSS feed aggregator like Leed or Kriss Feed.

It is lightweight, easy to work with, powerful, and customizable.

It is a multi-user application with an anonymous reading mode. It supports custom tags. There is an API for (mobile) clients, and a Command-Line Interface.
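
As an illustration, routine administration can be scripted from the command line; the script names below come from the project's cli/ folder, but exact names and flags vary by version, so treat this as a sketch:

# Illustrative only: check ./cli/ in your installation for the real scripts and options.
$ php ./cli/create-user.php --user alice --password 'secret'
$ php ./cli/actualize-user.php --user alice   # refresh alice's feeds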

Thanks to the WebSub standard (formerly PubSubHubbub), FreshRSS is able to receive instant push notifications from compatible sources, such as Mastodon, Friendica, WordPress, Blogger, FeedBurner, etc.

FreshRSS natively supports basic Web scraping, based on XPath, for Web sites not providing any RSS / Atom feed.
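
For example, scraping is configured per feed by supplying XPath expressions for the list of items and for each field; the selectors below are purely illustrative, for a hypothetical page layout:

XPath for news items:  //div[@class="post"]
XPath for item title:  descendant::h2
XPath for item link:   descendant::a/@href
XPath for item date:   descendant::time/@datetime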

Finally, it supports extensions for further tuning.

Feature requests, bug reports, and other contributions are welcome. The best way to contribute is to open an issue on GitHub. We are a friendly community.

Disclaimer

FreshRSS comes with absolutely no warranty.

FreshRSS screenshot

Documentation

Requirements

  • A recent browser like Firefox / IceCat, Edge, Chromium / Chrome, Opera, Safari.
    • Works on mobile (except a few features)
  • Light server running Linux or Windows
    • It even works on Raspberry Pi 1 with response time under a second (tested with 150 feeds, 22k articles)
  • A web server: Apache2 (recommended), nginx, lighttpd (not tested on others)
  • PHP 7.0+
  • MySQL 5.5.3+ or MariaDB equivalent, or SQLite 3.7.4+, or PostgreSQL 9.5+

Releases

The latest stable release can be found on the project's GitHub releases page. New versions are released every two to three months.

If you want a rolling release with the newest features, or want to help test or develop the next stable version, you can use the edge branch.

Installation

Automated install

  • Docker (see the example after this list)
  • YunoHost
  • Cloudron
  • PikaPods
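
For instance, a minimal Docker setup might look like this. The freshrss/freshrss image is the project's official one, but the port, timezone, and volume below are illustrative assumptions to adapt:

# Sketch only: adjust the port, timezone, and volume to your setup.
$ docker run -d --name freshrss \
    -p 8080:80 \
    -e TZ=Europe/Paris \
    -v freshrss_data:/var/www/FreshRSS/data \
    freshrss/freshrss
# FreshRSS should then answer at http://localhost:8080/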

Manual install

  1. Get FreshRSS with git or by downloading the archive
  2. Put the application somewhere on your server (expose only the ./p/ folder to the Web)
  3. Add write access to the ./data/ folder for the webserver user
  4. Access FreshRSS with your browser and follow the installation process
  5. Everything should be working :) If you encounter any problems, feel free to contact us.
  6. Advanced configuration settings can be found in config.default.php and modified in data/config.php.
  7. When using Apache, enable AllowEncodedSlashes for better compatibility with mobile clients (see the sketch below).
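
A condensed sketch of steps 1-3 and 7, assuming a Debian-style Apache setup; the target path and the www-data group are assumptions to adapt:

# Steps 1-3 (sketch; adapt paths and the web-server user to your system)
$ git clone https://github.com/FreshRSS/FreshRSS.git /usr/share/FreshRSS
$ cd /usr/share/FreshRSS
$ sudo chown -R :www-data . && sudo chmod -R g+r . && sudo chmod -R g+w ./data/

# Step 7: in the relevant Apache virtual host (sketch)
#   AllowEncodedSlashes On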

More detailed information about installation and server configuration can be found in our documentation.

Advice

  • For better security, expose only the ./p/ folder to the Web.
    • Be aware that the ./data/ folder contains all personal data, so it is a bad idea to expose it.
  • The ./constants.php file defines access to the application folder. If you want to customize your installation, look here first.
  • If you encounter any problem, logs are accessible from the interface or manually in ./data/users/*/log*.txt files.
    • The special folder ./data/users/_/ contains the part of the logs that are shared by all users.

FAQ

  • The date and time in the right-hand column is the date declared by the feed, not the time at which the article was received by FreshRSS, and it is not used for sorting.
    • In particular, when importing a new feed, all of its articles will appear at the top of the feed list regardless of their declared date.

Extensions

FreshRSS supports further customizations by adding extensions on top of its core functionality. See the repository dedicated to those extensions.

APIs & native apps

FreshRSS supports access from mobile / native apps on Linux, Android, iOS, Windows, and macOS, via two distinct APIs: the Google Reader API (the better choice) and the Fever API (fewer features and less efficient).
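
If memory serves, clients are pointed at the api/ subfolder of the FreshRSS installation; the host name below is a placeholder:

Google Reader API endpoint: https://freshrss.example.net/api/greader.php
Fever API endpoint:         https://freshrss.example.net/api/fever.php

In both cases, API access has to be enabled in the FreshRSS settings, and each user needs a separate API password.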

App | Platform | Free software / Maintained & developed | API | Works offline · Fast sync · Fetch more in individual views · Fetch read articles · Favourites · Labels · Podcasts · Manage feeds
News+ with Google Reader extension | Android | Partially / 2015 | GReader | ✔️ ⭐⭐⭐ ✔️ ✔️ ✔️ ✔️ ✔️ ✔️
FeedMe* | Android | ✔️ ✔️ | GReader | ✔️ ⭐⭐ ✔️ ✔️ ✔️
EasyRSS | Android | ✔️ ✔️ | GReader | Bug ⭐⭐ ✔️
Readrops | Android | ✔️ ✔️ ✔️ | GReader | ✔️ ⭐⭐⭐ ✔️
Fluent Reader Lite | Android, iOS | ✔️ ✔️ ✔️ | GReader, Fever | ✔️ ⭐⭐⭐
FocusReader | Android | ✔️ ✔️ | GReader | ✔️ ⭐⭐⭐ ✔️ ✔️
ChristopheHenry | Android | ✔️ Work in progress | GReader | ✔️ ⭐⭐ ✔️ ✔️
Fluent Reader | Windows, Linux, macOS | ✔️ ✔️ ✔️ | GReader, Fever | ✔️ ✔️
RSS Guard | Windows, GNU/Linux, macOS, OS/2 | ✔️ ✔️ ✔️ | GReader | ✔️ ⭐⭐ ✔️ ✔️ ✔️ ✔️
FeedReader | GNU/Linux | ✔️ 2020 | GReader | ✔️ ⭐⭐ ✔️ ✔️ ✔️ ✔️
NewsFlash | GNU/Linux | ✔️ ✔️ ✔️ | Fever, (GReader) | ⭐⭐ ✔️ ✔️ ✔️
Newsboat 2.24+ | GNU/Linux, macOS, FreeBSD | ✔️ ✔️ ✔️ | GReader | ✔️ ✔️ ✔️
Vienna RSS | macOS | ✔️ ✔️ ✔️ | GReader |
Reeder* | iOS, macOS | ✔️ ✔️ | GReader, Fever | ✔️ ⭐⭐⭐ ✔️ ✔️ ✔️
lire | iOS, macOS | ✔️ ✔️ | GReader |
Unread | iOS | ✔️ ✔️ | Fever | ✔️ ✔️
Fiery Feeds | iOS | ✔️ ✔️ | Fever |
Readkit | macOS | ✔️ ✔️ | Fever | ✔️
Netnewswire | iOS, macOS | ✔️ Work in progress | GReader | ✔️ ✔️ ✔️

* Install and enable the GReader Redate extension to have the correct publication date for feed articles if you are using Reeder 4 or FeedMe. (No longer required for Reeder 5)

Included libraries

Some additional libraries are included, used only for certain options or configurations.



Download Details:

Author: FreshRSS
Source Code: https://github.com/FreshRSS/FreshRSS 
License: AGPL-3.0 license

#php #aggregate 


PySpark Cheat Sheet: Spark in Python

This PySpark cheat sheet with code samples covers the basics like initializing Spark in Python, loading data, sorting, and repartitioning.

Apache Spark is generally known as a fast, general, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning, and graph processing. It allows you to speed up analytic applications by up to 100 times compared with other technologies on the market today. You can interface with Spark from Python through PySpark, the Spark Python API, which exposes the Spark programming model to Python.

Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough getting familiar with all the functions you can use to query, transform, and inspect your data. What's more, if you've never worked with another programming language, or if you're new to the field, it might be hard to tell the different RDD operations apart.

Let's face it: map() and flatMap() are different enough, but it might still come as a challenge to decide which one you really need when you're faced with them in your analysis. Or what about other functions, like reduce() and reduceByKey()?
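
As a quick refresher, here is an illustrative contrast, assuming a SparkContext named sc (created as in the Initializing Spark section below):

>>> pairs = sc.parallelize([('a', 1), ('b', 2), ('a', 3)])
>>> pairs.map(lambda kv: [kv[0]] * kv[1]).collect()     #map: one output element per input
[['a'], ['b', 'b'], ['a', 'a', 'a']]
>>> pairs.flatMap(lambda kv: [kv[0]] * kv[1]).collect() #flatMap: the lists are flattened
['a', 'b', 'b', 'a', 'a', 'a']
>>> pairs.values().reduce(lambda a, b: a + b)           #reduce: fold everything to one value
6
>>> pairs.reduceByKey(lambda a, b: a + b).collect()     #reduceByKey: merge values per key (order may vary)
[('a', 4), ('b', 2)]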

PySpark cheat sheet

Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.

This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all. You'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet. 

Note that the examples in the document take small data sets to illustrate the effect of specific functions on your data. In real life data analysis, you'll be using Spark to analyze big data.

PySpark is the Spark Python API that exposes the Spark programming model to Python.

Initializing Spark 

SparkContext 

>>> from pyspark import SparkContext
>>> sc = SparkContext(master = 'local[2]')

Inspect SparkContext 

>>> sc.version #Retrieve SparkContext version
>>> sc.pythonVer #Retrieve Python version
>>> sc.master #Master URL to connect to
>>> str(sc.sparkHome) #Path where Spark is installed on worker nodes
>>> str(sc.sparkUser()) #Retrieve name of the Spark User running SparkContext
>>> sc.appName #Return application name
>>> sc.applicationId #Retrieve application ID
>>> sc.defaultParallelism #Return default level of parallelism
>>> sc.defaultMinPartitions #Default minimum number of partitions for RDDs

Configuration 

>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
     .setMaster("local")
     .setAppName("My app")
     .set("spark.executor.memory", "1g"))
>>> sc = SparkContext(conf = conf)

Using the Shell 

In the PySpark shell, a special interpreter-aware SparkContext is already created in the variable called sc.

$ ./bin/spark-shell --master local[2]
$ ./bin/pyspark --master local[4] --py-files code.py

Set which master the context connects to with the --master argument, and add Python .zip, .egg, or .py files to the runtime path by passing a comma-separated list to --py-files.

Loading Data 

Parallelized Collections 

>>> rdd = sc.parallelize([('a',7),('a',2),('b',2)])
>>> rdd2 = sc.parallelize([('a',2),('d',1),('b',1)])
>>> rdd3 = sc.parallelize(range(100))
>>> rdd = sc.parallelize([("a",["x","y","z"]),
               ("b" ["p","r,"])])

External Data 

Read either one text file from HDFS, a local file system or any Hadoop-supported file system URI with textFile(), or read in a directory of text files with wholeTextFiles(). 

>>> textFile = sc.textFile("/my/directory/*.txt")
>>> textFile2 = sc.wholeTextFiles("/my/directory/")

Retrieving RDD Information 

Basic Information 

>>> rdd.getNumPartitions() #List the number of partitions
>>> rdd.count() #Count RDD instances
3
>>> rdd.countByKey() #Count RDD instances by key
defaultdict(<type 'int'>,{'a':2,'b':1})
>>> rdd.countByValue() #Count RDD instances by value
defaultdict(<type 'int'>,{('b',2):1,('a',2):1,('a',7):1})
>>> rdd.collectAsMap() #Return (key,value) pairs as a dictionary
{'a': 2, 'b': 2}
>>> rdd3.sum() #Sum of RDD elements
4950
>>> sc.parallelize([]).isEmpty() #Check whether RDD is empty
True

Summary 

>>> rdd3.max() #Maximum value of RDD elements 
99
>>> rdd3.min() #Minimum value of RDD elements
0
>>> rdd3.mean() #Mean value of RDD elements 
49.5
>>> rdd3.stdev() #Standard deviation of RDD elements 
28.866070047722118
>>> rdd3.variance() #Compute variance of RDD elements 
833.25
>>> rdd3.histogram(3) #Compute histogram by bins
([0,33,66,99],[33,33,34])
>>> rdd3.stats() #Summary statistics (count, mean, stdev, max & min)

Applying Functions 

#Apply a function to each RDD element
>>> rdd.map(lambda x: x+(x[1],x[0])).collect()
[('a', 7, 7, 'a'), ('a', 2, 2, 'a'), ('b', 2, 2, 'b')]
#Apply a function to each RDD element and flatten the result
>>> rdd5 = rdd.flatMap(lambda x: x+(x[1],x[0]))
>>> rdd5.collect()
['a', 7, 7, 'a', 'a', 2, 2, 'a', 'b', 2, 2, 'b']
#Apply a flatMap function to each (key,value) pair of rdd4 without changing the keys
>>> rdd4.flatMapValues(lambda x: x).collect()
[('a', 'x'), ('a', 'y'), ('a', 'z'), ('b', 'p'), ('b', 'r')]

Selecting Data

Getting

>>> rdd.collect() #Return a list with all RDD elements 
[('a', 7), ('a', 2), ('b', 2)]
>>> rdd.take(2) #Take first 2 RDD elements 
[('a', 7),  ('a', 2)]
>>> rdd.first() #Take first RDD element
('a', 7)
>>> rdd.top(2) #Take top 2 RDD elements 
[('b', 2), ('a', 7)]

Sampling

>>> rdd3.sample(False, 0.15, 81).collect() #Return sampled subset of rdd3
     [3,4,27,31,40,41,42,43,60,76,79,80,86,97]

Filtering

>>> rdd.filter(lambda x: "a" in x).collect() #Filter the RDD
[('a',7),('a',2)]
>>> rdd5.distinct().collect() #Return distinct RDD values
['a', 2, 'b', 7]
>>> rdd.keys().collect() #Return (key,value) RDD's keys
['a', 'a', 'b']

Iterating 

>>> def g(x): print(x)
>>> rdd.foreach(g) #Apply a function to all RDD elements
('a', 7)
('b', 2)
('a', 2)

Reshaping Data 

Reducing

>>> rdd.reduceByKey(lambda x,y: x+y).collect() #Merge the rdd values for each key
[('a',9),('b',2)]
>>> rdd.reduce(lambda a,b: a+b) #Merge the rdd values
('a', 7, 'a', 2, 'b', 2)

 

Grouping by

>>> rdd3.groupBy(lambda x: x % 2).mapValues(list).collect() #Return RDD of grouped values
>>> rdd.groupByKey().mapValues(list).collect() #Group rdd by key
[('a',[7,2]),('b',[2])]

Aggregating

>>> seqOp = (lambda x,y: (x[0]+y,x[1]+1))
>>> combOp = (lambda x,y:(x[0]+y[0],x[1]+y[1]))
#Aggregate RDD elements of each partition and then the results
>>> rdd3.aggregate((0,0),seqOp,combOp) 
(4950,100)
#Aggregate values of each RDD key
>>> rdd.aggregateByKey((0,0),seqOp,combOp).collect()
[('a',(9,2)), ('b',(2,1))]
#Aggregate the elements of each partition, and then the results
>>> from operator import add
>>> rdd3.fold(0,add)
4950
#Merge the values for each key
>>> rdd.foldByKey(0, add).collect()
[('a',9), ('b',2)]
#Create tuples of RDD elements by applying a function
>>> rdd3.keyBy(lambda x: x+x).collect()

Mathematical Operations 

>>> rdd.subtract(rdd2).collect() #Return each rdd value not contained in rdd2
[('b',2), ('a',7)]
#Return each (key,value) pair of rdd2 with no matching key in rdd
>>> rdd2.subtractByKey(rdd).collect()
[('d', 1)]
>>> rdd.cartesian(rdd2).collect() #Return the Cartesian product of rdd and rdd2

Sort 

>>> rdd2.sortBy(lambda x: x[1]).collect() #Sort RDD by given function
[('d',1),('b',1),('a',2)]
>>> rdd2.sortByKey().collect() #Sort (key,value) RDD by key
[('a',2), ('b',1), ('d',1)]

Repartitioning 

>>> rdd.repartition(4) #New RDD with 4 partitions
>>> rdd.coalesce(1) #Decrease the number of partitions in the RDD to 1

Saving 

>>> rdd.saveAsTextFile("rdd.txt")
>>> rdd.saveAsHadoopFile("hdfs://namenodehost/parent/child",
               'org.apache.hadoop.mapred.TextOutputFormat')

Stopping SparkContext 

>>> sc.stop()

Execution 

$ ./bin/spark-submit examples/src/main/python/pi.py


Original article source at https://www.datacamp.com

#pyspark #cheatsheet #spark #python