Lemmy: A Link Aggregator and Forum for The Fediverse


A link aggregator and forum for the fediverse. 

About The Project


Lemmy is similar to sites like Reddit, Lobste.rs, or Hacker News: you subscribe to forums you're interested in, post links and discussions, and vote and comment on them. Behind the scenes, it is very different: anyone can easily run a server, and all these servers are federated (think email) and connected to the same universe, called the Fediverse.

For a link aggregator, this means a user registered on one server can subscribe to forums on any other server, and can have discussions with users registered elsewhere.

It is an easily self-hostable, decentralized alternative to Reddit and other link aggregators, outside of their corporate control and meddling.

Each Lemmy server can set its own moderation policy, appointing site-wide admins and community moderators to keep out the trolls and foster a healthy, non-toxic environment where all can feel comfortable contributing.

Features


  • Open source, AGPL License.
  • Self hostable, easy to deploy.
  • Clean, mobile-friendly interface.
    • Only a minimum of a username and password is required to sign up!
    • User avatar support.
    • Live-updating comment threads.
    • Full vote scores (+/-) like old Reddit.
    • Themes, including light, dark, and solarized.
    • Emojis with autocomplete support. Start typing :
    • User tagging using @, Community tagging using !.
    • Integrated image uploading in both posts and comments.
    • A post can consist of a title and any combination of self text, a URL, or nothing else.
    • Notifications, on comment replies and when you're tagged.
      • Notifications can be sent via email.
      • Private messaging support.
    • i18n / internationalization support.
    • RSS / Atom feeds for All, Subscribed, Inbox, User, and Community.
  • Cross-posting support.
    • A similar post search when creating new posts. Great for question / answer communities.
  • Moderation abilities.
    • Public Moderation Logs.
    • Can sticky posts to the top of communities.
    • Both site admins, and community moderators, who can appoint other moderators.
    • Can lock, remove, and restore posts and comments.
    • Can ban and unban users from communities and the site.
    • Can transfer site and communities to others.
  • Can fully erase your data, replacing all posts and comments.
  • NSFW post / community support.
  • High performance.
    • Server is written in Rust.
    • Front end is ~80kB gzipped.
    • Supports arm64 / Raspberry Pi.



Support / Donate

Lemmy is free, open-source software, meaning no advertising, monetizing, or venture capital, ever. Your donations directly support full-time development of the project.


  • bitcoin: 1Hefs7miXS5ff5Ck5xvmjKjXf5242KzRtK
  • ethereum: 0x400c96c96acbC6E7B3B43B1dc1BB446540a88A01
  • monero: 41taVyY6e1xApqKyMVDRVxJ76sPkfZhALLTjRvVKpaAh2pBd4wv9RgYj1tSPrx8wc6iE1uWUfjtQdTmTy2FGMeChGVKPQuV
  • cardano: addr1q858t89l2ym6xmrugjs0af9cslfwvnvsh2xxp6x4dcez7pf5tushkp4wl7zxfhm2djp6gq60dk4cmc7seaza5p3slx0sakjutm



If you want to help with translating, take a look at Weblate. You can also help by translating the documentation.



Download Details:

Author: LemmyNet
Source Code: https://github.com/LemmyNet/lemmy 
License: AGPL-3.0 license

#chat #rust #reddit 

A Simple Tool to Retrieve Chat Messages from Livestreams and Videos

Chat Downloader

Chat Downloader is a simple tool used to retrieve chat messages from livestreams, videos, clips and past broadcasts. No authentication needed!


This tool is distributed on PyPI and can be installed with pip:

$ pip install chat-downloader

To update to the latest version, add the --upgrade flag to the above command.

Alternatively, the tool can be installed with git:

$ git clone https://github.com/xenova/chat-downloader.git
$ cd chat-downloader
$ python setup.py install


Command line

usage: chat_downloader [-h] [--version] [--start_time START_TIME]
                       [--end_time END_TIME]
                       [--message_types MESSAGE_TYPES | --message_groups MESSAGE_GROUPS]
                       [--max_attempts MAX_ATTEMPTS]
                       [--retry_timeout RETRY_TIMEOUT]
                       [--interruptible_retry [INTERRUPTIBLE_RETRY]]
                       [--max_messages MAX_MESSAGES]
                       [--inactivity_timeout INACTIVITY_TIMEOUT]
                       [--timeout TIMEOUT] [--format FORMAT]
                       [--format_file FORMAT_FILE] [--chat_type {live,top}]
                       [--ignore IGNORE]
                       [--message_receive_timeout MESSAGE_RECEIVE_TIMEOUT]
                       [--buffer_size BUFFER_SIZE] [--output OUTPUT]
                       [--overwrite [OVERWRITE]] [--sort_keys [SORT_KEYS]]
                       [--indent INDENT] [--pause_on_debug | --exit_on_debug]
                       [--logging {none,debug,info,warning,error,critical} | --testing | --verbose | --quiet]
                       [--cookies COOKIES] [--proxy PROXY]

For example, to save messages from a livestream to a JSON file, you can use:

$ chat_downloader https://www.youtube.com/watch?v=jfKfPfyJRdk --output chat.json

For a description of these options, as well as advanced command line use cases and examples, consult the Command Line Usage page.

Python

from chat_downloader import ChatDownloader

url = 'https://www.youtube.com/watch?v=jfKfPfyJRdk'
chat = ChatDownloader().get_chat(url)       # create a generator
for message in chat:                        # iterate over messages
    chat.print_formatted(message)           # print the formatted message

For advanced python use-cases and examples, consult the Python Documentation.

Chat Items

Chat items/messages are parsed into JSON objects (a.k.a. dictionaries) and should follow a format similar to this:

    "message_id": "xxxxxxxxxx",
    "message": "actual message goes here",
    "message_type": "text_message",
    "timestamp": 1613761152565924,
    "time_in_seconds": 1234.56,
    "time_text": "20:34",
    "author": {
        "id": "UCxxxxxxxxxxxxxxxxxxxxxxx",
        "name": "username_of_sender",
        "images": [
        "badges": [

For an extensive, documented list of included fields, consult the Chat Item Fields page.

Frequently Asked Questions

Coming soon


Found a bug or have a suggestion? File an issue here. To assist the developers in fixing the issue, please follow the issue template as closely as possible.


If you would like to help improve the tool, you'll find more information on contributing in our Contributing Guide.

Supported sites:

  • YouTube.com - Livestreams, past broadcasts and premieres.
  • Twitch.tv - Livestreams, past broadcasts and clips.
  • Zoom.us - Past broadcasts
  • Facebook.com (currently in development) - Livestreams and past broadcasts.

Download Details:

Author: xenova
Source Code: https://github.com/xenova/chat-downloader 
License: MIT license

#python #chat #youtube #twitch 


Kindle-gpt: AI Search & Chat on Your Kindle Highlights

Kindle GPT

AI search & chat on your Kindle highlights.

Supports .csv exporting of your embedded data.

Code is 100% open source.

Note: I recommend using it on desktop only.

How It Works

Export Kindle Notebook

In the Kindle App you can export your highlights as a notebook.

The notebook provides you with a .html file of your highlights.

Import & Parse Kindle Highlights

Import the .html file into the app.

It will parse the highlights and display them.

Generate Embeddings

After parsing is complete, the highlights are ready to be embedded.

Kindle GPT uses OpenAI Embeddings (text-embedding-ada-002) to generate embeddings for each highlight.

The embedded text is the chapter/section name + the highlighted text. I found this to be the best way to get the most relevant passages.
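As a sketch of that step (not the app's actual code), the embedding call in TypeScript looks roughly like the following; the request shape is the standard OpenAI embeddings API, and the input formatting mirrors the chapter-plus-highlight scheme described above:

// Embed one highlight: the input is the chapter/section name plus the
// highlighted text, per the scheme described above.
async function embedHighlight(chapter: string, highlight: string): Promise<number[]> {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-ada-002',
      input: `${chapter} ${highlight}`,
    }),
  });
  const json = await res.json();
  return json.data[0].embedding; // 1536-dimensional vector
}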

You will also receive a downloaded .csv file of your embedded notebook to use wherever you'd like - including for importing to Kindle GPT for later use.

Search Embedded Highlights

Now you can query your highlights using the search bar.

The first step is to compute the cosine similarity between your query and all of the highlights.

Then, the most relevant results are returned (maxing out at ~2k tokens, up to 10).
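For illustration, here is a minimal TypeScript sketch of that ranking step; the record shape and function names are assumptions, not the app's actual code:

// Hypothetical shape of an embedded highlight record.
interface EmbeddedHighlight {
  text: string;
  embedding: number[]; // vector from text-embedding-ada-002
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every highlight against the query embedding and keep the top ones;
// the real app additionally caps the results at roughly 2k tokens.
function topMatches(query: number[], highlights: EmbeddedHighlight[], limit = 10) {
  return highlights
    .map((h) => ({ ...h, score: cosineSimilarity(query, h.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}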

Create Prompt & Generate Answer

The results are used to create a prompt that feeds into GPT-3.5-turbo.

And finally, you get your answer!


All data is stored locally.

Kindle GPT doesn't use a database.

You can re-import any of your generated .csv files at any time to avoid having to re-embed your notebooks.

Running Locally

  • Set up OpenAI

You'll need an OpenAI API key to generate embeddings and perform chat completions.

  • Clone repo
git clone https://github.com/mckaywrigley/kindle-gpt.git
  • Install dependencies
npm i
  • Run app
npm run dev


If you have any questions, feel free to reach out to me on Twitter!

Download Details:

Author: mckaywrigley
Source Code: https://github.com/mckaywrigley/kindle-gpt 
License: MIT license

#typescript #gpt #chat #search 


Paul-graham-gpt: AI Search & Chat for All Of Paul Graham’s Essays

Paul Graham GPT

AI-powered search and chat for Paul Graham's essays.

All code & data used is 100% open-source.


The dataset is a CSV file containing all text & embeddings used.

Download it here.

I recommend getting familiar with fetching, cleaning, and storing data as outlined in the scraping and embedding scripts below, but feel free to skip those steps and just use the dataset.

How It Works

Paul Graham GPT provides 2 things:

  1. A search interface.
  2. A chat interface.


Search was created with OpenAI Embeddings (text-embedding-ada-002).

First, we loop over the essays and generate embeddings for each chunk of text.

Then, in the app, we take the user's search query, generate an embedding, and use the result to find the most similar passages from the essays.

The comparison is done using cosine similarity across our database of vectors.

Our database is a Postgres database with the pgvector extension hosted on Supabase.

Results are ranked by similarity score and returned to the user.
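As a sketch of that lookup (the repo's actual function names may differ), a Postgres function exposed through Supabase's RPC interface could be called from TypeScript like this; match_essay_chunks and its parameters are hypothetical names for illustration:

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// match_essay_chunks is a hypothetical SQL function that orders rows by
// pgvector's cosine distance against the query embedding.
async function searchEssays(queryEmbedding: number[], matchCount = 5) {
  const { data, error } = await supabase.rpc('match_essay_chunks', {
    query_embedding: queryEmbedding,
    match_count: matchCount,
  });
  if (error) throw error;
  return data; // passages ranked by similarity score
}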


Chat builds on top of search. It uses search results to create a prompt that is fed into GPT-3.5-turbo.

This allows for a chat-like experience where the user can ask questions about the essays and get answers.
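Conceptually, that step looks like the sketch below; the prompt wording is illustrative, while the endpoint and the gpt-3.5-turbo model name are the standard OpenAI chat completions API:

// Build a prompt from the top search results and ask gpt-3.5-turbo to
// answer using only those passages.
async function answer(question: string, passages: string[]): Promise<string> {
  const prompt =
    'Answer the question using only the passages below.\n\n' +
    passages.join('\n---\n') +
    `\n\nQuestion: ${question}`;

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}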

Running Locally

Here's a quick overview of how to run it locally.


  • Set up OpenAI

You'll need an OpenAI API key to generate embeddings.

  • Set up Supabase and create a database

Note: You don't have to use Supabase. Use whatever method you prefer to store your data. But I like Supabase and think it's easy to use.

There is a schema.sql file in the root of the repo that you can use to set up the database.

Run that in the SQL editor in Supabase as directed.

I recommend turning on Row Level Security and setting up a service role to use with the app.

Repo Setup

  • Clone repo
git clone https://github.com/mckaywrigley/paul-graham-gpt.git
  • Install dependencies
npm i
  • Set up environment variables

Create a .env.local file in the root of the repo with the following variables:




  • Run scraping script
npm run scrape

This scrapes all of the essays from Paul Graham's website and saves them to a JSON file.

  • Run embedding script
npm run embed

This reads the JSON file, generates embeddings for each chunk of text, and saves the results to your database.

There is a 200ms delay between each request to avoid rate limiting.

This process will take 20-30 minutes.


  • Run app
npm run dev


Thanks to Paul Graham for his writing.

I highly recommend you read his essays.

3 years ago they convinced me to learn to code, and it changed my life.


If you have any questions, feel free to reach out to me on Twitter!


I sacrificed composability for simplicity in the app.

Yes, you can make things more modular and reusable.

But I kept pretty much everything in the homepage component for the sake of simplicity.

Download Details:

Author: mckaywrigley
Source Code: https://github.com/mckaywrigley/paul-graham-gpt 
License: MIT license

#typescript #ai #gpt #search #chat 


Arpchat: Answering The Question Nobody Asked


so... you know arp? the protocol your computer uses to find the mac addresses of other computers on your network? yeah. that.

i thought it would be a great idea to hijack it to make a chat app :)

built in two days because i was sick and had nothing better to do.

screenshot of the tool in action


  1. once a year, i'm on a client isolated network that i want to chat with friends over
  2. i'm completely insane
  3. i'm a programmer

(i swear, i might actually briefly have a use for this! it might not be entirely useless! ... and other lies i tell myself)



things i made arpchat do

you can send messages tens of thousands of characters long because i implemented a (naive) generalizable transport protocol on top of arp. there's also a bit of compression.

if you wanted, you could probably split off the networking part of this and use it instead of udp. please don't do this.

not only are join and leave notifications a thing, i built an entire presence discovery and heartbeat system to see an updated list of other online users. ironically, part of this serves a similar purpose to arp itself.

for more information on how this all works technically, check out the little article i wrote.


if you actually want to install this for some reason, you can get it from the releases page.

on windows, you probably need npcap. make sure you check "Install Npcap in WinPcap API-compatible Mode" in the installer!

on linux, you might have to give arpchat network privileges:

sudo setcap CAP_NET_RAW+ep /path/to/arpchat

interface selector

then just run the binary in a terminal. you know it's working properly if you can see your own messages when you send them. if you can't see your messages, try selecting a different interface or protocol!

have any issues? that really sucks. you can make an issue if it pleases you.


you don't really want to build this. anyway, it's tested on the latest unstable rust.

on windows, download the WinPcap Developer's Pack and set the LIB environment variable to the WpdPack/Lib/x64/ folder.

cargo build


Download Details:

Author: kognise
Source Code: https://github.com/kognise/arpchat 
License: View license

#rust #chat 


LLaMA-chat: Chat with Meta's LLaMA Models At Home Made Easy

Chat with Meta's LLaMA models at home made easy

This repository is a chat example with LLaMA (arXiv) models running on a typical home PC. You will just need an NVIDIA video card and some RAM to chat with the model.

Examples of chats here


Share your best prompts, chats or generations here in this issue: https://github.com/randaller/llama-chat/issues/7

System requirements

  • Modern enough CPU
  • NVIDIA graphics card
  • 64 GB of RAM, or better 128 GB (192 or 256 would be perfect)

One may run with 32 GB of RAM, but inference will be slow (limited by the speed of reading your swap file).

I am running this on a 12700K with 128 GB of RAM, an NVIDIA 3070 Ti 8 GB, and a fast, large NVMe drive, getting one token from the 30B model every few seconds.

For example, the 30B model uses around 70 GB of RAM, the 13B model uses 48 GB, and the 7B model fits into 18 GB.

If you do not have a powerful video card, you may use another repo for CPU-only inference: https://github.com/randaller/llama-cpu

Conda Environment Setup Example for Windows 10+

Download and install Anaconda Python from https://www.anaconda.com, then run Anaconda Prompt:

conda create -n llama python=3.10
conda activate llama
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia


In a conda env with pytorch / cuda available, run

pip install -r requirements.txt

Then in this repository

pip install -e .

Download tokenizer and models




Prepare model

First, you need to unshard the model checkpoints into a single file. Let's do this for the 30B model.

python merge-weights.py --input_dir D:\Downloads\LLaMA --model_size 30B

In this example, D:\Downloads\LLaMA is the root folder of the downloaded torrent with the weights.

This will create a merged.pth file in the root folder of this repo.

Place this file, together with the model's corresponding (torrentroot)/30B/params.json, into the [/model] folder.

So you should end up with two files in the [/model] folder: merged.pth and params.json.

Place the (torrentroot)/tokenizer.model file into the [/tokenizer] folder of this repo. Now you are ready to go.

Run the chat

python example-chat.py ./model ./tokenizer/tokenizer.model

Generation parameters


Temperature is one of the key generation parameters, and you may wish to play with it. The higher the temperature, the more "creativity" the model will use; the lower the temperature, the less "creative" the model will be, but it will follow your prompt more closely.

Repetition penalty is a feature implemented by Shawn Presser. With it, the model is penalized when it is about to enter a repetition loop. Set this parameter to 1.0 if you wish to disable the feature.


By default, Meta provided us with the top_p sampler only. Shawn added an alternative top_k sampler, which (in my tests) performs pretty well. If you wish to switch to the top_k sampler, use the following parameters:

temperature: float = 0.7,
top_p: float = 0.0,
top_k: int = 40,
sampler: str = 'top_k',

Of course, you may play with all the values to get different outputs.

Launch examples

You may modify these hyperparameters directly in the code, but it is better to leave the defaults in place and set the parameters of your experiments on the launch line.

# Run with top_p sampler, with temperature 0.75, with top_p value 0.95, repetition penalty disabled
python example-chat.py ./model ./tokenizer/tokenizer.model 0.75 0.95 0 1.0 top_p

# Run with top_k sampler, with temperature 0.7, with top_k value 40, default repetition penalty value
python example-chat.py ./model ./tokenizer/tokenizer.model 0.7 0.0 40 1.17 top_k

Of course, this also applies to [python example.py] (see below).

Enable multi-line answers

If you wish to stop generation not at the "\n" character but at another signature, such as "User:" (which is also a good idea), or any other, make the following modification in llama/generation.py:


Here, -5 means removing the last 5 characters from the resulting context, which is the length of your stop signature ("User:" in this example).

Share the best with community

Share your best prompts and generations with others here: https://github.com/randaller/llama-chat/issues/7

Typical generation with prompt (not a chat)

Simply comment out three lines in llama/generation.py to turn it back into a plain generator.


python example.py ./model ./tokenizer/tokenizer.model

Confirming that 30B model is able to generate code and fix errors in code: https://github.com/randaller/llama-chat/issues/7

Confirming that 30B model is able to generate prompts for Stable Diffusion: https://github.com/randaller/llama-chat/issues/7#issuecomment-1463691554

Confirming that 7B and 30B model support Arduino IDE: https://github.com/randaller/llama-chat/issues/7#issuecomment-1464179944

This repo is heavily based on Meta's original repo: https://github.com/facebookresearch/llama

And on Steve Manuatu's repo: https://github.com/venuatu/llama

And on Shawn Presser's repo: https://github.com/shawwn/llama

Download Details:

Author: Randaller
Source Code: https://github.com/randaller/llama-chat 
License: GPL-3.0 license

#python #chat 


Deltachat-desktop: Email-based instant Messaging for Desktop


Desktop Application for delta.chat


The application can be downloaded from https://get.delta.chat. Here you'll find binary releases for all supported platforms. See below for platform specific instructions. If you run into any problems please consult the Troubleshooting section below.



The primary distribution-independent way to install is to use the flatpak build. It is maintained in its own repository; however, a pre-built binary can be downloaded and installed from Flathub, which also has a setup guide for many Linux platforms.

Arch Linux

WARNING: Currently the AUR package compiles from the latest master. This can be more recent than the latest release and may introduce new features, but also new bugs.

If you have an AUR helper like yay installed, you can install it by running yay -S deltachat-desktop-git and following the instructions in your terminal.

Otherwise you can still do it manually:

# Download the latest snapshot of the PKGBUILD
wget https://aur.archlinux.org/cgit/aur.git/snapshot/deltachat-desktop-git.tar.gz

# extract the archive and rm the archive file afterwards
tar xzfv deltachat-desktop-git.tar.gz && rm deltachat-desktop-git.tar.gz

# cd into extracted folder
cd deltachat-desktop-git

# build package
makepkg -si

# install package (you need to replace <version> with whatever version makepkg built)
sudo pacman -U deltachat-desktop-git-<version>.tar.xz

macOS


$ brew install --cask deltachat


Simply install the .dmg file as you do with all other software on Mac.

If you are getting an OpenSSL error message at the first start up you need to install OpenSSL.

$ brew install openssl

From Source

⚠ This is mostly for development purposes; it won't install or integrate deltachat into your system. So unless you know what you are doing, we recommend sticking to the methods above if possible.

# Get the code
$ git clone https://github.com/deltachat/deltachat-desktop.git
$ cd deltachat-desktop

# Install dependencies
$ npm install

# Build the app (only needed on the first time or if the code was changed)
$ npm run build

# Start the application:
$ npm start

For development with local deltachat-core read the docs


Troubleshooting

This module builds on top of deltachat-core-rust, which in turn has external dependencies. The instructions below assume a Linux system (e.g. Ubuntu 18.10).

If you get errors when running npm install, they might be related to the build dependency rust.

If rust or cargo is missing: Follow the instruction on https://rustup.rs/ to install rust and cargo.

Then try running npm install again.

Make sure that your Node.js version is 16.0.0 or newer.

If you still get errors, look at the instructions in the deltachat-node and deltachat-core-rust README files to set things up, or write an issue.

Configuration and Databases

The configuration files and database are stored at application-config's default file paths.

Each database is a SQLite file that represents the account for a given email address.

How to Contribute

Read docs/DEVELOPMENT.md

For translations see our transifex page: https://www.transifex.com/delta-chat/public/

For other ways to contribute: https://delta.chat/en/contribute


You can access the log folder and the current log file under the View->Developer menu.

Read docs/LOGGING.md for an explanation of our logging system (available options, log location, and information about the log format used).

Download Details:

Author: Deltachat
Source Code: https://github.com/deltachat/deltachat-desktop 
License: GPL-3.0 license

#typescript #electron #chat #email


Socket-io-typescript-chat: A Socket.io Chat Example Using TypeScript

A Socket.io Chat Example Using TypeScript

This repository contains server- and client-side code using the TypeScript language.

Running Server and Client locally


First, ensure you have the following installed:

  1. NodeJS - Download and Install latest version of Node: NodeJS
  2. Git - Download and Install Git
  3. Angular CLI - Install Command Line Interface for Angular https://cli.angular.io/

After that, use Git Bash to run all commands if you are on the Windows platform.

Clone repository

In order to start the project use:

$ git clone https://github.com/luixaviles/socket-io-typescript-chat.git
$ cd socket-io-typescript-chat

Run Server

To run the server locally, just install the dependencies and run the gulp task to create a build:

$ cd server
$ npm install -g gulp-cli
$ npm install
$ gulp build
$ npm start

The socket.io server will be running on port 8080

When you run npm start, this folder leverages nodemon, which will automatically reload the server after you make a change and save your TypeScript file. Along with nodemon, there is also a gulp watch task that you can run to reload the files, but it's not necessary and is provided merely as a teaching alternative.
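To see the core idea in isolation, a minimal Socket.IO relay server in TypeScript looks roughly like the sketch below (written against the current socket.io API; the 'message' event name is illustrative, not necessarily the one this repo uses):

import { Server } from 'socket.io';

// Minimal relay: every message a client sends is broadcast to all clients.
const io = new Server(8080, {
  cors: { origin: '*' }, // allow the Angular dev server on :4200 to connect
});

io.on('connection', (socket) => {
  console.log(`client connected: ${socket.id}`);

  socket.on('message', (msg) => {
    io.emit('message', msg); // broadcast to everyone, including the sender
  });

  socket.on('disconnect', () => {
    console.log(`client disconnected: ${socket.id}`);
  });
});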

Run Angular Client

Open other command line window and run following commands:

$ cd client
$ npm install
$ ng serve

Now open your browser at the following URL: http://localhost:4200

Server Deployment

Take a look at the Wiki page for more details about deploying on Heroku and Zeit.co.

Feel free to update that page and Readme if you add any other platform for deployment!


The Open Source community is awesome! If you're working on a fork with a different tech stack, please add a reference to your project here:

  • React + TypeScript + Material-UI client, by nilshartmann (in progress)

Blog Post

Read the blog post with details about this project: Real Time Apps with TypeScript: Integrating Web Sockets, Node & Angular

Live Demo

Try live demo: https://typescript-chat.firebaseapp.com

Support this project

  • Star GitHub repository :star:
  • Create pull requests, submit bugs or suggest new features
  • Follow updates on Twitter or Github

Download Details:

Author: luixaviles
Source Code: https://github.com/luixaviles/socket-io-typescript-chat 
License: MIT license

#typescript #socket #chat #nodejs 


A Realtime Chat App Plugin for Flutter

Flutter Realtime Chat App Plugin 📱💬

A Flutter plugin for building a realtime chat application. This plugin provides an easy-to-use API for developers to implement a chat feature into their Flutter app.

Features ✨

  • Realtime messaging: Send and receive messages in real-time 🚀
  • Push notifications: Receive push notifications for new messages 📩
  • User authentication: Authenticate users with your own backend service 🔒
  • Image support: Send and receive images in chats 📷
  • Group chat: Create group chats and chat with multiple users at once 👥
  • Typing indicators: See when users are typing in a chat ⌨️
  • Read receipts: See when a user has read a message 👀
  • Customizable UI: Customize the look and feel of the chat interface to match your app's branding 🎨


`flutter pub add chat_app_plugin`

Database Functions

inituserdatawithphoto(String uid, String name, String email, String photo)

Future inituserdatawithoutphoto(String uid, String name, String email)

Future getuserdata(String email)

finduser(String email)


Future addgroup(String uid, String name, String groupname, String groupicon)

Future addgroupwithouticon(String uid, String name, String groupname)

Future addreport(String uid, String uidofusertoreport, String messagetoreport,String date)

getgroupchats(String groupid)

getchatchats(String chatid)

getgroupmembers(String groupid)

Future<bool> isjoined(String uid, String groupid, String groupname)

Future leavegroup(String uid, String groupid, String groupname)

addgroupchat(String groupid, Map<String, dynamic> chatmessage)

addchat(String uid1, String firstusername, String uid2, String secondusername,Map<String, dynamic> chatmessage)

startnewchat(String uid, String uid2, Map<String, dynamic> chatmessage)

addnewchatmessage(String chatid, Map<String, dynamic> chat)

Future withphotoregisterwithemailpassword( String email, String password, String name, String photo)

Future<String?> tokenwithphotoregisterwithemailpassword( String email, String password, String name, String photo)

Future withoutphotoregisterwithemailpassword( String email, String password, String name)

Future<String?> tokenwithoutphotoregisterwithemailpassword( String email, String password, String name)

Future customregister( String email, String profilephoto, String name, String uid)

Future loginwithemailandpassword(String email, String password)

  • Future<String?> tokenloginwithemailpassword(String email, password)
  • Future<String?> tokenloginwithphonenumber(String phonenumber)
  • Future loginwithphonenumber(String phonenumber)
  • Future signout()
  • Future sendforgotpassword(String email)
  • Future addgroup(String uid, String adminname, String groupname, String groupicon)
  • Future addgroupwithouticon( String uid, String adminname, String groupname)

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add chat_app_plugin

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

  chat_app_plugin: ^0.0.2

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:chat_app_plugin/chat_app_plugin.dart'; 


import 'package:chat_app_plugin_example/Screens/Register.dart';
import 'package:chat_app_plugin_example/User.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/material.dart';

import 'apis.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized(); // required before initializing Firebase in main
  await Firebase.initializeApp(
      name: "Chatapp",
      options: const FirebaseOptions(
          apiKey: apiss.akey,
          appId: apiss.appId,
          messagingSenderId: apiss.messagesender,
          projectId: apiss.projectid));
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  // Users users = Users(dp: "", email: "", password: "", uid: "");
  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: Register(),
    );
  }
}

Download Details:

Author: author-sanjay

Source Code: https://github.com/author-sanjay/Realtime-chat-plugin

#flutter #chat 


A Chat Widgets Library for Flutter Applications


This library contains the necessary widgets for a chat application UI. All the components are customizable.


  • Chat Dialogs Widget
  • Message Bar Widget
  • Highly Customizable

Getting started

To use this widget there are no special requirements. If you have Flutter installed, you can start using it directly.

How to Use

Add This Library:

Add this line to your dependencies:

flutter_chat_widget: ^0.0.3

Then you just have to import the package with

import 'package:flutter_chat_widget/recieved_message_widget.dart';
import 'package:flutter_chat_widget/sent_message_widget.dart';
import 'package:flutter_chat_widget/message_bar_widget.dart';

Create Chat Bubbles

//for sent messages (widget name inferred from the package's sent_message_widget.dart import)
SentMessageWidget(
  message: item.text,
  background: Colors.blueAccent,
  textColor: Colors.white,
)

//for received messages (widget name inferred from recieved_message_widget.dart; the spelling follows the package)
RecievedMessageWidget(
  message: item.text,
  background: Colors.black12,
  textColor: Colors.black,
)

Create Message Bar

MessageBar(onCLicked: (text) {
  // send data to server
});


This is an initial release of the package. If you find any issue, please let me know and I will fix it accordingly.

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add flutter_chat_widget

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

  flutter_chat_widget: ^0.0.3

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:flutter_chat_widget/flutter_chat_widget.dart'; 

Download Details:

Author: evanemran

Source Code: https://github.com/evanemran/flutter_chat_widget

#flutter #widget #chat 


GHChat: A Chat Application for GitHub

ghChat (React version)

I hope this project can be a chat tool for GitHub, so I will try to integrate it with GitHub. At present, it only supports logging in with GitHub authorization and viewing GitHub users' public information in ghChat. You can create a group in ghChat for your GitHub project and post the group link in the readme to make it easy for users to communicate.

If you have any ideas about integration, feel free to create issues with feature suggestions or bug reports, or send pull requests.

What technology does ghChat use?

Front end: React + Redux + React Router + axios + SCSS. Back end: Node (Koa2) + MySQL + JWT (JSON Web Token). socket.io is used to exchange messages. For other technologies, see the package.json file.


It is suggested to enable PWA: How to turn on PWA in Chrome?

Features && Progress

Account system

  •  Log in
  •  Register
  •  Log out
  •  Log in on multiple devices at the same time

Integrate with GitHub

  •  Log in with GitHub authorization
  •  Show GitHub users' public information


  •  Basic UI components: modal, notification...
  •  Responsive layout.

Private chat

  •  Chat with my contacts
  •  Add contact
  •  Contact information card
  •  Delete contact

Group chat

  •  Chat together in a group
  •  Create a group
  •  Join a group
  •  Group information view, including group members, group notice, group name...
  •  Quit the group
  •  Edit group information
  •  Prompt when someone joins the group


  •  Fuzzy search for users and groups, locally or online

Rich chat mode

  •  Chat list sorted by time
  •  Send photo
  •  Send emoji
  •  Send file
  •  Download file
  •  Press enter key to send message
  •  @somebody
  •  Full view photo
  •  Send photos from the clipboard
  •  Share users/groups internally or externally
  •  Markdown
  •  Quote

Message notification

  •  Browser notification
  •  Browser notification switch
  •  Show the number of unread messages in the chat list
  •  Unread message counts still show accurately when the user refreshes, reopens the page, or logs in again (with different accounts)


  •  Enable gzip compression for static resources
  •  Lazy-load chat messages: fetch twenty messages at a time in every chat
  •  Lazy-load components
  •  API request frequency limit
  •  Split chunks in the build files
  •  SQL optimization


  •  Robot smart reply (supports Chinese only)
  •  Add SSL for the website
  •  PWA
  •  Rewrite back-end code with TS
  •  Multilingual solution with i18n
  •  Encapsulate back-end code as an SDK
  •  CI/CD


use in development

  • Clone the project code
git clone https://github.com/aermin/ghChat.git
  • Install npm modules for the front end
cd ghChat
npm i
  • Install npm modules for the back end
cd ghChat/server
npm i
  • Initialize the DB
// You should create a local MySQL DB named ghchat
// DB configuration follows 'ghChat/server/src/configs/configs.dev.ts'

npm run init_sql // then check if it inits successfully

PS: if you want to log in with GitHub authorization and use the Qiniu CDN (which provides storage for sending photos and files), you should configure them following the file ghChat/server/src/configs/configs.dev.ts. The defaults won't work.

  • Run the back-end and front-end code
npm run start
cd ..
npm run start

use in production

Prerequisite: please create a secrets.ts file for configuration inside the ghChat/server/ folder

export default {
  port: '3000', // server port
  dbConnection: {
    host: '', // database IP
    port: 3306, // database port
    database: 'ghchat', // database name
    user: '', // database user
    password: '', // database password
  },
  client_secret: '', // client_secret of GitHub authorization: GitHub -> Settings -> Developer settings
  jwt_secret: '', // secret of JSON web token
  qiniu: { // Qiniu CDN configuration
    accessKey: '',
    secretKey: '',
    bucket: '',
  },
  robot_key: '', // key of the robot chat API => if you want to use robot chat, please apply for this key from http://www.tuling123.com/
};

1. Build the front-end code

cd src
npm run build:prod

2. Build the server code

cd server
npm run build:prod

3. Put the folders (build, dist) built in steps 1 and 2 onto your server, and run the dist/index.js file (you can copy ghChat/server/package.json to your server as well and run the command npm run start:prod)


GitHub address

Project online address (also this project's group address); it supports logging in with GitHub authorization.

Feel free to contact me via this link.


Download Details:

Author: aermin
Source Code: https://github.com/aermin/ghChat 
License: MIT license

#typescript #react #redux #nodejs #mysql #chat #jwt 


Stream-chat-swift: iOS Chat SDK in Swift


iOS Chat SDK in Swift - Build your own app chat experience for iOS using the official Stream Chat API

This is the official iOS SDK for Stream Chat, a service for building chat and messaging applications. This library includes both a low-level SDK and a set of reusable UI components.

Low Level Client (LLC)

The StreamChat SDK is a low-level client for the Stream chat service that doesn't contain any UI components. It is meant to be used when you want to build a fully custom UI. For the majority of use cases, though, we recommend using our highly customizable UI SDKs.


The StreamChatUI SDK is our UI SDK for UIKit components. If your application needs to support iOS 13 and below, this is the right UI SDK for you.


The StreamChatSwiftUI SDK is our UI SDK for SwiftUI components. If your application only needs to support iOS 14 and above, this is the right UI SDK for you. This SDK is available in another repository stream-chat-swiftui.

iOS 16 and Xcode 14 support

Since the 4.20.0 release, our SDKs can be built using Xcode 14. Currently, there are no known issues on iOS 16. If you spot one, please create a ticket.

Main Features

  • Offline support: Browse channels and send messages while offline.
  • Familiar behavior: The UI elements are good platform citizens and behave like native elements; they respect tintColor, layoutMargins, light/dark mode, dynamic font sizes, etc.
  • Swift native API: Uses Swift's powerful language features to make the SDK usage easy and type-safe.
  • Uses UIKit patterns and paradigms: The API follows the design of native system SDKs. It makes integration with your existing code easy and familiar.
  • SwiftUI support: We have developed a brand new SDK to help you have smoother Stream Chat integration in your SwiftUI apps.
  • First-class support for Combine: The StreamChat SDK (Low Level Client) has Combine wrappers to make it really easy to use in an app that uses Combine.
  • Fully open-source implementation: You have access to the complete source code of the SDK here on GitHub.
  • Supports iOS 11+: We proudly support older versions of iOS, so your app can stay available to almost everyone.

Quick Links

  • iOS/Swift Chat Tutorial: Learn how to use the SDK by following our simple tutorial with UIKit (or SwiftUI).
  • Register: Register to get an API key for Stream Chat.
  • Installation: Learn more about how to install the SDK using CocoaPods, SPM or Carthage.
  • Documentation: Extensive documentation is available to help with your integration.
  • SwiftUI: Check our SwiftUI SDK if you are developing with SwiftUI.
  • Demo app: This repo includes a fully functional demo app with example usage of the SDK.
  • Example apps: This section of the repo includes fully functional sample apps that you can use as reference.

Free for Makers

Stream is free for most side and hobby projects. You can use Stream Chat for free if you have less than five team members and no more than $10,000 in monthly revenue.

Main Principles

Progressive disclosure: The SDK can be used easily with very minimal knowledge of it. As you become more familiar with it, you can dig deeper and start customizing it on all levels.

Highly customizable: Every element is designed to be easily customizable. You can modify the brand color by setting tintColor, apply appearance changes using custom UI rules, or subclass existing elements and inject them everywhere in the system, no matter how deep the logic hierarchy is.

Open by default: Everything is open unless there's a strong reason for it to not be. This means you can easily modify almost every behavior of the SDK such that it fits your needs.

Good platform citizen: The UI elements behave like good platform citizens. They use existing iOS patterns; their behavior is predictable and matches system UI components; they respect tintColor, layoutMargins, dynamic font sizes, and other system-defined UI constants.


This SDK tries to keep the list of external dependencies to a minimum. Starting with 4.6.0, and in order to improve the developer experience, dependencies are hidden inside our libraries. (This does not apply to StreamChatSwiftUI's dependencies yet.)

Learn more about our dependencies here

Using Objective-C

You can still integrate our SDKs if your project is using Objective-C. In that case, any customizations would need to be done by subclassing our components in Swift, and then use those directly from the Objective-C code.

We are hiring

We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers all over the world. Check out our current openings and apply via Stream's website.

Quick Overview

Channel List

  • A list of channels matching the provided query
  • Channel name and image based on the channel members or custom data
  • Unread messages indicator
  • Preview of the last message
  • Online indicator for avatars
  • Create a new channel and start right away

Message List

  • A list of messages in a channel
  • Photo preview
  • Message reactions
  • Message grouping based on the send time
  • Link preview
  • Inline replies
  • Message threads
  • GIPHY support

Message Composer

  • Support for multiline text; expands and shrinks as needed
  • Image and file attachments
  • Replies to messages
  • Tagging of users
  • Chat commands like mute, ban, giphy

Chat Commands

  • Easily search commands by typing the / symbol or tapping the bolt icon
  • GIPHY support out of the box
  • Supports mute, unmute, ban, unban commands
  • WIP support for custom commands

User Tagging Suggestion

  • User mentions preview
  • Easily search for a concrete user
  • Mention as many users as you want

Download Details:

Author: GetStream
Source Code: https://github.com/GetStream/stream-chat-swift 
License: View license

#swift #chat #ios #sdk #messaging 


WPPConnect: Open Source Project Developed By The JavaScript Community

WPPConnect 📞

WPPConnect is an open source project developed by the JavaScript community with the aim of exporting functions from WhatsApp Web to Node, which can be used to support the creation of any interaction, such as customer service, media sending, phrase-based artificial intelligence recognition, and many other things. Use your imagination... 😀🤔💭


  • Automatic QR refresh
  • Send text, image, video, audio and docs
  • Get contacts, chats, groups, group members, block list
  • Send contacts
  • Send stickers
  • Send GIF stickers
  • Multiple sessions
  • Forward messages
  • Receive messages
  • Insert user section
  • Send location
  • and much more

See more at WhatsApp methods


The first thing you have to do is install the npm package:

npm i --save @wppconnect-team/wppconnect

See more at Getting Started
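As a taste of the API, a minimal bot that echoes incoming messages might look like the sketch below; wppconnect.create, onMessage, and sendText are part of the library's documented surface, but treat the exact options as an assumption and check the Getting Started guide:

import * as wppconnect from '@wppconnect-team/wppconnect';

// Minimal echo bot sketch: create a session, then reply to every text message.
wppconnect
  .create({ session: 'my-session' }) // the session name is arbitrary
  .then((client) => {
    client.onMessage(async (message) => {
      if (message.body) {
        // Echo the received text back to the sender.
        await client.sendText(message.from, `You said: ${message.body}`);
      }
    });
  })
  .catch((error) => console.error('Failed to start session', error));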


Building WPPConnect is really simple; to build the entire project, just run

> npm run build


Maintainers are needed; I cannot keep up with all the updates by myself. If you are interested, please open a pull request.


Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Star History

Star History Chart

Download Details:

Author: wppconnect-team
Source Code: https://github.com/wppconnect-team/wppconnect 
License: View license

#typescript #javascript #nodejs #api #chat #bot #opensource 


Linen.dev: Google-searchable Slack Alternative for Communities


Linen is a Google-searchable community chat tool. Linen was built as an alternative to closed tools like Slack and Discord.

Linen is free and offers unlimited message retention. You can sign up at Linen.community.

Core Features:

  • Search engine friendly: Linen communities have over 50,000 pages indexed on Google with over 10,000,000 search impressions. Most chat apps are not search engine friendly because they are very JS-heavy. We made Linen search engine friendly by offering a sitemap, conditionally rendering a static version of our page to search engines, and using cursor-based pagination so pages stay consistent (see the sketch after this list).
  • Customer support tooling: Most communities often become a customer support channel. All of our threads have an open/closed state. We have a feed where you can browse all open and closed conversations in one place instead of having to worry about which channels and conversations your team has missed.
  • Async first: Chat can be very noisy, especially in large communities. By having a feed of the conversations you are participating in, you don't have to worry about missing messages. We also repurposed @mentions from a notification into an async notification that shows up in your feed, and replaced it with !mention, which will send you a push notification.
  • Import communities: Linen supports importing all of your public Slack/Discord conversations, attachments, emojis, and members.
  • Single account across multiple communities: Linen lets you join multiple communities with a single login, without multiple emails and passwords.
  • Private communities: In addition to public communities, we also support private communities that require a password login to access the content. We use this feature for internal team discussions.
  • Move threads and messages: Linen lets you drag and drop messages and merge them into a single thread, as well as move threads between channels.
  • Discord forum support: Linen will sync Discord forums and make them search engine friendly.
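To illustrate the pagination point above, here is a minimal TypeScript sketch of keyset (cursor-based) pagination; the Message shape and function are hypothetical, not Linen's actual code:

// With cursor-based (keyset) pagination, a page is defined by a fixed cursor
// value rather than an OFFSET, so an indexed page keeps showing the same
// messages even as new ones arrive. Names here are hypothetical.
interface Message {
  id: string;
  sentAt: number; // epoch milliseconds
  body: string;
}

function pageAfter(messages: Message[], cursor: number, pageSize = 50): Message[] {
  return messages
    .filter((m) => m.sentAt > cursor)
    .sort((a, b) => a.sentAt - b.sentAt)
    .slice(0, pageSize);
}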


Roadmap:

  • GitHub integration: Most open source communities use GitHub issues to manage their tickets. We want to let you tag a conversation with a GitHub issue so that a message is posted automatically when the ticket is closed or updated.
  • Improved search: Currently, search is done via full-text search with Postgres. There are a lot more improvements to be made here; we are considering hosting a separate search service.
  • Desktop/mobile client: We want to offer desktop and mobile clients for Linen so you can get push notifications when there are urgent things.
  • Botting: We want to support botting and automation so that you can build and add your own custom bots.
  • Private channels: Channels that are invite-only within the community.
  • Direct messages: Direct messages within the community.

Feed view:



Linen is in its early stages of development, so we are looking for a ton of feedback.

Misc Features:

  1. Markdown message support
  2. Custom community branding
  3. Custom domain hosting for Cloud edition
  4. Attachments support
  5. Emoji support



Linen cloud edition: https://linen.dev

Join our public community: https://linen.dev/s/linen

See project roadmap: https://github.com/orgs/Linen-dev/projects/2

Download Details:

Author: Linen-dev
Source Code: https://github.com/Linen-dev/linen.dev 
License: AGPL-3.0 license

#typescript #slack #chat #discord 
