A link aggregator and forum for the fediverse.
(Screenshots: desktop and mobile views.)
Lemmy is similar to sites like Reddit, Lobste.rs, or Hacker News: you subscribe to forums you're interested in, post links and discussions, and vote and comment on them. Behind the scenes, it is very different: anyone can easily run a server, and all of these servers are federated (think email) and connected to the same universe, called the Fediverse.
For a link aggregator, this means a user registered on one server can subscribe to forums on any other server, and can have discussions with users registered elsewhere.
It is an easily self-hostable, decentralized alternative to Reddit and other link aggregators, outside of their corporate control and meddling.
Each Lemmy server can set its own moderation policy, appointing site-wide admins and community moderators to keep out the trolls and foster a healthy, non-toxic environment where all can feel comfortable contributing.
Features include:
- Full vote scores (+/-), like old Reddit
- User tagging using @, community tagging using !
- RSS/Atom feeds for All, Subscribed, Inbox, User, and Community
- A lightweight front end, ~80kB gzipped

Lemmy is free, open-source software, meaning no advertising, monetizing, or venture capital, ever. Your donations directly support full-time development of the project.
Bitcoin: 1Hefs7miXS5ff5Ck5xvmjKjXf5242KzRtK
Ethereum: 0x400c96c96acbC6E7B3B43B1dc1BB446540a88A01
Monero: 41taVyY6e1xApqKyMVDRVxJ76sPkfZhALLTjRvVKpaAh2pBd4wv9RgYj1tSPrx8wc6iE1uWUfjtQdTmTy2FGMeChGVKPQuV
Cardano: addr1q858t89l2ym6xmrugjs0af9cslfwvnvsh2xxp6x4dcez7pf5tushkp4wl7zxfhm2djp6gq60dk4cmc7seaza5p3slx0sakjutm
If you want to help with translating, take a look at Weblate. You can also help by translating the documentation.
English | Español | Русский | 汉语 | 漢語
Author: LemmyNet
Source Code: https://github.com/LemmyNet/lemmy
License: AGPL-3.0 license
Chat Downloader is a simple tool used to retrieve chat messages from livestreams, videos, clips and past broadcasts. No authentication needed!
This tool is distributed on PyPI and can be installed with `pip`:
$ pip install chat-downloader
To update to the latest version, add the `--upgrade` flag to the above command.
Alternatively, the tool can be installed with `git`:
$ git clone https://github.com/xenova/chat-downloader.git
$ cd chat-downloader
$ python setup.py install
usage: chat_downloader [-h] [--version] [--start_time START_TIME]
[--end_time END_TIME]
[--message_types MESSAGE_TYPES | --message_groups MESSAGE_GROUPS]
[--max_attempts MAX_ATTEMPTS]
[--retry_timeout RETRY_TIMEOUT]
[--interruptible_retry [INTERRUPTIBLE_RETRY]]
[--max_messages MAX_MESSAGES]
[--inactivity_timeout INACTIVITY_TIMEOUT]
[--timeout TIMEOUT] [--format FORMAT]
[--format_file FORMAT_FILE] [--chat_type {live,top}]
[--ignore IGNORE]
[--message_receive_timeout MESSAGE_RECEIVE_TIMEOUT]
[--buffer_size BUFFER_SIZE] [--output OUTPUT]
[--overwrite [OVERWRITE]] [--sort_keys [SORT_KEYS]]
[--indent INDENT] [--pause_on_debug | --exit_on_debug]
[--logging {none,debug,info,warning,error,critical} | --testing | --verbose | --quiet]
[--cookies COOKIES] [--proxy PROXY]
url
For example, to save messages from a livestream to a JSON file, you can use:
$ chat_downloader https://www.youtube.com/watch?v=jfKfPfyJRdk --output chat.json
For a description of these options, as well as advanced command line use-cases and examples, consult the Command Line Usage page.
from chat_downloader import ChatDownloader
url = 'https://www.youtube.com/watch?v=jfKfPfyJRdk'
chat = ChatDownloader().get_chat(url) # create a generator
for message in chat:                  # iterate over messages
    chat.print_formatted(message)     # print the formatted message
For advanced python use-cases and examples, consult the Python Documentation.
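Options such as the time range or a message cap can also be passed straight to `get_chat`. A minimal sketch, assuming the keyword arguments mirror the CLI flags shown in the usage text above (consult the Python Documentation for the authoritative signature):

from chat_downloader import ChatDownloader

# Assumed: keyword arguments mirror the CLI flags above
# (--start_time, --end_time, --max_messages).
chat = ChatDownloader().get_chat(
    'https://www.youtube.com/watch?v=jfKfPfyJRdk',
    start_time='0:00',    # start of the range to retrieve
    end_time='10:00',     # end of the range
    max_messages=500,     # stop after this many messages
)
for message in chat:
    chat.print_formatted(message)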
Chat items/messages are parsed into JSON objects (a.k.a. dictionaries) and should follow a format similar to this:
{
    ...
    "message_id": "xxxxxxxxxx",
    "message": "actual message goes here",
    "message_type": "text_message",
    "timestamp": 1613761152565924,
    "time_in_seconds": 1234.56,
    "time_text": "20:34",
    "author": {
        "id": "UCxxxxxxxxxxxxxxxxxxxxxxx",
        "name": "username_of_sender",
        "images": [
            ...
        ],
        "badges": [
            ...
        ]
    },
    ...
}
For an extensive, documented list of included fields, consult the Chat Item Fields page.
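Since each item behaves like the dictionary above, individual fields can be read with plain key access. A small sketch:

from chat_downloader import ChatDownloader

chat = ChatDownloader().get_chat('https://www.youtube.com/watch?v=jfKfPfyJRdk')
for message in chat:
    # Read fields from the dictionary shown above; .get() avoids
    # KeyErrors for fields a given platform doesn't supply.
    author = message.get('author', {})
    print(f"[{message.get('time_text')}] {author.get('name')}: {message.get('message')}")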
Coming soon
Found a bug or have a suggestion? File an issue here. To assist the developers in fixing the issue, please follow the issue template as closely as possible.
If you would like to help improve the tool, you'll find more information on contributing in our Contributing Guide.
Author: xenova
Source Code: https://github.com/xenova/chat-downloader
License: MIT license
AI search & chat on your Kindle highlights.
Supports .csv exporting of your embedded data.
Code is 100% open source.
Note: I recommend using it on desktop only.
In the Kindle App you can export your highlights as a notebook.
The notebook provides you with a .html file of your highlights.
Import the .html file into the app.
It will parse the highlights and display them.
After parsing is complete, the highlights are ready to be embedded.
Kindle GPT uses OpenAI Embeddings (`text-embedding-ada-002`) to generate embeddings for each highlight.
The embedded text is the chapter/section name + the highlighted text. I found this to be the best way to get the most relevant passages.
You will also receive a downloaded .csv file of your embedded notebook to use wherever you'd like, including re-importing into Kindle GPT for later use.
Now you can query your highlights using the search bar.
The first step is to compute the cosine similarity between your query and all of the highlights.
Then the most relevant results are returned (up to 10, capped at ~2k tokens).
The results are used to create a prompt that feeds into GPT-3.5-turbo.
And finally, you get your answer!
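The app itself is TypeScript; here is a minimal Python sketch of the ranking step described above (function and variable names are illustrative, not the app's actual code):

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product over the product of the vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_highlights(query_embedding, highlights, limit=10):
    # highlights: list of (text, embedding) pairs parsed from the exported .csv
    scored = [(cosine_similarity(query_embedding, emb), text) for text, emb in highlights]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:limit]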
All data is stored locally.
Kindle GPT doesn't use a database.
You can re-import any of your generated .csv files at any time to avoid having to re-embed your notebooks.
You'll need an OpenAI API key to generate embeddings and perform chat completions.
git clone https://github.com/mckaywrigley/kindle-gpt.git
npm i
npm run dev
If you have any questions, feel free to reach out to me on Twitter!
Author: mckaywrigley
Source Code: https://github.com/mckaywrigley/kindle-gpt
License: MIT license
AI-powered search and chat for Paul Graham's essays.
All code & data used is 100% open-source.
The dataset is a CSV file containing all text & embeddings used.
Download it here.
I recommend getting familiar with fetching, cleaning, and storing data as outlined in the scraping and embedding scripts below, but feel free to skip those steps and just use the dataset.
Paul Graham GPT provides 2 things: search and chat.
Search was created with OpenAI Embeddings (`text-embedding-ada-002`).
First, we loop over the essays and generate embeddings for each chunk of text.
Then, in the app, we take the user's search query, generate an embedding, and use the result to find the most similar passages from the essays.
The comparison is done using cosine similarity across our database of vectors.
Our database is a Postgres database with the pgvector extension hosted on Supabase.
Results are ranked by similarity score and returned to the user.
Chat builds on top of search. It uses search results to create a prompt that is fed into GPT-3.5-turbo.
This allows for a chat-like experience where the user can ask questions about the essays and get answers.
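The repo's code is TypeScript; as a rough Python sketch of the prompt-construction step, under an assumed budget of roughly 4 characters per token (names and prompt wording are illustrative):

def build_prompt(question, ranked_passages, char_budget=8000):
    # ~8000 chars approximates a 2k-token context at roughly 4 chars/token.
    context = ''
    for passage in ranked_passages:  # already sorted by similarity score
        if len(context) + len(passage) > char_budget:
            break
        context += passage + '\n\n'
    return (
        'Answer the question using only the following passages from '
        "Paul Graham's essays.\n\n"
        f'{context}Question: {question}\nAnswer:'
    )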
Here's a quick overview of how to run it locally.
You'll need an OpenAI API key to generate embeddings.
Note: You don't have to use Supabase. Use whatever method you prefer to store your data. But I like Supabase and think it's easy to use.
There is a schema.sql file in the root of the repo that you can use to set up the database.
Run that in the SQL editor in Supabase as directed.
I recommend turning on Row Level Security and setting up a service role to use with the app.
git clone https://github.com/mckaywrigley/paul-graham-gpt.git
npm i
Create a .env.local file in the root of the repo with the following variables:
OPENAI_API_KEY=
NEXT_PUBLIC_SUPABASE_URL=
SUPABASE_SERVICE_ROLE_KEY=
npm run scrape
This scrapes all of the essays from Paul Graham's website and saves them to a json file.
npm run embed
This reads the json file, generates embeddings for each chunk of text, and saves the results to your database.
There is a 200ms delay between each request to avoid rate limiting.
This process will take 20-30 minutes.
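The script itself is TypeScript; the pacing logic amounts to something like this Python sketch (names are illustrative):

import time

def embed_all(chunks, embed_fn, delay_seconds=0.2):
    # One embedding request per chunk, pausing 200 ms between requests
    # to stay under the rate limit.
    results = []
    for chunk in chunks:
        results.append((chunk, embed_fn(chunk)))
        time.sleep(delay_seconds)
    return results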
npm run dev
Thanks to Paul Graham for his writing.
I highly recommend you read his essays.
3 years ago they convinced me to learn to code, and it changed my life.
If you have any questions, feel free to reach out to me on Twitter!
I sacrificed composability for simplicity in the app.
Yes, you can make things more modular and reusable.
But I kept pretty much everything in the homepage component for the sake of simplicity.
Author: mckaywrigley
Source Code: https://github.com/mckaywrigley/paul-graham-gpt
License: MIT license
ChatMe - a simple encrypted-messaging Flutter app. Link: https://bit.ly/chatme_env
so... you know arp? the protocol your computer uses to find the mac addresses of other computers on your network? yeah. that.
i thought it would be a great idea to hijack it to make a chat app :)
built in two days because i was sick and had nothing better to do.
(i swear, i might actually briefly have a use for this! it might not be entirely useless! ... and other lies i tell myself)
yes
you can send messages tens of thousands of characters long because i implemented a (naive) generalizable transport protocol on top of arp. there's also a bit of compression.
if you wanted, you could probably split off the networking part of this and use it instead of udp. please don't do this.
not only are join and leave notifications a thing, i built an entire presence discovery and heartbeat system to see an updated list of other online users. ironically, part of this serves a similar purpose to arp itself.
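the real wire format is described in the article linked below; as a purely hypothetical python sketch of the chunking idea (nothing here matches arpchat's actual packet layout):

def chunk_message(data: bytes, chunk_size: int = 28):
    # hypothetical: arp payload fields are tiny, so a long message gets split
    # into sequence-numbered chunks that the receiver reassembles in order.
    total = (len(data) + chunk_size - 1) // chunk_size
    return [(seq, total, data[seq * chunk_size:(seq + 1) * chunk_size])
            for seq in range(total)]

def reassemble(chunks):
    # sort by sequence number, then glue the payloads back together
    return b''.join(payload for _seq, _total, payload in sorted(chunks))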
for more information on how this all works technically, check out the little article i wrote.
if you actually want to install this for some reason, you can get it from the releases page.
on windows, you probably need npcap. make sure you check "Install Npcap in WinPcap API-compatible Mode" in the installer!
on linux, you might have to give arpchat network privileges:
sudo setcap CAP_NET_RAW+ep /path/to/arpchat
then just run the binary in a terminal. you know it's working properly if you can see your own messages when you send them. if you can't see your messages, try selecting a different interface or protocol!
have any issues? that really sucks. you can make an issue if it pleases you.
you don't really want to build this. anyway, it's tested on the latest unstable rust.
on windows, download the WinPcap Developer's Pack and set the `LIB` environment variable to the `WpdPack/Lib/x64/` folder.
cargo build
Author: kognise
Source Code: https://github.com/kognise/arpchat
License: View license
This repository is a chat example with LLaMA (arXiv) models running on a typical home PC. You will just need an NVIDIA video card and some RAM to chat with the model.
https://github.com/facebookresearch/llama/issues/162
Share your best prompts, chats or generations here in this issue: https://github.com/randaller/llama-chat/issues/7
One may run with 32 GB of RAM, but inference will be slow (at the speed of your swap file).
I am running this on a 12700k / 128 GB RAM / NVIDIA 3070 Ti 8 GB / fast huge NVMe and getting one token from the 30B model in a few seconds.
For example, the 30B model uses around 70 GB of RAM, the 7B model fits into 18 GB, and the 13B model uses 48 GB.
If you do not have a powerful video card, you may use another repo for CPU-only inference: https://github.com/randaller/llama-cpu
Download and install Anaconda Python https://www.anaconda.com and run Anaconda Prompt
conda create -n llama python=3.10
conda activate llama
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
In a conda env with pytorch / cuda available, run
pip install -r requirements.txt
Then in this repository
pip install -e .
magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA
or
magnet:?xt=urn:btih:b8287ebfa04f879b048d4d4404108cf3e8014352&dn=LLaMA&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
First, you need to unshard the model checkpoints into a single file. Let's do this for the 30B model.
python merge-weights.py --input_dir D:\Downloads\LLaMA --model_size 30B
In this example, D:\Downloads\LLaMA is the root folder of the downloaded torrent with weights.
This will create a merged.pth file in the root folder of this repo.
Place this file and the model's corresponding (torrentroot)/30B/params.json into the [/model] folder.
So you should end up with two files in the [/model] folder: merged.pth and params.json.
Place the (torrentroot)/tokenizer.model file into the [/tokenizer] folder of this repo. Now you are ready to go.
python example-chat.py ./model ./tokenizer/tokenizer.model
Temperature is one of the key generation parameters, and you may wish to play with it. The higher the temperature, the more "creativity" the model uses; the lower the temperature, the less creative the model is, but the more closely it follows your prompt.
Repetition penalty is a feature implemented by Shawn Presser. With this, the model is penalized when it starts to enter a repetition loop. Set this parameter to 1.0 if you wish to disable the feature.
Samplers
By default, Meta provided us with a top_p sampler only. Again, Shawn added an alternate top_k sampler, which (in my tests) performs pretty well. If you wish to switch to the top_k sampler, use the following parameters:
temperature: float = 0.7,
top_p: float = 0.0,
top_k: int = 40,
sampler: str = 'top_k',
For sure, you may play with all the values to get different outputs.
Launch examples
One may modify these hyperparameters straight in the code, but it is better to leave the defaults in code and set experiment parameters on the launch line.
# Run with top_p sampler, with temperature 0.75, with top_p value 0.95, repetition penalty disabled
python example-chat.py ./model ./tokenizer/tokenizer.model 0.75 0.95 0 1.0 top_p
# Run with top_k sampler, with temperature 0.7, with top_k value 40, default repetition penalty value
python example-chat.py ./model ./tokenizer/tokenizer.model 0.7 0.0 40 1.17 top_k
Of course, this also applies to [python example.py] (see below).
If you wish to stop generation not at the "\n" sign but at another signature, like "User:" (which is also a good idea), make the following modification in llama/generation.py:
-5 means removing the last 5 characters from the resulting context; that is the length of your stop signature ("User:" in this example).
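The snippet itself is not reproduced above; the idea amounts to something like this sketch (names are illustrative, not the repo's actual code):

def check_stop_signature(context: str, stop: str = 'User:'):
    # When the decoded context ends with the stop signature, trim the last
    # len(stop) characters (-5 for 'User:') and signal generation to stop.
    if context.endswith(stop):
        return context[:-len(stop)], True
    return context, False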
Share your best prompts and generations with others here: https://github.com/randaller/llama-chat/issues/7
Simply comment out three lines in llama/generation.py to turn it back into a generator.
python example.py ./model ./tokenizer/tokenizer.model
Confirming that 30B model is able to generate code and fix errors in code: https://github.com/randaller/llama-chat/issues/7
Confirming that 30B model is able to generate prompts for Stable Diffusion: https://github.com/randaller/llama-chat/issues/7#issuecomment-1463691554
Confirming that 7B and 30B model support Arduino IDE: https://github.com/randaller/llama-chat/issues/7#issuecomment-1464179944
This repo is heavily based on Meta's original repo: https://github.com/facebookresearch/llama
And on Steve Manuatu's repo: https://github.com/venuatu/llama
And on Shawn Presser's repo: https://github.com/shawwn/llama
Author: Randaller
Source Code: https://github.com/randaller/llama-chat
License: GPL-3.0 license
Desktop Application for delta.chat
The application can be downloaded from https://get.delta.chat. Here you'll find binary releases for all supported platforms. See below for platform specific instructions. If you run into any problems please consult the Troubleshooting section below.
The primary distribution-independent way to install is to use the flatpak build. This is maintained in its own repository; however, a pre-built binary can be downloaded and installed from Flathub, which also has a setup guide for many Linux platforms.
WARNING: Currently the AUR package compiles from the latest master. This can be more recent than the latest release and introduce new features, but also new bugs.
If you have an AUR helper like yay installed, you can install it by running yay -S deltachat-desktop-git
and following the instructions in your terminal.
Otherwise you can still do it manually:
# Download the latest snapshot of the PKGBUILD
wget https://aur.archlinux.org/cgit/aur.git/snapshot/deltachat-desktop-git.tar.gz
# extract the archive and rm the archive file afterwards
tar xzfv deltachat-desktop-git.tar.gz && rm deltachat-desktop-git.tar.gz
# cd into extracted folder
cd deltachat-desktop-git
# build package
makepkg -si
# install package (you need to replace <version> with whatever version makepkg built)
sudo pacman -U deltachat-desktop-git-<version>.tar.xz
$ brew install --cask deltachat
Simply install the `.dmg` file as you do with all other software on macOS.
If you are getting an OpenSSL error message at first startup, you need to install OpenSSL.
$ brew install openssl
⚠ This is mostly for development purposes and won't install/integrate deltachat into your system. So unless you know what you are doing, we recommend sticking to the methods above if possible.
# Get the code
$ git clone https://github.com/deltachat/deltachat-desktop.git
$ cd deltachat-desktop
# Install dependencies
$ npm install
# Build the app (only needed the first time or if the code was changed)
$ npm run build
# Start the application:
$ npm start
For development with a local deltachat-core, read the docs.
This module builds on top of `deltachat-core-rust`, which in turn has external dependencies. The instructions below assume a Linux system (e.g. Ubuntu 18.10).
If you get errors when running `npm install`, they might be related to the build dependency `rust`.
If `rust` or `cargo` is missing, follow the instructions on https://rustup.rs/ to install them, then try running `npm install` again.
Make sure that your nodejs version is `16.0.0` or newer.
If you still get errors, look at the instructions in the deltachat-node and deltachat-core-rust README files to set things up, or write an issue.
The configuration files and database are stored at application-config's default file paths.
Each database is a SQLite file that represents the account for a given email address.
Read docs/DEVELOPMENT.md
For translations see our transifex page: https://www.transifex.com/delta-chat/public/
For other ways to contribute: https://delta.chat/en/contribute
You can access the log folder and the current log file under the `View -> Developer` menu:
Read docs/LOGGING.md for an explanation about our logging system. (available options, log location and information about the used Log-Format)
Author: Deltachat
Source Code: https://github.com/deltachat/deltachat-desktop
License: GPL-3.0 license
This repository contains server & client side code using the TypeScript language.
Running Server and Client locally
First, ensure you have the following installed:
After that, use Git Bash to run all commands if you are on the Windows platform.
In order to start the project, use:
$ git clone https://github.com/luixaviles/socket-io-typescript-chat.git
$ cd socket-io-typescript-chat
To run the server locally, just install dependencies and run the `gulp` task to create a build:
$ cd server
$ npm install -g gulp-cli
$ npm install
$ gulp build
$ npm start
The socket.io server will be running on port 8080.
When you run `npm start`, this folder leverages nodemon, which automatically reloads the server after you make a change and save your TypeScript file. Along with nodemon, there is also a `gulp watch` task that you can run to reload the files, but it's not necessary and is provided merely as a teaching alternative.
Open another command line window and run the following commands:
$ cd client
$ npm install
$ ng serve
Now open your browser at the following URL: http://localhost:4200
Server Deployment
Take a look at the Wiki Page for more details about deploying on Heroku and Zeit.co.
Feel free to update that page and Readme if you add any other platform for deployment!
Forks
The Open Source community is awesome! If you're working in a fork with other tech stack, please add the reference of your project here:
Features | Author | Status |
---|---|---|
React + TypeScript + Material-UI client | nilshartmann | In Progress |
Read the blog post with details about this project: Real Time Apps with TypeScript: Integrating Web Sockets, Node & Angular
Try live demo: https://typescript-chat.firebaseapp.com
Support this project
Author: luixaviles
Source Code: https://github.com/luixaviles/socket-io-typescript-chat
License: MIT license
Flutter Realtime Chat App Plugin 📱💬
A Flutter plugin for building a realtime chat application. This plugin provides an easy-to-use API for developers to implement a chat feature into their Flutter app.
`flutter pub add chat_app_plugin`
inituserdatawithphoto(String uid, String name, String email, String photo)
Future inituserdatawithoutphoto(String uid, String name, String email)
Future getuserdata(String email)
finduser(String email)
getusergroups()
Future addgroup(String uid, String name, String groupname, String groupicon)
Future addgroupwithouticon(String uid, String name, String groupname)
Future addreport(String uid, String uidofusertoreport, String messagetoreport,String date)
getgroupchats(String groupid)
getchatchats(String chatid)
getgroupmembers(String groupid)
Future<bool> isjoined(String uid, String groupid, String groupname)
Future leavegroup(String uid, String groupid, String groupname)
addgroupchat(String groupid, Map<String, dynamic> chatmessage)
addchat(String uid1, String firstusername, String uid2, String secondusername,Map<String, dynamic> chatmessage)
startnewchat(String uid, String uid2, Map<String, dynamic> chatmessage)
addnewchatmessage(String chatid, Map<String, dynamic> chat)
Future withphotoregisterwithemailpassword( String email, String password, String name, String photo)
Future<String?> tokenwithphotoregisterwithemailpassword( String email, String password, String name, String photo)
Future withoutphotoregisterwithemailpassword( String email, String password, String name)
Future<String?> tokenwithoutphotoregisterwithemailpassword( String email, String password, String name)
Future customregister( String email, String profilephoto, String name, String uid)
Future loginwithemailandpassword(String email, String password)
Future<String?> tokenloginwithemailpassword(String email, password)
Future<String?> tokenloginwithphonenumber(String phonenumber)
Future loginwithphonenumber(String phonenumber)
Future signout()
Future sendforgotpassword(String email)
Future addgroup(String uid, String adminname, String groupname, String groupicon)
Future addgroupwithouticon( String uid, String adminname, String groupname)
Run this command:
With Flutter:
$ flutter pub add chat_app_plugin
This will add a line like this to your package's pubspec.yaml (and run an implicit `flutter pub get`):
dependencies:
chat_app_plugin: ^0.0.2
Alternatively, your editor might support `flutter pub get`. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:chat_app_plugin/chat_app_plugin.dart';
import 'package:chat_app_plugin_example/Screens/Register.dart';
import 'package:chat_app_plugin_example/User.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/material.dart';
import 'apis.dart';
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(
      name: "Chatapp",
      options: const FirebaseOptions(
          apiKey: apiss.akey,
          appId: apiss.appId,
          messagingSenderId: apiss.messagesender,
          projectId: apiss.projectid));
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  // Users users = Users(dp: "", email: "", password: "", uid: "");
  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: Register(),
    );
  }
}
Download Details:
Author: author-sanjay
Source Code: https://github.com/author-sanjay/Realtime-chat-plugin
flutter_chat_widgets
This library contains the necessary widgets for a chat application UI. All the components are customizable.
There are no special requirements to use this widget. If you have Flutter installed, you can start using it directly.
Add This Library:
Add this line to your dependencies:
flutter_chat_widget: ^0.0.3
Then you just have to import the package with
import 'package:flutter_chat_widget/recieved_message_widget.dart';
import 'package:flutter_chat_widget/sent_message_widget.dart';
import 'package:flutter_chat_widget/message_bar_widget.dart';
Create Chat Bubbles
//for sent messages
SentMessage(
  message: item.text,
  background: Colors.blueAccent,
  textColor: Colors.white,
)

//for received messages
ReceivedMessage(
  message: item.text,
  background: Colors.black12,
  textColor: Colors.black,
)
Create Message Bar
MessageBar(onCLicked: (text) {
  // send data to server
})
This is an initial release of the package. If you find any issue, please let me know and I will fix it accordingly.
Run this command:
With Flutter:
$ flutter pub add flutter_chat_widget
This will add a line like this to your package's pubspec.yaml (and run an implicit `flutter pub get`):
dependencies:
flutter_chat_widget: ^0.0.3
Alternatively, your editor might support `flutter pub get`. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_chat_widget/flutter_chat_widget.dart';
Download Details:
Author: evanemran
Source Code: https://github.com/evanemran/flutter_chat_widget
I hope that this project can be a chat tool for GitHub, so I will try to integrate it with GitHub. At present, it just supports logging in with GitHub authorization and viewing GitHub users' public information in ghChat. You can create a group in ghChat for your GitHub project and post the group link in the readme to make it convenient for users to communicate.
If you have any ideas about integration, you are welcome to create issues with feature suggestions or bug feedback, or to send pull requests.
Front-end: React + Redux + React-router + axios + scss; back-end: node (koa2) + mysql + JWT (JSON Web Token); socket.io is used to send messages between users. For other technologies, please check the package.json file.
You will need a `secrets.ts` file (see below) to configure the app; the defaults won't work.
npm run start
cd ..
npm run start
Premise: please create a secrets.ts file for configuration inside the ghChat/server/ folder:
export default {
  port: '3000', // server port
  dbConnection: {
    host: '', // database host (IP)
    port: 3306, // database port
    database: 'ghchat', // database name
    user: '', // database user
    password: '', // database password
  },
  client_secret: '', // client_secret of github authorization: github -> settings -> Developer settings to get
  jwt_secret: '', // secret of json web token
  qiniu: { // qiniu cdn configuration
    accessKey: '',
    secretKey: '',
    bucket: ''
  },
  robot_key: '', // the key of the robot chat api => if you want to use robot chat, please apply for this key from http://www.tuling123.com/
};
1. Build the front-end code:
cd src
npm run build:prod
2. Build the server code:
cd server
npm run build:prod
npm run start:prod
You are welcome to contact me via this link.
English | 简体中文
Author: aermin
Source Code: https://github.com/aermin/ghChat
License: MIT license
iOS Chat SDK in Swift - Build your own app chat experience for iOS using the official Stream Chat API
This is the official iOS SDK for Stream Chat, a service for building chat and messaging applications. This library includes both a low-level SDK and a set of reusable UI components.
The StreamChat SDK is a low level client for Stream chat service that doesn't contain any UI components. It is meant to be used when you want to build a fully custom UI. For the majority of use cases though, we recommend using our highly customizable UI SDK's.
The StreamChatUI SDK is our UI SDK for UIKit components. If your application needs to support iOS 13 and below, this is the right UI SDK for you.
The StreamChatSwiftUI SDK is our UI SDK for SwiftUI components. If your application only needs to support iOS 14 and above, this is the right UI SDK for you. This SDK is available in another repository stream-chat-swiftui.
Since the 4.20.0 release, our SDKs can be built using Xcode 14. Currently, there are no known issues on iOS 16. If you spot one, please create a ticket.
- Familiar behavior: the UI elements respect `tintColor`, `layoutMargins`, light/dark mode, dynamic font sizes, etc.
- `UIKit` patterns and paradigms: The API follows the design of native system SDKs. It makes integration with your existing code easy and familiar.
- `SwiftUI` support: We have developed a brand new SDK to help you have smoother Stream Chat integration in your SwiftUI apps.
- `Combine`: The StreamChat SDK (Low Level Client) has Combine wrappers to make it really easy to use in an app that uses Combine.

Stream is free for most side and hobby projects. You can use Stream Chat for free if you have less than five team members and no more than $10,000 in monthly revenue.
Progressive disclosure: The SDK can be used easily with very minimal knowledge of it. As you become more familiar with it, you can dig deeper and start customizing it on all levels.
Highly customizable: Every element is designed to be easily customizable. You can modify the brand color by setting `tintColor`, apply appearance changes using custom UI rules, or subclass existing elements and inject them everywhere in the system, no matter how deep the logic hierarchy is.
`open` by default: Everything is `open` unless there's a strong reason for it not to be. This means you can easily modify almost every behavior of the SDK so that it fits your needs.
Good platform citizen: The UI elements behave like good platform citizens. They use existing iOS patterns; their behavior is predictable and matches system UI components; they respect `tintColor`, `layoutMargins`, dynamic font sizes, and other system-defined UI constants.
This SDK tries to keep the list of external dependencies to a minimum. Starting with 4.6.0, and in order to improve the developer experience, dependencies are hidden inside our libraries. (This does not apply to StreamChatSwiftUI's dependencies yet.)
Learn more about our dependencies here
You can still integrate our SDKs if your project is using Objective-C. In that case, any customizations would need to be done by subclassing our components in Swift and then using those directly from the Objective-C code.
We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers all over the world. Check out our current openings and apply via Stream's website.
Channel list features:
- A list of channels matching the provided query
- Channel name and image based on the channel members or custom data
- Unread messages indicator
- Preview of the last message
- Online indicator for avatars
- Create a new channel and start right away

Message list features:
- A list of messages in a channel
- Photo preview
- Message reactions
- Message grouping based on the send time
- Link preview
- Inline replies
- Message threads
- GIPHY support

Message composer features:
- Support for multiline text, expands and shrinks as needed
- Image and file attachments
- Replies to messages
- Tagging of users
- Chat commands like mute, ban, giphy

Command features:
- Easily search commands by writing the / symbol or tapping the bolt icon
- GIPHY support out of the box
- Supports mute, unmute, ban, unban commands
- WIP support of custom commands

Mention features:
- User mentions preview
- Easily search for a concrete user
- Mention as many users as you want
Author: GetStream
Source Code: https://github.com/GetStream/stream-chat-swift
License: View license
WPPConnect is an open source project developed by the JavaScript community with the aim of exporting functions from WhatsApp Web to Node, which can be used to support the creation of any interaction, such as customer service, media sending, AI-based phrase recognition, and many other things. Use your imagination... 😀🤔💭
Automatic QR Refresh | ✔ |
Send text, image, video, audio and docs | ✔ |
Get contacts, chats, groups, group members, Block List | ✔ |
Send contacts | ✔ |
Send stickers | ✔ |
Send stickers GIF | ✔ |
Multiple Sessions | ✔ |
Forward Messages | ✔ |
Receive message | ✔ |
insert user section | ✔ |
Send location | ✔ |
and much more | ✔ |
See more at WhatsApp methods
The first thing you need to do is install the npm package:
npm i --save @wppconnect-team/wppconnect
See more at Getting Started
Building WPPConnect is really simple; to build the entire project, just run
> npm run build
Maintainers are needed; I cannot keep up with all the updates by myself. If you are interested, please open a pull request.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Author: wppconnect-team
Source Code: https://github.com/wppconnect-team/wppconnect
License: View license
Linen is a Google-searchable community chat tool. Linen was built as an alternative to closed tools like Slack and Discord.
Linen is free and offers unlimited message retention. You can sign up at Linen.community.
Linen is in its early stages of development, so we are looking for a ton of feedback.
Our documentation is divided into several sections:
Linen cloud edition: https://linen.dev
Join our public community: https://linen.dev/s/linen
Author: Linen-dev
Source Code: https://github.com/Linen-dev/linen.dev
License: AGPL-3.0 license