Path Tracing SDK is a code sample that strives to embody years of ray tracing and neural graphics research and experience. It is intended as a starting point for a path tracer integration, as a reference for various integrated SDKs, and/or for learning and experimentation.
The base path tracing implementation derives from NVIDIA’s Falcor Research Path Tracer, ported to the approachable C++/HLSL Donut framework.
Folder | Description |
---|---|
/bin | default folder for binaries and compiled shaders |
/build | default folder for build files |
/donut | code for a custom version of the Donut framework |
/donut/nvrhi | code for the NVRHI rendering API layer (a git submodule) |
/external | external libraries and SDKs, including NRD, RTXDI, and OMM |
/media | models, textures, scene files |
/tools | optional command line tools (denoiser, texture compressor, etc.) |
/pt_sdk | Path Tracing SDK core; Sample.cpp/.h/.hlsl contain entry points |
/pt_sdk/PathTracer | Core path tracing shaders |
At the moment, only Windows builds are supported. We are going to add Linux support in the future.
Clone the repository with all submodules recursively:
git clone --recursive https://github.com/NVIDIAGameWorks/Path-Tracing-SDK.git
Pull the media files from Packman:
cd Path-Tracing-SDK
update_dependencies.bat
Create a build folder.
mkdir build
cd build
Any folder name works, but git is configured to ignore folders named build\*
Use CMake to configure the build and generate the project files.
Use of the CMake GUI is recommended, but `cmake ..` works too. Make sure to select the x64 platform for the generator.
Build the solution generated by CMake in the build folder.
Open the generated solution (i.e. `build/PathTracingSDK.sln`) with Visual Studio and build it.
Select and run the `pt_sdk` project. Binaries get built to the `bin` folder; media is loaded from the `media` folder.
If making a binary build, the `media` and `tools` folders can be placed into `bin` and packed up together (i.e. the sample app will search for both `media\` and `..\media\`).
Once the application is running, most of the SDK features can be accessed via the UI window on the left hand side and drop-down controls in the top-center.
The camera can be moved using the W/S/A/D keys and rotated by dragging with the left mouse button.
- `-debug` to enable the graphics API debug layer or runtime, and the NVRHI validation layer.
- `-fullscreen` to start in full screen mode.
- `-no-vsync` to start without VSync (can be toggled in the GUI).
- `-print-graph` to print the scene graph into the output log on startup.
- `-width` and `-height` to set the window size.
- `<FileName>` to load any supported model or scene from the given file.

We are working on more detailed SDK developer documentation - watch this space!
Path Tracing SDK is under active development. Please report any issues directly through GitHub issue tracker, and for any information, suggestions or general requests please feel free to contact us at pathtracing-sdk-support@nvidia.com!
Author: NVIDIAGameWorks
Source Code: https://github.com/NVIDIAGameWorks/Path-Tracing-SDK
License: View license
ℹ️ NOTE: This project is in early alpha and, just like AI, will evolve quickly. We invite you to join us in developing the Semantic Kernel together! Please contribute by using GitHub Discussions, opening GitHub Issues, and sending us PRs.
Semantic Kernel (SK) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. The SK extensible programming model combines natural language semantic functions, traditional code native functions, and embeddings-based memory, unlocking new potential and adding value to applications with AI.
SK supports prompt templating, function chaining, vectorized memory, and intelligent planning capabilities out of the box.
Semantic Kernel is designed to support and encapsulate several design patterns from the latest in AI research, such that developers can infuse their applications with complex skills like prompt chaining, recursive reasoning, summarization, zero/few-shot learning, contextual memory, long-term memory, embeddings, semantic indexing, planning, and accessing external knowledge stores as well as your own data.
By joining the SK community, you can build AI-first apps faster and have a front-row peek at how the SDK is being built. SK has been released as open-source so that more pioneering developers can join us in crafting the future of this landmark moment in the history of computing.
If you would like a quick overview about how Semantic Kernel can integrate with your app, start by cloning the repository:
git clone https://github.com/microsoft/semantic-kernel.git
and try these examples:
Simple chat summary | Use ready-to-use skills and get those skills into your app easily. |
Book creator | Use planner to deconstruct a complex goal and envision using the planner in your app. |
Authentication and APIs | Use a basic connector pattern to authenticate and connect to an API and imagine integrating external data into your app's LLM AI. |
For a more hands-on overview, you can also run the Getting Started notebook, looking into the syntax, creating Semantic Functions, working with Memory, and seeing how the kernel works.
Here is a quick example of how to use Semantic Kernel from a C# console app.
Create a new project targeting .NET 6 or newer, and add the `Microsoft.SemanticKernel` NuGet package:
dotnet add package Microsoft.SemanticKernel --prerelease
See nuget.org for the latest version and more instructions.
Copy and paste the following code into your project, with your Azure OpenAI key in hand (you can create one here).
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.KernelExtensions;
var kernel = Kernel.Builder.Build();
// For Azure Open AI service endpoint and keys please see
// https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=rest-api
kernel.Config.AddAzureOpenAICompletionBackend(
"davinci-backend", // Alias used by the kernel
"text-davinci-003", // Azure OpenAI *Deployment ID*
"https://contoso.openai.azure.com/", // Azure OpenAI *Endpoint*
"...your Azure OpenAI Key..." // Azure OpenAI *Key*
);
string skPrompt = @"
{{$input}}
Give me the TLDR in 5 words.
";
string textToSummarize = @"
1) A robot may not injure a human being or, through inaction,
allow a human being to come to harm.
2) A robot must obey orders given it by human beings except where
such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
";
var tldrFunction = kernel.CreateSemanticFunction(skPrompt);
var summary = await kernel.RunAsync(textToSummarize, tldrFunction);
Console.WriteLine(summary);
// Output => Protect humans, follow orders, survive.
We welcome your contributions and suggestions to the SK community! One of the easiest ways to participate is to engage in discussions in the GitHub repository. Bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss with us before sending a PR. This is to avoid rejection as we might be taking the core in a different direction, but also to consider the impact on the larger ecosystem.
To learn more and get started:
Python developers: Semantic Kernel is coming to Python soon! Check out the work-in-progress and contribute in the python-preview branch.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Author: Microsoft
Source Code: https://github.com/microsoft/semantic-kernel
License: MIT license
Intel® RealSense™ SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras (D400 & L500 series and the SR300) and the T265 tracking camera.
📌 For other Intel® RealSense™ devices (F200, R200, LR200 and ZR300), please refer to the latest legacy release.
The SDK allows depth and color streaming, and provides intrinsic and extrinsic calibration information. The library also offers synthetic streams (pointcloud, depth aligned to color and vice versa), and built-in support for record and playback of streaming sessions.
Developer kits containing the necessary hardware to use this library are available for purchase at store.intelrealsense.com. Information about Intel® RealSense™ technology is available at www.intelrealsense.com.
📂 Don't have access to a RealSense camera? Check out sample data
Intel has EOLed the LiDAR, Facial Authentication, and Tracking product lines. These products have been discontinued and will no longer be available for new orders.
Intel WILL continue to sell and support stereo products including the following: D410, D415, D430, D401, and D450 modules and D415, D435, D435i, D435f, D405, D455, and D457 depth cameras. We will also continue the work to support and develop our LibRealSense open source SDK.
In the future, Intel and the RealSense team will focus our new development on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy.
You can download and install librealsense using the vcpkg dependency manager:
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install realsense2
The librealsense port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
Download - The latest releases including the Intel RealSense SDK, Viewer and Depth Quality tools are available at: latest releases. Please check the release notes for the supported platforms, new features and capabilities, known issues, how to upgrade the Firmware and more.
Install - You can also install or build the SDK from source (on Linux / Windows / macOS / Android / Docker), connect your D400 depth camera, and you are ready to start writing your first application.
Support & Issues: If you need product support (e.g. ask a question about / are having problems with the device), please check the FAQ & Troubleshooting section. If not covered there, please search our Closed GitHub Issues page, Community and Support sites. If you still cannot find an answer to your question, please open a new issue.
What | Description | Download link |
---|---|---|
Intel® RealSense™ Viewer | With this application, you can quickly access your Intel® RealSense™ Depth Camera to view the depth stream, visualize point clouds, record and playback streams, configure your camera settings, modify advanced controls, enable depth visualization and post processing and much more. | Intel.RealSense.Viewer.exe |
Depth Quality Tool | This application allows you to test the camera’s depth quality, including: standard deviation from plane fit, normalized RMS – the subpixel accuracy, distance accuracy and fill rate. You should be able to easily get and interpret several of the depth quality metrics and record and save the data for offline analysis. | Depth.Quality.Tool.exe |
Debug Tools | Device enumeration, FW logger, etc., as can be seen in the tools directory | Included in Intel.RealSense.SDK.exe |
Code Samples | These simple examples demonstrate how to easily use the SDK to include code snippets that access the camera into your applications. Check some of the C++ examples including capture, pointcloud and more and basic C examples | Included in Intel.RealSense.SDK.exe |
Wrappers | Python, C#/.NET API, as well as integration with the following 3rd-party technologies: ROS1, ROS2, LabVIEW, OpenCV, PCL, Unity, Matlab, OpenNI, UnrealEngine4 and more to come. |
Our library offers a high level API for using Intel RealSense depth cameras (in addition to lower level ones). The following snippet shows how to start streaming frames and extracting the depth value of a pixel:
#include <librealsense2/rs.hpp> // Include the RealSense cross-platform API
#include <iostream>

int main()
{
    // Create a Pipeline - this serves as a top-level API for streaming and processing frames
    rs2::pipeline p;

    // Configure and start the pipeline
    p.start();

    while (true)
    {
        // Block program until frames arrive
        rs2::frameset frames = p.wait_for_frames();

        // Try to get a frame of a depth image
        rs2::depth_frame depth = frames.get_depth_frame();

        // Get the depth frame's dimensions
        float width = depth.get_width();
        float height = depth.get_height();

        // Query the distance from the camera to the object in the center of the image
        float dist_to_center = depth.get_distance(width / 2, height / 2);

        // Print the distance
        std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
    }
}
For more information on the library, please follow our examples, and read the documentation to learn more.
In order to contribute to Intel RealSense SDK, please follow our contribution guidelines.
Author: IntelRealSense
Source Code: https://github.com/IntelRealSense/librealsense
License: Apache-2.0 license
The LINE Messaging API SDK for Node.js makes it easy to develop bots using the LINE Messaging API, and you can create a sample bot within minutes.
Using npm:
$ npm install @line/bot-sdk --save
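For a flavor of the API, here is a minimal echo-bot sketch in TypeScript. This is a hedged illustration, not the official quickstart: the Express server, webhook path, port, and environment variable names are assumptions; see the getting-started guide linked below for the authoritative setup.

import express from "express";
import { Client, middleware, WebhookEvent } from "@line/bot-sdk";

// Channel credentials come from the LINE Developers console (variable names here are placeholders).
const config = {
  channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN!,
  channelSecret: process.env.CHANNEL_SECRET!,
};

const client = new Client(config);
const app = express();

// The middleware validates request signatures and parses webhook events.
app.post("/webhook", middleware(config), async (req, res) => {
  const events: WebhookEvent[] = req.body.events;
  // Echo any text message back to the sender.
  await Promise.all(
    events.map((event) => {
      if (event.type === "message" && event.message.type === "text") {
        return client.replyMessage(event.replyToken, {
          type: "text",
          text: event.message.text,
        });
      }
      return Promise.resolve();
    })
  );
  res.sendStatus(200);
});

app.listen(3000);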
See the official API documentation for more information
line-bot-sdk-nodejs documentation: https://line.github.io/line-bot-sdk-nodejs/#getting-started
FAQ: https://developers.line.biz/en/faq/
Community Q&A: https://www.line-community.me/questions
News: https://developers.line.biz/en/news/
Twitter: @LINE_DEV
This project respects semantic versioning.
Please check CONTRIBUTING before making a contribution.
Author: line
Source Code: https://github.com/line/line-bot-sdk-nodejs
License: Apache-2.0 license
The official Dropbox SDK for JavaScript.
Create an app via the Developer Console
Install via npm
$ npm install --save dropbox
Install from source:
$ git clone https://github.com/dropbox/dropbox-sdk-js.git
$ cd dropbox-sdk-js
$ npm install
After installation, follow one of our Examples or read the Documentation.
You can also view our OAuth guide.
We provide Examples to help get you started with a lot of the basic functionality in the SDK. We provide most examples in both JavaScript and TypeScript, with some having a Node equivalent.
OAuth
Other Examples
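As a minimal usage sketch (the access token is a placeholder you would generate for your app in the Developer Console), listing the contents of a folder looks roughly like this:

import { Dropbox } from "dropbox";

// Replace with a token generated for your app in the Developer Console.
const dbx = new Dropbox({ accessToken: "YOUR_ACCESS_TOKEN" });

// List the entries at the root of the app folder.
dbx.filesListFolder({ path: "" })
  .then((response) => {
    for (const entry of response.result.entries) {
      console.log(entry.name);
    }
  })
  .catch((error) => console.error(error));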
If you find a bug, please see CONTRIBUTING.md for information on how to report it.
If you need help that is not specific to this SDK, please reach out to Dropbox Support.
Documentation can be found on GitHub Pages
Author: Dropbox
Source Code: https://github.com/dropbox/dropbox-sdk-js
License: MIT license
Install-Package Betalgo.OpenAI.GPT3
.NET SDK for OpenAI ChatGPT, GPT-3, and DALL·E
Unofficial.
GPT-3 doesn't have any official .NET SDK.
I know we are all excited about the new ChatGPT APIs, so I rushed this version out; it's nearly 4 AM here.
Be aware: it might have some bugs, and the next version may have breaking changes, because I don't like the current naming but don't have time to rethink it at the moment. Whisper is coming soon too.
Enjoy your new Methods! Don't forget to star the repo if you like it.
For changelogs, please go to the end of the document.
Visit https://openai.com/ to get your API key. More detailed documentation is also available there.
The repository contains a sample project named OpenAI.Playground that you can refer to for a better understanding of how the library works. However, please exercise caution while experimenting with it, as some of the test methods may result in unintended consequences such as file deletion or fine-tuning.
Your API Key comes from here --> https://platform.openai.com/account/api-keys
Your Organization ID comes from here --> https://platform.openai.com/account/org-settings
var openAiService = new OpenAIService(new OpenAiOptions()
{
ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY")
});
"OpenAIServiceOptions": {
//"ApiKey":"Your api key goes here"
//,"Organization": "Your Organization Id goes here (optional)"
},
(How to use user secrets: right-click your project name in Solution Explorer, then click "Manage User Secrets". It is a good way to keep your API keys safe.)
serviceCollection.AddOpenAIService();
OR
Use it like below, but do NOT put your API key directly in your source code.
serviceCollection.AddOpenAIService(settings => { settings.ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY"); });
After injecting your service, you will be able to get it from the service provider:
var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();
You can set a default model (optional):
openAiService.SetDefaultModelId(Models.Davinci);
var completionResult = await openAiService.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
{
Messages = new List<ChatMessage>
{
ChatMessage.FromSystem("You are a helpful assistant."),
ChatMessage.FromUser("Who won the world series in 2020?"),
ChatMessage.FromAssistance("The Los Angeles Dodgers won the World Series in 2020."),
ChatMessage.FromUser("Where was it played?")
},
Model = Models.ChatGpt3_5Turbo,
MaxTokens = 50 // optional
});
if (completionResult.Successful)
{
Console.WriteLine(completionResult.Choices.First().Message.Content);
}
var completionResult = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest()
{
Prompt = "Once upon a time",
Model = Models.TextDavinciV3
});
if (completionResult.Successful)
{
Console.WriteLine(completionResult.Choices.FirstOrDefault());
}
else
{
if (completionResult.Error == null)
{
throw new Exception("Unknown Error");
}
Console.WriteLine($"{completionResult.Error.Code}: {completionResult.Error.Message}");
}
var completionResult = openAiService.Completions.CreateCompletionAsStream(new CompletionCreateRequest()
{
Prompt = "Once upon a time",
MaxTokens = 50
}, Models.Davinci);
await foreach (var completion in completionResult)
{
if (completion.Successful)
{
Console.Write(completion.Choices.FirstOrDefault()?.Text);
}
else
{
if (completion.Error == null)
{
throw new Exception("Unknown Error");
}
Console.WriteLine($"{completion.Error.Code}: {completion.Error.Message}");
}
}
Console.WriteLine("Complete");
var imageResult = await openAiService.Image.CreateImage(new ImageCreateRequest
{
Prompt = "Laser cat eyes",
N = 2,
Size = StaticValues.ImageStatics.Size.Size256,
ResponseFormat = StaticValues.ImageStatics.ResponseFormat.Url,
User = "TestUser"
});
if (imageResult.Successful)
{
Console.WriteLine(string.Join("\n", imageResult.Results.Select(r => r.Url)));
}
Please note that due to time constraints, I was unable to thoroughly test all of the methods or fully document the library. If you encounter any issues, please do not hesitate to report them or submit a pull request - your contributions are always appreciated.
I initially developed this SDK for my personal use and later decided to share it with the community. As I have not maintained any open-source projects before, any assistance or feedback would be greatly appreciated. If you would like to contribute in any way, please feel free to reach out to me with your suggestions.
I will always be using the latest libraries, and future releases will frequently include breaking changes. Please take this into consideration before deciding to use the library. I want to make it clear that I cannot accept any responsibility for any damage caused by using the library. If you feel that this is not suitable for your purposes, you are free to explore alternative libraries or the OpenAI Web-API.
Breaking Changes:

- Renamed the `Engine` keyword to `Model` in accordance with OpenAI's new naming convention.
- Deprecated `DefaultEngineId` in favor of `DefaultModelId`.
- `DefaultEngineId` and `DefaultModelId` are no longer static.

Added support for Azure OpenAI, a big thanks to @copypastedeveloper!

Added support for Tokenizer, inspired by @dluc's https://github.com/dluc/openai-tools repository. Please consider giving the repo a star.

These two changes are recent additions, so please let me know if you encounter any issues.
https://github.com/betalgo/openai/wiki
Author: Betalgo
Source Code: https://github.com/betalgo/openai
License: MIT license
Julia interface for Amazon Web Services.
Based on JuliaCloud/AWSCore.jl.
This package provides automatically generated low-level API wrappers and documentation strings for each operation in each Amazon Web Service.
The following high-level packages are also available: AWS S3, AWS SQS, AWS SNS, AWS IAM, AWS EC2, AWS Lambda, AWS SES and AWS SDB. These packages include operation specific result structure parsing, error handling, type convenience functions, iterators, etc.
Full documentation is available here, or see below for some examples of how to get started.
This package is generated by AWSCore.jl/src/AWSAPI.jl.
Option 1: the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (and `AWS_DEFAULT_REGION`).

Option 2: a `~/.aws/credentials` file:

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Option 3: run the AWS CLI configure command: `aws configure`.
julia> using AWSCore

julia> using AWSSDK.SNS

julia> AWSCore.set_debug_level(1)
julia> SNS.publish(PhoneNumber="+61401555555", Message="Hello")
Dict{String,Any} with 1 entry:
"MessageId" => "f0607542-7b54-5c66-b271-27453b0bd979"
julia> using AWSSDK.S3
julia> r = S3.list_buckets()
XMLDict.XMLDictElement with 2 entries:
"Owner" => <Owner>…
"Buckets" => <Buckets>…
julia> v = [b["Name"] for b in r["Buckets"]["Bucket"]]
3-element Array{String,1}:
"bucket1"
"bucket2"
"bucket3"
julia> S3.put_object(Bucket="bucket1", Key="myfile", Body="mydata")
Response(200 OK, 10 headers, 0 bytes in body)
julia> S3.get_object(Bucket="bucket1", Key="myfile") |> String
"mydata"
julia> using AWSSDK.EC2
julia> r = EC2.describe_images(Filter=[
["Name" => "owner-alias", "Value" => "amazon"],
["Name" => "name", "Value" => "amzn-ami-hvm-2015.09.1.x86_64-gp2"]])
XMLDict.XMLDictElement with 2 entries:
"requestId" => "af8cf64c-d5b0-4e2e-959c-3f703eeb362f"
"imagesSet" => <imagesSet>…
julia> r["imagesSet"]["item"]
XMLDict.XMLDictElement with 17 entries:
"imageId" => "ami-48d38c2b"
"imageLocation" => "amazon/amzn-ami-hvm-2015.09.1.x86_64-gp2"
"imageState" => "available"
"imageOwnerId" => "137112412989"
"creationDate" => "2015-10-29T18:16:22.000Z"
"isPublic" => "true"
"architecture" => "x86_64"
"imageType" => "machine"
"sriovNetSupport" => "simple"
"imageOwnerAlias" => "amazon"
"name" => "amzn-ami-hvm-2015.09.1.x86_64-gp2"
"description" => "Amazon Linux AMI 2015.09.1 x86_64 HVM GP2"
"rootDeviceType" => "ebs"
"rootDeviceName" => "/dev/xvda"
"blockDeviceMapping" => <blockDeviceMapping>…
"virtualizationType" => "hvm"
"hypervisor" => "xen"
julia> using AWSSDK.SES

julia> r = SES.send_email(
Source = "sam@octech.com.au",
Destination = ["ToAddresses" => ["sam@octech.com.au"]],
Message = [
"Subject" => ["Data" => "Hello"],
"Body" => ["Text" => ["Data" => "Hello"]]
])
XMLDict.XMLDictElement with 2 entries:
"SendEmailResult" => <SendEmailResult>…
"ResponseMetadata" => <ResponseMetadata>…
Author: JuliaCloud
Source Code: https://github.com/JuliaCloud/AWSSDK.jl
License: View license
OpenTelemetry-Go is the Go implementation of OpenTelemetry. It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.
Signal | Status | Project |
---|---|---|
Traces | Stable | N/A |
Metrics | Alpha | N/A |
Logs | Frozen [1] | N/A |
Progress and status specific to this repository are tracked in our local project boards and milestones.
Project versioning information and stability guarantees can be found in the versioning documentation.
OpenTelemetry-Go ensures compatibility with the current supported versions of the Go language:
Each major Go release is supported until there are two newer major releases. For example, Go 1.5 was supported until the Go 1.7 release, and Go 1.6 was supported until the Go 1.8 release.
For versions of Go that are no longer supported upstream, opentelemetry-go will stop ensuring compatibility with those versions over time.
Currently, this project supports the following environments.
OS | Go Version | Architecture |
---|---|---|
Ubuntu | 1.20 | amd64 |
Ubuntu | 1.19 | amd64 |
Ubuntu | 1.20 | 386 |
Ubuntu | 1.19 | 386 |
MacOS | 1.20 | amd64 |
MacOS | 1.19 | amd64 |
Windows | 1.20 | amd64 |
Windows | 1.19 | amd64 |
Windows | 1.20 | 386 |
Windows | 1.19 | 386 |
While this project should work for other systems, no compatibility guarantees are made for those systems currently.
You can find a getting started guide on opentelemetry.io.
OpenTelemetry's goal is to provide a single set of APIs to capture distributed traces and metrics from your application and send them to an observability platform. This project allows you to do just that for applications written in Go. There are two steps to this process: instrument your application, and configure an exporter.
To start capturing distributed traces and metric events from your application it first needs to be instrumented. The easiest way to do this is by using an instrumentation library for your code. Be sure to check out the officially supported instrumentation libraries.
If you need to extend the telemetry an instrumentation library provides or want to build your own instrumentation for your application directly you will need to use the Go otel package. The included examples are a good way to see some practical uses of this process.
Now that your application is instrumented to collect telemetry, it needs an export pipeline to send that telemetry to an observability platform.
All officially supported exporters for the OpenTelemetry project are contained in the exporters directory.
Exporter | Metrics | Traces |
---|---|---|
Jaeger | | ✓ |
OTLP | ✓ | ✓ |
Prometheus | ✓ | |
stdout | ✓ | ✓ |
Zipkin | | ✓ |
See the contributing documentation.
Author: open-telemetry
Source Code: https://github.com/open-telemetry/opentelemetry-go
License: Apache-2.0 license
An implementation of the Stability AI SDK in Dart. Stability AI is a solution studio dedicated to innovating ideas.
A demonstration of using the Stability SDK in Flutter and Dart.
Sample prompts from the gallery (the preview output images are omitted here):

Dogs: "generate an oil painting canvas of a dog, realistic, painted by Leonardo da Vinci"

Cats: "generate an oil painting canvas of a cat, realistic, painted by Leonardo da Vinci"

Cyberpunk: "generate a cyberpunk scene, in japan, realistic street scene on the night"
Stability AI requires you to create your own API key to make calls to the API. You can create one here.
Create a `.env` file and set your Stability AI API key.
The example provided is using the SDK directly in a Flutter app. In most cases, you're going to use the SDK in the backend using tools like Dart Frog. This is to secure the API key and to have more control of the incoming requests, e.g. controlling rate limits or blocking sensitive content.
// 1. Setup the API client
final client = StabilityApiClient.init("<YOUR_API_KEY_HERE>");
// 2. Create a generation request
final request = RequestBuilder("an oil painting of a dog in the canvas, wearing knight armor, realistic painting by Leonardo da Vinci")
.setHeight(512)
.setWidth(512)
.setEngineType(EngineType.inpainting_v2_0)
.setSampleCount(1)
.build();
// 3. Subscribe to the response
client.generate(request).listen((answer) {
image = answer.artifacts?.first.getImage();
});
Author: joshuadeguzman
Source code: https://github.com/joshuadeguzman/stability-sdk-dart
License: BSD-3-Clause license
Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user.
Alan is a conversational voice AI platform that lets you create an intelligent voice assistant for your app. It offers all necessary tools to design, embed and host your voice solutions:
A powerful web-based IDE where you can write, test and debug dialog scenarios for your voice assistant or chatbot.
Alan's lightweight SDKs to quickly embed a voice assistant to your app.
Alan's AI-backend powered by the industry’s best Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Speech Synthesis. The Alan Cloud provisions and handles the infrastructure required to maintain your voice deployments and perform all the voice processing tasks.
To get more details on how Alan works, see Alan Platform.
To create a voice assistant for your web app or page:
Sign up for Alan Studio to build voice scripts in JavaScript and test them.
Use the Alan Web SDK to embed a voice assistant in your app or page. For details, see the Alan AI documentation for your framework.
Check out our demo.
In the Examples folder, you can find example web apps created with:
To launch the app, follow the instructions in the README file inside the example app folder. Then press the Alan button and try interacting with Alan.
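To make the embedding step concrete, here is a minimal sketch of adding the Alan button to a web page. The project key is a placeholder from your Alan Studio project, and the command name and its payload are illustrative; your dialog script defines the actual commands.

import alanBtn from "@alan-ai/alan-sdk-web";

alanBtn({
  // Paste the key from your Alan Studio project (placeholder shown here).
  key: "YOUR_ALAN_STUDIO_PROJECT_KEY",
  onCommand: (commandData: any) => {
    // Handle commands sent from your dialog script; "navigate" is just an example.
    if (commandData.command === "navigate") {
      console.log("navigate to", commandData.route);
    }
  },
});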
You may also want to try Alan Client SDKs for the following platforms:
If you have any questions or something is missing in the documentation:
Alan Platform • Alan Studio • Docs • FAQ • Blog • Twitter
Author: Alan-ai
Source Code: https://github.com/alan-ai/alan-sdk-web
iOS Chat SDK in Swift - Build your own app chat experience for iOS using the official Stream Chat API
This is the official iOS SDK for Stream Chat, a service for building chat and messaging applications. This library includes both a low-level SDK and a set of reusable UI components.
The StreamChat SDK is a low level client for the Stream chat service that doesn't contain any UI components. It is meant to be used when you want to build a fully custom UI. For the majority of use cases though, we recommend using our highly customizable UI SDKs.
The StreamChatUI SDK is our UI SDK for UIKit components. If your application needs to support iOS 13 and below, this is the right UI SDK for you.
The StreamChatSwiftUI SDK is our UI SDK for SwiftUI components. If your application only needs to support iOS 14 and above, this is the right UI SDK for you. This SDK is available in another repository stream-chat-swiftui.
Since the 4.20.0 release, our SDKs can be built using Xcode 14. Currently, there are no known issues on iOS 16. If you spot one, please create a ticket.
- Supports `tintColor`, `layoutMargins`, light/dark mode, dynamic font sizes, etc.
- `UIKit` patterns and paradigms: The API follows the design of native system SDKs. It makes integration with your existing code easy and familiar.
- `SwiftUI` support: We have developed a brand new SDK to help you have smoother Stream Chat integration in your SwiftUI apps.
- First-class support for `Combine`: The StreamChat SDK (Low Level Client) has Combine wrappers to make it really easy to use in an app that uses `Combine`.
.Stream is free for most side and hobby projects. You can use Stream Chat for free if you have less than five team members and no more than $10,000 in monthly revenue.
- Progressive disclosure: The SDK can be used easily with very minimal knowledge of it. As you become more familiar with it, you can dig deeper and start customizing it on all levels.
- Highly customizable: Every element is designed to be easily customizable. You can modify the brand color by setting `tintColor`, apply appearance changes using custom UI rules, or subclass existing elements and inject them everywhere in the system, no matter how deep the logic hierarchy is.
- `open` by default: Everything is `open` unless there's a strong reason for it not to be. This means you can easily modify almost every behavior of the SDK such that it fits your needs.
- Good platform citizen: The UI elements behave like good platform citizens. They use existing iOS patterns; their behavior is predictable and matches system UI components; they respect `tintColor`, `layoutMargins`, dynamic font sizes, and other system-defined UI constants.
This SDK tries to keep the list of external dependencies to a minimum. Starting with 4.6.0, and in order to improve the developer experience, dependencies are hidden inside our libraries. (This does not apply to StreamChatSwiftUI's dependencies yet.)
Learn more about our dependencies here
You can still integrate our SDKs if your project is using Objective-C. In that case, any customizations would need to be done by subclassing our components in Swift, and then using those directly from the Objective-C code.
We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers all over the world. Check out our current openings and apply via Stream's website.
The following feature lists summarize the UI components (preview screenshots are omitted here).

Channel list features:

- A list of channels matching the provided query
- Channel name and image based on the channel members or custom data
- Unread messages indicator
- Preview of the last message
- Online indicator for avatars
- Create new channel and start right away

Message list features:

- A list of messages in a channel
- Photo preview
- Message reactions
- Message grouping based on the send time
- Link preview
- Inline replies
- Message threads
- GIPHY support

Message composer features:

- Support for multiline text, expands and shrinks as needed
- Image and file attachments
- Replies to messages
- Tagging of users
- Chat commands like mute, ban, giphy

Commands features:

- Easily search commands by writing the / symbol or tapping the bolt icon
- GIPHY support out of the box
- Supports mute, unmute, ban, unban commands
- WIP support of custom commands

User tagging features:

- User mentions preview
- Easily search for a concrete user
- Mention as many users as you want
Author: GetStream
Source Code: https://github.com/GetStream/stream-chat-swift
License: View license
Chat SDK is a fully featured open source instant messaging framework for iOS. It is scalable and flexible, and follows a number of key principles.
Full breakdown is available on the features page.
Learn about the history of Chat SDK and our future plans in this post.
People always ask about how much Chat SDK costs to run. And will it scale to millions of users? So I wrote an article talking about just that.
If you're a freelance developer looking for work, join our Discord server. We often have customers looking for freelance developers.
You can also help us by:
Email us at: team@sdk.chat
We also offer development services; we are a team of full stack developers who are Firebase experts. For more information check out our consulting site.
This repository contains a fully functional version of the Chat SDK which is configured using our Firebase account. This is a great way to test the features of the Chat SDK before you start integrating it with your app.
Run `pod install` in the Xcode directory, then open the `Chat SDK Firebase.xcworkspace` file in Xcode.
file in XcodeWe are currently updating the Chat SDK to use Swift, this will happen gradually. In the meantime, the Chat SDK API is fully compatible with Swift projects.
The Chat SDK is fully compatible with Swift projects and contains a Swift demo project.
Run `pod install` in the XcodeSwift directory, then open the `ChatSDKSwift.xcworkspace` file in Xcode.
file in XcodeQuick start guide - it takes about 10 minutes!
use_frameworks!
pod "ChatSDK"
pod "ChatSDKFirebase/Adapter"
pod "ChatSDKFirebase/Upload"
pod "ChatSDKFirebase/Push"
Optional
pod "ChatSDK/ModAddContactWithQRCode"
Run `pod update` to get the latest version of the code.
Open the App Delegate and add the following code to initialise the chat.
Swift
AppDelegate.swift
import ChatSDK
Add the following code to the start of your didFinishLaunchingWithOptions function:
let config = BConfiguration.init();
config.rootPath = "test"
// Configure other options here...
config.allowUsersToCreatePublicChats = true
// Define the modules you want to use.
var modules = [
FirebaseNetworkAdapterModule.shared(),
FirebasePushModule.shared(),
FirebaseUploadModule.shared(),
// Optional...
AddContactWithQRCodeModule.init(),
]
BChatSDK.initialize(config, app: application, options: launchOptions, modules: modules)
self.window = UIWindow.init(frame: UIScreen.main.bounds)
self.window?.rootViewController = BChatSDK.ui().splashScreenNavigationController()
self.window?.makeKeyAndVisible();
Then add the following methods:
func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
BChatSDK.application(application, didRegisterForRemoteNotificationsWithDeviceToken: deviceToken)
}
func application(_ application: UIApplication, didReceiveRemoteNotification userInfo: [AnyHashable : Any], fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
BChatSDK.application(application, didReceiveRemoteNotification: userInfo)
}
func application(_ application: UIApplication, open url: URL, sourceApplication: String?, annotation: Any) -> Bool {
return BChatSDK.application(application, open: url, sourceApplication: sourceApplication, annotation: annotation)
}
func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
return BChatSDK.application(app, open: url, options: options)
}
Objective C
Check the demo project.
The Root Path
The root path variable allows you to run multiple Chat SDK instances on one Firebase account. Each different root path will represent a completely separate set of Firebase data. This can be useful for testing because you could have separate test and prod root paths.
Note:
It is worth opening your downloaded `GoogleService-Info.plist` and checking there is an `API_KEY` field included. Sometimes Firebase's automatic download doesn't include this in the plist. To rectify, just re-download the plist from the project settings menu.
Copy the following rows from the demo ChatSDK Info.plist file to your project's Info.plist
App Transport Security Settings
URL types
Make sure that the URL types are all set correctly. The URL type for your app should be set to your bundle id
All the privacy rows. These will allow the app to access the camera, location and address book
In the Firebase dashboard click Authentication -> Sign-in method and enable all the appropriate methods
Add the security rules. The rules also enable optimized user search so this step is very important!
Enable file storage - Click Storage -> Get Started
Enable push notifications
Enable location messages. Get a Google Maps API key. Then add it during the Chat SDK configuration
Objective C
config.googleMapsApiKey = @"YOUR API KEY";
Swift
config.googleMapsApiKey = "YOUR API KEY"
The Push Notification module allows you to send free push notifications using Firebase Cloud Messaging.
Setup Firebase Cloud Functions
Follow the instructions on our Chat SDK Firebase repository
Firebase secures your data by allowing you to write rules to govern who can access the database and what can be written. The rules are also needed to enable user search. To enable the rules see the guide Enabling Security Rules.
Congratulations! 🎉🎉 You've just turned your app into a fully featured instant messenger! Keep reading below to learn how to further customize the Chat SDK.
To go deeper, checkout the API Guide for help with:
View the API documentation here.
Next Steps
There are a number of configuration options available. Check out the BConfiguration class. Using this class you can do things like:
To customize the UI, you can register subclasses for different views. You can do that using the UI service `BChatSDK.ui`. For example, to register a new login view controller you would use:
BChatSDK.ui.loginViewController = [[YourViewController alloc] initWithNibName:Nil bundle: Nil];
To modify the chat view you would register a provider:
[BChatSDK.ui setChatViewController:^BChatViewController *(id<PThread> thread) {
return [[YourChatViewController alloc] initWithThread:thread];
}];
Every view controller in the app can be customized this way.
Any of the Chat SDK views can be added into your app. Check out the PInterfaceFacade for options. You can add any view using the following pattern. Here we are using the interface service to get the particular view.
Objective-C
UIViewController * privateThreadsViewController = [BChatSDK.ui privateThreadsViewController];
Swift
let privateThreadsViewController = BChatSDK.ui().a.privateThreadsViewController()
To do that, you can take advantage of the `BIntegrationHelper` class. This provides some helper methods to make it easier to integrate the Chat SDK with your app.
At the most basic level, you need to do the following:
Objective-C
[BIntegrationHelper authenticateWithToken:@"your token"];
Swift
BIntegrationHelper.authenticate(withToken: "your token")
Objective-C
[BIntegrationHelper updateUserWithName:@"Name" image: image url: imageURL];
Swift
BIntegrationHelper.updateUser(withName: "Name", image: image, url: imageURL)
Objective-C
[BIntegrationHelper logout];
Swift
BIntegrationHelper.logout()
There are a number of free and premium extensions that can be added to the Chat SDK.
For the following modules:
The free modules are located in the chat-sdk-ios/ChatSDKFirebase folder. The premium modules can be purchased and downloaded from the links provided above.
To install a module you should use the following steps:
The File UI module allows you to use the native Firebase user interface for authentication.
After adding the files to your Xcode project, add the following to the App Delegate to enable the module.
Objective C
AppDelegate.m -> application: didFinishLaunchingWithOptions:
#import "BFirebaseUIModule.h"
[[[BFirebaseUIModule alloc] init] activateWithProviders: @[]];
Swift
[YourProject]-Bridging-Header.h
#import "BFirebaseUIModule.h"
AppDelegate.swift
BFirebaseUIModule.init().activate(withProviders: []);
You should pass in an array of the `FUIAuthProvider` objects you want to support.
Also add the following to your Podfile depending on which authentication methods you want to support:
pod 'FirebaseUI/Facebook', '~> 4.0'
pod 'FirebaseUI/Google', '~> 4.0'
pod 'FirebaseUI/Twitter', '~> 4.0'
pod 'FirebaseUI/Phone', '~> 4.0'
Then run pod install
.
Note: If you want to use the Firebase Auth UI, make sure you comment out the following line:
BNetworkManager.shared().a.auth().setChallenge(BLoginViewController.init(nibName: nil, bundle: nil));
For the following modules:
These modules are distributed as development pods. After you've downloaded the module, unzip it and add it to the ChatSDKModules folder. Then:
pod "ChatSDKModules/[ModuleName]", :path => "[Path to ChatSDKModules folder]"
pod install
The Chat SDK API is based around the network manager and a series of handlers. A good place to start is by looking at the handler interfaces in `Pods/Development Pods/ChatSDK/Core/Core/Classes/Interfaces`; they are well documented. To use a handler you would use the following code:
Objective C
[BChatSDK.handler_name function: to: call:]
Swift
BNetworkManager.shared().a.handler_name().function(to:call:)
Searching for a user
For example, to search for a user you could use the search handler:
-(RXPromise *) usersForIndexes: (NSArray *) indexes withValue: (NSString *) value limit: (int) limit userAdded: (void(^)(id<PUser> user)) userAdded;
Here you pass in a series of indexes to be used in the search, i.e. name, email, etc., and a value. It will then return a series of user objects.
You can also see example implementations of these handlers by looking at the `BFirebaseSearchHandler` class, and by seeing how the method is used in the Chat SDK.
Starting a chat
To start a chat you can use the core handler.
-(RXPromise *) createThreadWithUsers: (NSArray *) users
threadCreated: (void(^)(NSError * error, id<PThread> thread)) thread;
When this method completes, the thread will have been created on Firebase and all the users will have been added. You could then open the thread using the interface adapter.
UIViewController * chatViewController = [BChatSDK.ui chatViewControllerWithThread:thread];
So a more complete example would look like this:
-(void) startChatWithUser {
MBProgressHUD * hud = [MBProgressHUD showHUDAddedTo:self.view animated:YES];
hud.label.text = [NSBundle t:bCreatingThread];
[BChatSDK.core createThreadWithUsers:@[_user] threadCreated:^(NSError * error, id<PThread> thread) {
if (!error) {
[self pushChatViewControllerWithThread:thread];
}
else {
[UIView alertWithTitle:[NSBundle t:bErrorTitle] withMessage:[NSBundle t:bThreadCreationError]];
}
[MBProgressHUD hideHUDForView:self.view animated:YES];
}];
}
-(void) pushChatViewControllerWithThread: (id<PThread>) thread {
if (thread) {
UIViewController * chatViewController = [BChatSDK.ui chatViewControllerWithThread:thread];
[self.navigationController pushViewController:chatViewController animated:YES];
}
}
We offer a choice of two licenses for this app. You can either use the Chat SDK license or the GPLv3 license.
Most Chat SDK users either want to add the Chat SDK to an app that will be released to the App Store or they want to use the Chat SDK in a project for their client. The Chat SDK license gives you complete flexibility to do this for free.
Chat SDK License Summary
If a user wants to distribute the Chat SDK source code, we feel that any additions or modifications they make to the code should be contributed back to the project. The GPLv3 license ensures that if source code is distributed, it must remain open source and available to the community.
GPLv3 License Summary
What does this mean?
Please check out the Licensing FAQ for more information.
Author: Chat-sdk
Source Code: https://github.com/chat-sdk/chat-sdk-ios
License: View license
Chat SDK is a fully featured open source instant messaging framework for Android. It is scalable and flexible, and follows a number of key principles.
Please bear in mind that this version is a major update. As a result we are making new releases every few days to fix bugs and crashes. If you see an issue, please report it on the Github bug tracker and we will fix it.
Sponsor us on either GitHub Sponsors or Patreon and get these features. For full details visit our Modules page.
When you support us on Patreon, you get extra modules, code updates, and support, as well as special access to the Discord server.
Visit our Animated GIF Gallery to see all the features.
Learn about the history of Chat SDK and our future plans in this post.
People always ask about how much Chat SDK costs to run. And will it scale to millions of users? So I wrote an article talking about just that.
The Chat SDK library with ALL modules is around 20 MB.
You can also help us by:
Email us at: team@sdk.chat
We also offer development services; we are a team of full stack developers who are Firebase experts. For more information check out our consulting site.
If you are looking for something more lightweight than Chat SDK, we also have a library which only provides instant messaging functionality.
You can check out the project: Firestream on Github.
Bear in mind that the video is not updated frequently. Please cross-reference it with the text-based instructions for the latest Gradle dependencies.
We provide extensive documentation on GitHub, but if you're a non-technical user or want to save yourself some work, you can take advantage of our setup and integration service.
Author: Chat-sdk
Source Code: https://github.com/chat-sdk/chat-sdk-android
License: Apache-2.0 license
Bad software is everywhere, and we're tired of it. Sentry is on a mission to help developers write better software faster, so we can get back to enjoying technology. If you want to join us, check out our open positions.
Sentry SDK for React Native
react-native >= 0.56.0
To install the package:
npm install --save @sentry/react-native
# OR
yarn add @sentry/react-native
If you are using a version of React Native <= 0.60.x, link the package using `react-native`:
react-native link @sentry/react-native
# OR, if self hosting
SENTRY_WIZARD_URL=http://sentry.acme.com/ react-native link @sentry/react-native
How to use it:
import * as Sentry from "@sentry/react-native";
Sentry.init({
dsn: "__DSN__",
});
Sentry.setTag("myTag", "tag-value");
Sentry.setExtra("myExtra", "extra-value");
Sentry.addBreadcrumb({ message: "test" });
Sentry.captureMessage("Hello Sentry!");
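Beyond the manual capture calls above, the SDK can also wrap your root component for automatic instrumentation. A minimal sketch, assuming App is your root component (the import path and the tracesSampleRate value are illustrative):

import * as Sentry from "@sentry/react-native";
import App from "./App"; // your root component (path is an assumption)

Sentry.init({
  dsn: "__DSN__",
  // Sample 100% of transactions for performance monitoring; tune this down in production.
  tracesSampleRate: 1.0,
});

// Wrapping the root component enables automatic error and performance instrumentation.
export default Sentry.wrap(App);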
If you are coming from `react-native-sentry`, which was our SDK before version 1.0, you should follow the upgrade guide and then follow the install steps.
Mobile Vitals - Four Metrics Every Mobile Developer Should Care About.
Performance Monitoring Support for React Native.
Author: Getsentry
Source Code: https://github.com/getsentry/sentry-react-native
License: MIT license
Soto is a Swift language SDK for Amazon Web Services (AWS), working on Linux, macOS and iOS. This library provides access to all AWS services. The service APIs it provides are a direct mapping of the REST APIs Amazon publishes for each of its services. Soto is a community supported project and is in no way affiliated with AWS.
The library consists of three parts: the generated service API libraries, the core framework (soto-core) that handles request signing and communication, and the code generator that creates the service API files.
Soto uses the Swift Package Manager to manage its code dependencies. To use Soto in your codebase it is recommended you do the same. Add a dependency to the package in your own Package.swift dependencies.
dependencies: [
.package(url: "https://github.com/soto-project/soto.git", from: "6.0.0")
],
Then add target dependencies for each of the Soto targets you want to use.
targets: [
.target(name: "MyApp", dependencies: [
.product(name: "SotoS3", package: "soto"),
.product(name: "SotoSES", package: "soto"),
.product(name: "SotoIAM", package: "soto")
]),
]
Alternatively if you are using Xcode 11 or later you can use the Swift Package Manager integration and add a dependency to Soto through that.
Soto works on Linux, macOS and iOS. It requires v2.0 of Swift NIO. Below is a compatibility table for different Soto versions.
Version | Swift | MacOS | iOS | Linux | Vapor |
---|---|---|---|---|---|
6.x | 5.4 - | ✓ | 12.0 - | Ubuntu 18.04-22.04 | 4.0 |
5.x | 5.2 - | ✓ | 12.0 - | Ubuntu 18.04-20.04 | 4.0 |
4.x | 5.0 - | ✓ | 12.0 - | Ubuntu 18.04-20.04 | 4.0 |
Before using the SDK, you will need AWS credentials to sign all your requests. Credentials can be provided to the library via the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, among other ways.
You can find out more about credential providers here
To use Soto you need to create an `AWSClient` and a service object for the AWS service you want to work with. The `AWSClient` provides all the communication with AWS, and the service object provides the configuration and APIs for communicating with a specific AWS service. More can be found out about `AWSClient` here and the AWS service objects here.
Each Soto command returns a Swift NIO `EventLoopFuture`. An `EventLoopFuture` is not the response of the command, but rather a container object that will be populated with the response at a later point. In this manner, calls to AWS do not block the main thread. It is recommended you familiarise yourself with the Swift NIO documentation, specifically EventLoopFuture, if you want to take full advantage of Soto.
The recommended manner to interact with `EventLoopFuture`s is chaining. The following function returns an `EventLoopFuture` that creates an S3 bucket, puts a file in the bucket, reads the file back from the bucket, and finally prints the contents of the file. Each of these operations is chained together: the output of one is the input of the next.
import SotoS3 // ensure this module is specified as a dependency in your Package.swift
let bucket = "my-bucket"
let client = AWSClient(
credentialProvider: .static(accessKeyId: "Your-Access-Key", secretAccessKey: "Your-Secret-Key"),
httpClientProvider: .createNew
)
let s3 = S3(client: client, region: .uswest2)
func createBucketPutGetObject() -> EventLoopFuture<S3.GetObjectOutput> {
    // Create Bucket, Put an Object, Get the Object
    let createBucketRequest = S3.CreateBucketRequest(bucket: bucket)
    let getObjectFuture = s3.createBucket(createBucketRequest)
        .flatMap { response -> EventLoopFuture<S3.PutObjectOutput> in
            // Upload a text file to S3
            let bodyData = "hello world".data(using: .utf8)!
            let putObjectRequest = S3.PutObjectRequest(
                acl: .publicRead,
                body: bodyData,
                bucket: bucket,
                key: "hello.txt"
            )
            return s3.putObject(putObjectRequest)
        }
        .flatMap { response -> EventLoopFuture<S3.GetObjectOutput> in
            let getObjectRequest = S3.GetObjectRequest(bucket: bucket, key: "hello.txt")
            return s3.getObject(getObjectRequest)
        }
    // Print the contents of the file once the chain succeeds, then return the future
    getObjectFuture.whenSuccess { response in
        if let body = response.body {
            print(String(data: body, encoding: .utf8)!)
        }
    }
    return getObjectFuture
}
Soto is a very large package. If you would rather not include it in your package dependencies, you can instead use the SotoCodeGenerator Swift Package Manager build plugin to generate the Swift source code for only the services/operations you actually need. Find out more here.
Visit soto.codes to browse the user guides and api reference. As there is a one-to-one correspondence with AWS REST api calls and the Soto api calls, you can also use the official AWS documentation for more detailed information about AWS commands.
Additional user guides for specific elements of Soto are available
We welcome and encourage contributions from all developers. Please read CONTRIBUTING.md for our contributing guidelines.
Author: Soto-project
Source Code: https://github.com/soto-project/soto
License: Apache-2.0 license