Path-Tracing-SDK: Real-time Path Tracing Library and Sample

Path Tracing SDK v1.0.0


Overview

Path Tracing SDK is a code sample that strives to embody years of ray tracing and neural graphics research and experience. It is intended as a starting point for a path tracer integration, as a reference for various integrated SDKs, and/or for learning and experimentation.

The base path tracing implementation derives from NVIDIA’s Falcor Research Path Tracer, ported to the approachable C++/HLSL Donut framework.


Features

  • DirectX 12 and Vulkan back-ends
  • Reference and real-time modes
  • Simple BSDF model that is easy to extend
  • Simple asset pipeline based on glTF 2.0 (support for a subset of glTF extensions including animation)
  • NEE/visibility rays and importance sampling for environment maps with MIS
  • Basic volumes and nested dielectrics with priority
  • RayCone for texture MIP selection
  • Basic analytic lights (directional, spot, point)
  • RTXDI integration for ReSTIR DI (light importance sampling) and ReSTIR GI (indirect lighting)
  • OMM integration for fast ray traced alpha testing
  • NRD ReLAX and ReBLUR denoiser integration with up to 3-layer path space decomposition (Stable Planes)
  • Reference mode 'photo-mode screenshot' with basic OptiX denoiser integration
  • Basic TAA, tone mapping, etc.
  • Streamline + DLSS integration (coming very soon)
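Several of the features above (NEE with MIS, environment map importance sampling with MIS) rest on multiple importance sampling. As background only, not code from this SDK, the standard balance heuristic that weights a sample produced by one strategy against a competing strategy looks like this:

```javascript
// Balance heuristic for multiple importance sampling (MIS), background
// illustration only. pdfA is the density of the strategy that produced
// the sample; pdfB is the density the competing strategy would have
// assigned to the same direction. The weights of the two strategies
// sum to 1, which keeps the combined estimator unbiased.
function balanceHeuristic(pdfA, pdfB) {
  return pdfA / (pdfA + pdfB);
}

// A light sample taken with pdf 0.8, competing against a BSDF pdf of 0.2,
// keeps most of its contribution:
console.log(balanceHeuristic(0.8, 0.2)); // 0.8
```

In a path tracer this weight multiplies the NEE contribution, while BSDF-sampled hits on the same light are weighted with the arguments swapped, so the two estimates combine without double counting.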

Known Issues

  • DLSS is currently not enabled due to the upgrade to Streamline 2.0; integration is a work in progress
  • SER support on Vulkan is currently a work in progress

Requirements

  • Windows 10 20H1 (version 2004-10.0.19041) or newer
  • DXR Capable GPU
  • GeForce Game Ready Driver 531.18 or newer
  • DirectX 12 or Vulkan API
  • DirectX Raytracing 1.1 API, or higher
  • Visual Studio 2019 or later

Folder Structure

  
/bin                  default folder for binaries and compiled shaders
/build                default folder for build files
/donut                code for a custom version of the Donut framework
/donut/nvrhi          code for the NVRHI rendering API layer (a git submodule)
/external             external libraries and SDKs, including NRD, RTXDI, and OMM
/media                models, textures, scene files
/tools                optional command line tools (denoiser, texture compressor, etc)
/pt_sdk               Path Tracing SDK core; Sample.cpp/.h/.hlsl contain entry points
/pt_sdk/PathTracer    core path tracing shaders

Build

At the moment, only Windows builds are supported. We are going to add Linux support in the future.

Clone the repository with all submodules recursively:

git clone --recursive https://github.com/NVIDIAGameWorks/Path-Tracing-SDK.git

Pull the media files from Packman:

cd Path-Tracing-SDK
update_dependencies.bat

Create a build folder.

mkdir build
cd build

Any folder name works, but git is configured to ignore folders named build\*

Use CMake to configure the build and generate the project files.

Use of the CMake GUI is recommended, but cmake .. works too. Make sure to select the x64 platform for the generator.

Build the solution generated by CMake in the build folder.

Open the generated solution (e.g. build/PathTracingSDK.sln) with Visual Studio and build it.

Select and run the pt_sdk project. Binaries are built to the bin folder, and media is loaded from the media folder.

If making a binary build, the media and tools folders can be placed into bin and packed up together (i.e. the sample app will search for both media\ and ..\media\).

User Interface

Once the application is running, most of the SDK features can be accessed via the UI window on the left hand side and drop-down controls in the top-center.


The camera can be moved using the W/S/A/D keys and rotated by dragging with the left mouse button.

Command Line

  • -debug to enable the graphics API debug layer or runtime, and the NVRHI validation layer.
  • -fullscreen to start in full screen mode.
  • -no-vsync to start without VSync (can be toggled in the GUI).
  • -print-graph to print the scene graph into the output log on startup.
  • -width and -height to set the window size.
  • <FileName> to load any supported model or scene from the given file.

Developer Documentation

We are working on more detailed SDK developer documentation - watch this space!

Contact

Path Tracing SDK is under active development. Please report any issues directly through GitHub issue tracker, and for any information, suggestions or general requests please feel free to contact us at pathtracing-sdk-support@nvidia.com!


Download Details:

Author: NVIDIAGameWorks
Source Code: https://github.com/NVIDIAGameWorks/Path-Tracing-SDK 
License: View license

#cplusplus #sdk #realtime #path 

Gordon Matlala

Integrate Cutting-edge LLM Technology Quickly & Easily Into Your Apps

Semantic Kernel

ℹ️ NOTE: This project is in early alpha and, just like AI, will evolve quickly. We invite you to join us in developing the Semantic Kernel together! Please contribute by using GitHub Discussions, opening GitHub Issues, or sending us PRs.

Semantic Kernel (SK) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. The SK extensible programming model combines natural language semantic functions, traditional code native functions, and embeddings-based memory, unlocking new potential and adding value to applications with AI.

SK supports prompt templating, function chaining, vectorized memory, and intelligent planning capabilities out of the box.
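As an illustration of what prompt templating means here (a generic sketch, not Semantic Kernel's actual template engine), a `{{$variable}}` placeholder is replaced with a value supplied at run time:

```javascript
// Minimal {{$variable}}-style template rendering, in the spirit of SK's
// prompt templates. Unknown variables are left in place untouched.
function renderPrompt(template, variables) {
  return template.replace(/\{\{\s*\$(\w+)\s*\}\}/g, (match, name) =>
    name in variables ? String(variables[name]) : match);
}

const prompt = renderPrompt(
  "{{$input}}\n\nGive me the TLDR in 5 words.",
  { input: "A robot may not injure a human being..." }
);
console.log(prompt);
```

The real SDK layers function chaining on top of this: the output of one rendered-and-completed prompt becomes the `$input` of the next.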


Semantic Kernel is designed to support and encapsulate several design patterns from the latest in AI research, such that developers can infuse their applications with complex skills like prompt chaining, recursive reasoning, summarization, zero/few-shot learning, contextual memory, long-term memory, embeddings, semantic indexing, planning, and accessing external knowledge stores as well as your own data.
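Of the patterns above, embeddings-based memory is the most mechanical: text is mapped to vectors, and the relevance of stored memories to a query is typically scored by cosine similarity. A generic sketch of that scoring (not SK's implementation):

```javascript
// Cosine similarity, the usual relevance measure behind embeddings-based
// memory: vectors pointing the same way score near 1, orthogonal
// (unrelated) vectors score near 0.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A memory store ranks its stored embeddings against the query embedding
// and returns the closest entries as context for the next prompt.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```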

By joining the SK community, you can build AI-first apps faster and have a front-row peek at how the SDK is being built. SK has been released as open-source so that more pioneering developers can join us in crafting the future of this landmark moment in the history of computing.

Samples ⚡

If you would like a quick overview of how Semantic Kernel can integrate with your app, start by cloning the repository:

git clone https://github.com/microsoft/semantic-kernel.git

and try these examples:

  
  • Simple chat summary: use ready-to-use skills and get those skills into your app easily.
  • Book creator: use the planner to deconstruct a complex goal and envision using the planner in your app.
  • Authentication and APIs: use a basic connector pattern to authenticate and connect to an API, and imagine integrating external data into your app's LLM AI.

For a more hands-on overview, you can also run the Getting Started notebook to look into the syntax, create Semantic Functions, work with Memory, and see how the kernel works.

Please note:

  • You will need an OpenAI API key or Azure OpenAI service key to get started.
  • There are a few software requirements you may need to satisfy before running examples and notebooks:
    1. Azure Functions Core Tools used for running the kernel as a local API, required by the web apps.
    2. Yarn used for installing web apps' dependencies.
    3. Semantic Kernel supports .NET Standard 2.1, and it is recommended to use .NET 6 or newer. However, some of the examples in the repository require .NET 7 and the VS Code Polyglot extension to run the notebooks.

Get Started with Semantic Kernel ⚡

Here is a quick example of how to use Semantic Kernel from a C# console app.

Create a new project, targeting .NET 6 or newer, and add the Microsoft.SemanticKernel nuget package:

dotnet add package Microsoft.SemanticKernel --prerelease

See nuget.org for the latest version and more instructions.

Copy and paste the following code into your project, with your Azure OpenAI key in hand (you can create one here).

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.KernelExtensions;

var kernel = Kernel.Builder.Build();

// For Azure Open AI service endpoint and keys please see
// https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=rest-api
kernel.Config.AddAzureOpenAICompletionBackend(
    "davinci-backend",                   // Alias used by the kernel
    "text-davinci-003",                  // Azure OpenAI *Deployment ID*
    "https://contoso.openai.azure.com/", // Azure OpenAI *Endpoint*
    "...your Azure OpenAI Key..."        // Azure OpenAI *Key*
);

string skPrompt = @"
{{$input}}

Give me the TLDR in 5 words.
";

string textToSummarize = @"
1) A robot may not injure a human being or, through inaction,
allow a human being to come to harm.

2) A robot must obey orders given it by human beings except where
such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
";

var tldrFunction = kernel.CreateSemanticFunction(skPrompt);

var summary = await kernel.RunAsync(textToSummarize, tldrFunction);

Console.WriteLine(summary);

// Output => Protect humans, follow orders, survive.

Contributing and Community

We welcome your contributions and suggestions to the SK community! One of the easiest ways to participate is to engage in discussions in the GitHub repository. Bug reports and fixes are welcome!

For new features, components, or extensions, please open an issue and discuss with us before sending a PR. This is to avoid rejection as we might be taking the core in a different direction, but also to consider the impact on the larger ecosystem.

To learn more and get started:

Python developers: Semantic Kernel is coming to Python soon! Check out the work-in-progress and contribute in the python-preview branch. 
 

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.


Download Details:

Author: Microsoft
Source Code: https://github.com/microsoft/semantic-kernel 
License: MIT license

#ai #sdk #artificialintelligence #openai

Gordon Murray

Librealsense: Intel® RealSense™ SDK

Intel® RealSense™ SDK


Overview

Intel® RealSense™ SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras (D400 & L500 series and the SR300) and the T265 tracking camera.

📌 For other Intel® RealSense™ devices (F200, R200, LR200 and ZR300), please refer to the latest legacy release.

The SDK allows depth and color streaming, and provides intrinsic and extrinsic calibration information. The library also offers synthetic streams (pointcloud, depth aligned to color and vice versa), and built-in support for recording and playback of streaming sessions.

Developer kits containing the necessary hardware to use this library are available for purchase at store.intelrealsense.com. Information about Intel® RealSense™ technology is available at www.intelrealsense.com.

📂 Don't have access to a RealSense camera? Check-out sample data

Update on Recent Changes to the RealSense Product Line

Intel has EOLed the LiDAR, Facial Authentication, and Tracking product lines. These products have been discontinued and will no longer be available for new orders.

Intel WILL continue to sell and support stereo products, including the D410, D415, D430, D401, and D450 modules and the D415, D435, D435i, D435f, D405, D455, and D457 depth cameras. We will also continue the work to support and develop our LibRealSense open source SDK.

In the future, Intel and the RealSense team will focus our new development on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy.

Building librealsense - Using vcpkg

You can download and install librealsense using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install realsense2

The librealsense port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

Download and Install

Download - The latest releases including the Intel RealSense SDK, Viewer and Depth Quality tools are available at: latest releases. Please check the release notes for the supported platforms, new features and capabilities, known issues, how to upgrade the Firmware and more.

Install - You can also install the SDK or build it from source (on Linux / Windows / Mac OS / Android / Docker), connect your D400 depth camera, and you are ready to start writing your first application.

Support & Issues: If you need product support (e.g. ask a question about / are having problems with the device), please check the FAQ & Troubleshooting section. If not covered there, please search our Closed GitHub Issues page, Community and Support sites. If you still cannot find an answer to your question, please open a new issue.

What’s included in the SDK:

  • Intel® RealSense™ Viewer: with this application, you can quickly access your Intel® RealSense™ Depth Camera to view the depth stream, visualize point clouds, record and play back streams, configure your camera settings, modify advanced controls, enable depth visualization and post processing, and much more. (Download: Intel.RealSense.Viewer.exe)
  • Depth Quality Tool: this application allows you to test the camera's depth quality, including standard deviation from plane fit, normalized RMS (the subpixel accuracy), distance accuracy, and fill rate. You should be able to easily get and interpret several of the depth quality metrics and record and save the data for offline analysis. (Download: Depth.Quality.Tool.exe)
  • Debug Tools: device enumeration, FW logger, etc., as can be seen at the tools directory. (Included in Intel.RealSense.SDK.exe)
  • Code Samples: these simple examples demonstrate how to easily use the SDK to include code snippets that access the camera into your applications. Check some of the C++ examples, including capture, pointcloud and more, and the basic C examples. (Included in Intel.RealSense.SDK.exe)
  • Wrappers: Python, C#/.NET API, as well as integration with the following 3rd-party technologies: ROS1, ROS2, LabVIEW, OpenCV, PCL, Unity, Matlab, OpenNI, UnrealEngine4, and more to come.

Ready to Hack!

Our library offers a high level API for using Intel RealSense depth cameras (in addition to lower level ones). The following snippet shows how to start streaming frames and extracting the depth value of a pixel:

// Create a Pipeline - this serves as a top-level API for streaming and processing frames
rs2::pipeline p;

// Configure and start the pipeline
p.start();

while (true)
{
    // Block program until frames arrive
    rs2::frameset frames = p.wait_for_frames();

    // Try to get a frame of a depth image
    rs2::depth_frame depth = frames.get_depth_frame();

    // Get the depth frame's dimensions
    float width = depth.get_width();
    float height = depth.get_height();

    // Query the distance from the camera to the object in the center of the image
    float dist_to_center = depth.get_distance(width / 2, height / 2);

    // Print the distance
    std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
}

For more information on the library, please follow our examples, and read the documentation to learn more.

Contributing

In order to contribute to Intel RealSense SDK, please follow our contribution guidelines.


Download Details:

Author: IntelRealSense
Source Code: https://github.com/IntelRealSense/librealsense 
License: Apache-2.0 license

#cplusplus #library #sdk #computervision #hardware 

Lawrence Lesch

Line-bot-sdk-nodejs: LINE Messaging API SDK for Node.js

LINE Messaging API SDK for nodejs


Introduction

The LINE Messaging API SDK for nodejs makes it easy to develop bots using the LINE Messaging API, and you can create a sample bot within minutes.

Installation

Using npm:

$ npm install @line/bot-sdk --save

Documentation

See the official API documentation for more information

line-bot-sdk-nodejs documentation: https://line.github.io/line-bot-sdk-nodejs/#getting-started

Requirements

  • Node.js 10 or higher

Help and media

FAQ: https://developers.line.biz/en/faq/

Community Q&A: https://www.line-community.me/questions

News: https://developers.line.biz/en/news/

Twitter: @LINE_DEV

Versioning

This project respects semantic versioning.

See http://semver.org/
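As a reminder of what that ordering means in practice, semantic-version precedence compares major.minor.patch numerically, component by component, so "1.10.0" ranks above "1.9.0" even though it sorts lower as a plain string. A generic comparison sketch (pre-release tags omitted; not part of this SDK):

```javascript
// Compare two "major.minor.patch" versions per semver precedence.
// Returns -1, 0, or 1, like a standard comparator.
function compareSemver(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i] ? -1 : 1;
  }
  return 0;
}

console.log(compareSemver("1.9.0", "1.10.0")); // -1
```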

Contributing

Please check CONTRIBUTING before making a contribution.


Download Details:

Author: line
Source Code: https://github.com/line/line-bot-sdk-nodejs 
License: Apache-2.0 license

#typescript #javascript #bot #sdk #nodejs 

Lawrence Lesch

Dropbox-sdk-js: The Official Dropbox API V2 SDK for Javascript

Dropbox-sdk-js

The official Dropbox SDK for JavaScript.

Installation

Create an app via the Developer Console

Install via npm

$ npm install --save dropbox

Install from source:

$ git clone https://github.com/dropbox/dropbox-sdk-js.git
$ cd dropbox-sdk-js
$ npm install

After installation, follow one of our Examples or read the Documentation.

You can also view our OAuth guide.

Examples

We provide Examples to help get you started with a lot of the basic functionality in the SDK. Most examples are available in both JavaScript and TypeScript, with some having a Node equivalent.

OAuth

  • Auth - [ JS ] - A simple auth example to get an access token and list the files in the root of your Dropbox account.
  • Simple Backend [ JS ] - A simple example of a node backend doing a multi-step auth flow for Short Lived Tokens.
  • PKCE Backend [ JS ] - A simple example of a node backend doing a multi-step auth flow using PKCE and Short Lived Tokens.

Other Examples

  • Basic - [ TS, JS ] - A simple example that takes in a token and fetches files from your Dropbox account.
  • Download - [ TS, JS ] - An example showing how to download a shared file.
  • Team As User - [ TS, JS ] - An example showing how to act as a user.
  • Team - [ TS, JS ] - An example showing how to use the team functionality and list team devices.
  • Upload [ TS, JS ] - An example showing how to upload a file to Dropbox.

Getting Help

If you find a bug, please see CONTRIBUTING.md for information on how to report it.

If you need help that is not specific to this SDK, please reach out to Dropbox Support.


Documentation can be found on GitHub Pages


Download Details:

Author: Dropbox
Source Code: https://github.com/dropbox/dropbox-sdk-js 
License: MIT license

#typescript #javascript #sdk 


Openai: Open AI ChatGPT, GPT-3 and DALL-E Dotnet SDK

Dotnet SDK for OpenAI Chat GPT, GPT-3 and DALL·E

Install-Package Betalgo.OpenAI.GPT3

Dotnet SDK for OpenAI Chat GPT, GPT-3 and DALL·E
Unofficial: OpenAI does not provide an official .NET SDK.

NOTE for v6.7.0

I know we are all excited about the new ChatGPT APIs, so I tried to rush this version. It's nearly 4 AM here.
Be aware! It might have some bugs, and the next version may have breaking changes, because I don't like the current namings but don't have time to think about them at the moment. Whisper is coming soon too.

Enjoy your new Methods! Don't forget to star the repo if you like it.

Features

For changelogs please go to end of the document.

Visit https://openai.com/ to get your API key. More detailed documentation is also available there.

Sample Usages

The repository contains a sample project named OpenAI.Playground that you can refer to for a better understanding of how the library works. However, please exercise caution while experimenting with it, as some of the test methods may result in unintended consequences such as file deletion or fine tuning.

!! It is highly recommended that you use a separate account instead of your primary account while using the playground. This is because some test methods may add or delete your files and models, which could potentially cause unwanted issues. !!

Your API Key comes from here --> https://platform.openai.com/account/api-keys

Your Organization ID comes from here --> https://platform.openai.com/account/org-settings

Without using dependency injection:

var openAiService = new OpenAIService(new OpenAiOptions()
{
    ApiKey =  Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY")
});

Using dependency injection:

secrets.json:

 "OpenAIServiceOptions": {
    //"ApiKey":"Your api key goes here"
    //,"Organization": "Your Organization Id goes here (optional)"
  },

(How to use user secrets: right-click your project name in Solution Explorer, then click "Manage User Secrets"; it is a good way to keep your API keys.)

Program.cs

serviceCollection.AddOpenAIService();

OR use it as below, but do NOT put your API key directly into your source code.

Program.cs

serviceCollection.AddOpenAIService(settings => { settings.ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY"); });

After injecting your service, you will be able to get it from the service provider:

var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();

You can set a default model (optional):

openAiService.SetDefaultModelId(Models.Davinci);

Chat Gpt Sample

var completionResult = await sdk.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("You are a helpful assistant."),
        ChatMessage.FromUser("Who won the world series in 2020?"),
        ChatMessage.FromAssistance("The Los Angeles Dodgers won the World Series in 2020."),
        ChatMessage.FromUser("Where was it played?")
    },
    Model = Models.ChatGpt3_5Turbo,
    MaxTokens = 50//optional
});
if (completionResult.Successful)
{
   Console.WriteLine(completionResult.Choices.First().Message.Content);
}

Completions Sample

var completionResult = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest()
{
    Prompt = "Once upon a time",
    Model = Models.TextDavinciV3
});

if (completionResult.Successful)
{
    Console.WriteLine(completionResult.Choices.FirstOrDefault());
}
else
{
    if (completionResult.Error == null)
    {
        throw new Exception("Unknown Error");
    }
    Console.WriteLine($"{completionResult.Error.Code}: {completionResult.Error.Message}");
}

Completions Stream Sample

var completionResult = sdk.Completions.CreateCompletionAsStream(new CompletionCreateRequest()
   {
      Prompt = "Once upon a time",
      MaxTokens = 50
   }, Models.Davinci);

   await foreach (var completion in completionResult)
   {
      if (completion.Successful)
      {
         Console.Write(completion.Choices.FirstOrDefault()?.Text);
      }
      else
      {
         if (completion.Error == null)
         {
            throw new Exception("Unknown Error");
         }

         Console.WriteLine($"{completion.Error.Code}: {completion.Error.Message}");
      }
   }
   Console.WriteLine("Complete");

DALL·E Sample

var imageResult = await sdk.Image.CreateImage(new ImageCreateRequest
{
    Prompt = "Laser cat eyes",
    N = 2,
    Size = StaticValues.ImageStatics.Size.Size256,
    ResponseFormat = StaticValues.ImageStatics.ResponseFormat.Url,
    User = "TestUser"
});


if (imageResult.Successful)
{
    Console.WriteLine(string.Join("\n", imageResult.Results.Select(r => r.Url)));
}

Notes:

Please note that due to time constraints, I was unable to thoroughly test all of the methods or fully document the library. If you encounter any issues, please do not hesitate to report them or submit a pull request - your contributions are always appreciated.

I initially developed this SDK for my personal use and later decided to share it with the community. As I have not maintained any open-source projects before, any assistance or feedback would be greatly appreciated. If you would like to contribute in any way, please feel free to reach out to me with your suggestions.

I will always be using the latest libraries, and future releases will frequently include breaking changes. Please take this into consideration before deciding to use the library. I want to make it clear that I cannot accept any responsibility for any damage caused by using the library. If you feel that this is not suitable for your purposes, you are free to explore alternative libraries or the OpenAI Web-API.

Changelog

6.7.0

  • We have all been waiting for this moment. Please enjoy the ChatGPT API!
  • Added support for the ChatGPT API
  • Fixed a Tokenizer bug; it was not working properly

6.6.8

Breaking Changes

  • Renamed Engine keyword to Model in accordance with OpenAI's new naming convention.
  • Deprecated DefaultEngineId in favor of DefaultModelId.
  • DefaultEngineId and DefaultModelId are no longer static.

Added support for Azure OpenAI, a big thanks to @copypastedeveloper!

Added support for Tokenizer, inspired by @dluc's https://github.com/dluc/openai-tools repository. Please consider giving the repo a star.

These two changes are recent additions, so please let me know if you encounter any issues.

  • Updated documentation links from beta.openai.com to platform.openai.com.

6.6.7

  • Added Cancellation Token support, thanks to @robertlyson
  • Updated readme file, thanks to @qbm5, @gotmike, @SteveMCarroll

Check out the wiki page:

https://github.com/betalgo/openai/wiki


Download Details:

Author: Betalgo
Source Code: https://github.com/betalgo/openai 
License: MIT license

#chatgpt #sdk #csharp #dotnet #openai #gpt3


AWSSDK.jl: Julia APIs for All Public Amazon Web Services

AWSSDK.jl

Julia interface for Amazon Web Services.

Based on JuliaCloud/AWSCore.jl.

This package provides automatically generated low-level API wrappers and documentation strings for each operation in each Amazon Web Service.

The following high-level packages are also available: AWS S3, AWS SQS, AWS SNS, AWS IAM, AWS EC2, AWS Lambda, AWS SES and AWS SDB. These packages include operation specific result structure parsing, error handling, type convenience functions, iterators, etc.

Full documentation is available here, or see below for some examples of how to get started.

This package is generated by AWSCore.jl/src/AWSAPI.jl.

Configuration

Option 1: environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and AWS_DEFAULT_REGION).

Option 2: ~/.aws/credentials file:

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Option 3: run the AWS CLI configure command: aws configure.

SNS Example

julia> using AWSSDK.SNS

julia> AWSCore.set_debug_level(1)

julia> SNS.publish(PhoneNumber="+61401555555", Message="Hello")
Dict{String,Any} with 1 entry:
  "MessageId" => "f0607542-7b54-5c66-b271-27453b0bd979"

S3 Example

julia> using AWSSDK.S3

julia> r = S3.list_buckets()
XMLDict.XMLDictElement with 2 entries:
  "Owner"   => <Owner>…
  "Buckets" => <Buckets>…

julia> v = [b["Name"] for b in r["Buckets"]["Bucket"]]
3-element Array{String,1}:
 "bucket1"
 "bucket2"
 "bucket3"

julia> S3.put_object(Bucket="bucket1", Key="myfile", Body="mydata")
Response(200 OK, 10 headers, 0 bytes in body)

julia> S3.get_object(Bucket="bucket1", Key="myfile") |> String
"mydata"

EC2 Example

julia> using AWSSDK.EC2

julia> r = EC2.describe_images(Filter=[
    ["Name" => "owner-alias", "Value" => "amazon"],
    ["Name" => "name", "Value" => "amzn-ami-hvm-2015.09.1.x86_64-gp2"]])

XMLDict.XMLDictElement with 2 entries:
  "requestId" => "af8cf64c-d5b0-4e2e-959c-3f703eeb362f"
  "imagesSet" => <imagesSet>…

julia> r["imagesSet"]["item"]
XMLDict.XMLDictElement with 17 entries:
  "imageId"            => "ami-48d38c2b"
  "imageLocation"      => "amazon/amzn-ami-hvm-2015.09.1.x86_64-gp2"
  "imageState"         => "available"
  "imageOwnerId"       => "137112412989"
  "creationDate"       => "2015-10-29T18:16:22.000Z"
  "isPublic"           => "true"
  "architecture"       => "x86_64"
  "imageType"          => "machine"
  "sriovNetSupport"    => "simple"
  "imageOwnerAlias"    => "amazon"
  "name"               => "amzn-ami-hvm-2015.09.1.x86_64-gp2"
  "description"        => "Amazon Linux AMI 2015.09.1 x86_64 HVM GP2"
  "rootDeviceType"     => "ebs"
  "rootDeviceName"     => "/dev/xvda"
  "blockDeviceMapping" => <blockDeviceMapping>…
  "virtualizationType" => "hvm"
  "hypervisor"         => "xen"

SES Example

julia> r = SES.send_email(
    Source = "sam@octech.com.au",
    Destination = ["ToAddresses" => ["sam@octech.com.au"]],
    Message = [
        "Subject" => ["Data" => "Hello"],
        "Body" => ["Text" =>  ["Data" => "Hello"]]
    ])
XMLDict.XMLDictElement with 2 entries:
  "SendEmailResult"  => <SendEmailResult>…
  "ResponseMetadata" => <ResponseMetadata>…

Download Details:

Author: JuliaCloud
Source Code: https://github.com/JuliaCloud/AWSSDK.jl 
License: View license

#julia #cloud #aws #sdk 

Nat Grady

Opentelemetry-go: OpenTelemetry Go API and SDK

OpenTelemetry-Go

OpenTelemetry-Go is the Go implementation of OpenTelemetry. It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.

Project Status

Signal    Status      Project
Traces    Stable      N/A
Metrics   Alpha       N/A
Logs      Frozen [1]  N/A
  • [1]: The Logs signal development is halted for this project while we develop both Traces and Metrics. No Logs Pull Requests are currently being accepted.

Progress and status specific to this repository is tracked in our local project boards and milestones.

Project versioning information and stability guarantees can be found in the versioning documentation.

Compatibility

OpenTelemetry-Go ensures compatibility with the current supported versions of the Go language:

Each major Go release is supported until there are two newer major releases. For example, Go 1.5 was supported until the Go 1.7 release, and Go 1.6 was supported until the Go 1.8 release.

For versions of Go that are no longer supported upstream, opentelemetry-go will stop ensuring compatibility with these versions in the following manner:

  • A minor release of opentelemetry-go will be made to add support for the new supported release of Go.
  • The following minor release of opentelemetry-go will remove compatibility testing for the oldest (now archived upstream) version of Go. This, and future, releases of opentelemetry-go may include features only supported by the currently supported versions of Go.

Currently, this project supports the following environments.

OS       Go Version  Architecture
Ubuntu   1.20        amd64
Ubuntu   1.19        amd64
Ubuntu   1.20        386
Ubuntu   1.19        386
MacOS    1.20        amd64
MacOS    1.19        amd64
Windows  1.20        amd64
Windows  1.19        amd64
Windows  1.20        386
Windows  1.19        386

While this project should work for other systems, no compatibility guarantees are made for those systems currently.

Getting Started

You can find a getting started guide on opentelemetry.io.

OpenTelemetry's goal is to provide a single set of APIs to capture distributed traces and metrics from your application and send them to an observability platform. This project allows you to do just that for applications written in Go. There are two steps to this process: instrument your application, and configure an exporter.

Instrumentation

To start capturing distributed traces and metric events from your application it first needs to be instrumented. The easiest way to do this is by using an instrumentation library for your code. Be sure to check out the officially supported instrumentation libraries.

If you need to extend the telemetry an instrumentation library provides or want to build your own instrumentation for your application directly you will need to use the Go otel package. The included examples are a good way to see some practical uses of this process.

Export

Now that your application is instrumented to collect telemetry, it needs an export pipeline to send that telemetry to an observability platform.

All officially supported exporters for the OpenTelemetry project are contained in the exporters directory.

Exporter   | Metrics | Traces
-----------|---------|-------
Jaeger     |         | ✓
OTLP       | ✓       | ✓
Prometheus | ✓       |
stdout     | ✓       | ✓
Zipkin     |         | ✓

Contributing

See the contributing documentation.


Download Details:

Author: open-telemetry
Source Code: https://github.com/open-telemetry/opentelemetry-go 
License: Apache-2.0 license


An Implementation Of Stability AI SDK in Dart

stability_sdk

An implementation of Stability AI SDK in Dart. Stability AI is a solution studio dedicated to innovating ideas.

Brush AI

A demonstration of the Stability SDK in Flutter and Dart.

brush-ai demo

and more sample outputs...

Dogs

"generate an oil painting canvas of a dog, realistic, painted by Leonardo da Vinci"


Cats

"generate an oil painting canvas of a cat, realistic, painted by Leonardo da Vinci"


Cyberpunk

"generate a cyberpunk scene, in japan, realistic street scene on the night"


Features

  •  Text-to-image

Upcoming

  •  Image-to-image
  •  Inpainting + Masking
  •  CLIP guidance
  •  Multi-prompting

Setup

Prerequisites

Stability AI requires you to create your own API key to make calls to the API. You can create one here.

Create a .env file and set your Stability AI API key
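A minimal .env file might look like the following. The variable name STABILITY_API_KEY is illustrative only; use whichever key name your app actually reads (for example via a package such as flutter_dotenv):

```
# .env (keep this file out of version control)
STABILITY_API_KEY=sk-your-api-key-here
```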

Usage

The example provided uses the SDK directly in a Flutter app. In most cases, though, you'll use the SDK in a backend built with tools like Dart Frog. This secures the API key and gives you more control over incoming requests, e.g. enforcing rate limits or blocking sensitive content.

// 1. Setup the API client
final client = StabilityApiClient.init("<YOUR_API_KEY_HERE>");

// 2. Create a generation request
final request = RequestBuilder("an oil painting of a dog in the canvas, wearing knight armor, realistic painting by Leonardo da Vinci")
    .setHeight(512)
    .setWidth(512)
    .setEngineType(EngineType.inpainting_v2_0)
    .setSampleCount(1)
    .build();

// 3. Subscribe to the response
client.generate(request).listen((answer) {
    image = answer.artifacts?.first.getImage();
});

.gitignore

# Miscellaneous
*.class
*.log
*.pyc
*.swp
.DS_Store
.atom/
.buildlog/
.history
.svn/
migrate_working_dir/

# IntelliJ related
*.iml
*.ipr
*.iws
.idea/

# The .vscode folder contains launch configuration and tasks you configure in
# VS Code which you may wish to be included in version control, so this line
# is commented out by default.
#.vscode/

# Flutter/Dart/Pub related
# Libraries should not include pubspec.lock, per https://dart.dev/guides/libraries/private-files#pubspeclock.
/pubspec.lock
**/doc/api/
.dart_tool/
.packages
build/

.metadata

# This file tracks properties of this Flutter project.
# Used by Flutter tool to assess capabilities and perform upgrades etc.
#
# This file should be version controlled and should not be manually edited.

version:
  revision: 135454af32477f815a7525073027a3ff9eff1bfd
  channel: stable

project_type: package

Download details:

Author:  joshuadeguzman
Source code: https://github.com/joshuadeguzman/stability-sdk-dart

License: BSD-3-Clause license


Alan AI: In-app voice assistant SDK for Web

Quickly add voice to your app with the Alan Platform. Create an in-app voice assistant to enable human-like conversations and provide a personalized voice experience for every user.

Alan is a Voice AI Platform

Alan is a conversational voice AI platform that lets you create an intelligent voice assistant for your app. It offers all necessary tools to design, embed and host your voice solutions:

Alan Studio

A powerful web-based IDE where you can write, test and debug dialog scenarios for your voice assistant or chatbot.

Alan Client SDKs

Alan's lightweight SDKs to quickly embed a voice assistant into your app.

Alan Cloud

Alan's AI-backend powered by the industry’s best Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Speech Synthesis. The Alan Cloud provisions and handles the infrastructure required to maintain your voice deployments and perform all the voice processing tasks.

To get more details on how Alan works, see Alan Platform.

Why Alan?

  • No or minimal changes to your UI: To voice-enable your app, you only need to get the Alan Client SDK and drop it into your app.
  • Serverless environment: No need to plan for, deploy and maintain any infrastructure or speech components - the Alan Platform does the bulk of the work.
  • On-the-fly updates: All changes to the dialogs become available immediately.
  • Voice flow testing and analytics: Alan Studio provides advanced tools for testing your dialog flows and getting the analytics data on users' interactions, all in the same console.

How to start

To create a voice assistant for your web app or page:

Sign up for Alan Studio to build voice scripts in JavaScript and test them.

Use the Alan Web SDK to embed a voice assistant into your app or page. For details, see the Alan AI documentation for your framework.

Check out our demo.

Downloads

Example apps

In the Examples folder, you can find example web apps created with:

  • React
  • Angular
  • Vue
  • Ember
  • Electron

To launch the app, follow the instructions in the README file inside the example app folder. Then press the Alan button and try interacting with Alan.

Other platforms

You may also want to try Alan Client SDKs for the following platforms:

Have questions?

If you have any questions or something is missing in the documentation:


Alan Platform · Alan Studio · Docs · FAQ · Blog · Twitter


Download Details:

Author: Alan-ai
Source Code: https://github.com/alan-ai/alan-sdk-web 


Stream-chat-swift: iOS Chat SDK in Swift

Stream-chat-swift

iOS Chat SDK in Swift - Build your own app chat experience for iOS using the official Stream Chat API

This is the official iOS SDK for Stream Chat, a service for building chat and messaging applications. This library includes both a low-level SDK and a set of reusable UI components.

Low Level Client (LLC)

The StreamChat SDK is a low-level client for the Stream chat service that doesn't contain any UI components. It is meant to be used when you want to build a fully custom UI. For the majority of use cases, though, we recommend using our highly customizable UI SDKs.

UIKit SDK

The StreamChatUI SDK is our UI SDK for UIKit components. If your application needs to support iOS 13 and below, this is the right UI SDK for you.

SwiftUI SDK

The StreamChatSwiftUI SDK is our UI SDK for SwiftUI components. If your application only needs to support iOS 14 and above, this is the right UI SDK for you. This SDK is available in another repository stream-chat-swiftui.

iOS 16 and Xcode 14 support

Since the 4.20.0 release, our SDKs can be built using Xcode 14. Currently, there are no known issues on iOS 16. If you spot one, please create a ticket.


Main Features

  • Offline support: Browse channels and send messages while offline.
  • Familiar behavior: The UI elements are good platform citizens and behave like native elements; they respect tintColor, layoutMargins, light/dark mode, dynamic font sizes, etc.
  • Swift native API: Uses Swift's powerful language features to make the SDK usage easy and type-safe.
  • Uses UIKit patterns and paradigms: The API follows the design of native system SDKs. It makes integration with your existing code easy and familiar.
  • SwiftUI support: We have developed a brand new SDK to help you have smoother Stream Chat integration in your SwiftUI apps.
  • First-class support for Combine: The StreamChat SDK (Low Level Client) has Combine wrappers to make it really easy to use in an app that uses Combine.
  • Fully open-source implementation: You have access to the complete source code of the SDK here on GitHub.
  • Supports iOS 11+: We proudly support older versions of iOS, so your app can stay available to almost everyone.

Quick Links

  • iOS/Swift Chat Tutorial: Learn how to use the SDK by following our simple tutorial with UIKit (or SwiftUI).
  • Register: Register to get an API key for Stream Chat.
  • Installation: Learn more about how to install the SDK using CocoaPods, SPM or Carthage.
  • Documentation: Extensive documentation is available to help with your integration.
  • SwiftUI: Check our SwiftUI SDK if you are developing with SwiftUI.
  • Demo app: This repo includes a fully functional demo app with example usage of the SDK.
  • Example apps: This section of the repo includes fully functional sample apps that you can use as reference.

Free for Makers

Stream is free for most side and hobby projects. You can use Stream Chat for free if you have less than five team members and no more than $10,000 in monthly revenue.

Main Principles

Progressive disclosure: The SDK can be used easily with very minimal knowledge of it. As you become more familiar with it, you can dig deeper and start customizing it on all levels.

Highly customizable: Every element is designed to be easily customizable. You can modify the brand color by setting tintColor, apply appearance changes using custom UI rules, or subclass existing elements and inject them everywhere in the system, no matter how deep is the logic hierarchy.

open by default: Everything is open unless there's a strong reason for it to not be. This means you can easily modify almost every behavior of the SDK such that it fits your needs.

Good platform citizen: The UI elements behave like good platform citizens. They use existing iOS patterns; their behavior is predictable and matches system UI components; they respect tintColor, layoutMargins, dynamic font sizes, and other system-defined UI constants.

Dependencies

This SDK tries to keep the list of external dependencies to a minimum. Starting with 4.6.0, and in order to improve the developer experience, dependencies are hidden inside our libraries. (This does not apply to StreamChatSwiftUI's dependencies yet.)

Learn more about our dependencies here

Using Objective-C

You can still integrate our SDKs if your project uses Objective-C. In that case, any customizations would need to be done by subclassing our components in Swift, and then using those subclasses directly from your Objective-C code.


We are hiring

We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers all over the world. Check out our current openings and apply via Stream's website.


Quick Overview

Channel List

Features:

  • A list of channels matching the provided query
  • Channel name and image based on the channel members or custom data
  • Unread messages indicator
  • Preview of the last message
  • Online indicator for avatars
  • Create a new channel and start right away

Message List

Features:

  • A list of messages in a channel
  • Photo preview
  • Message reactions
  • Message grouping based on the send time
  • Link preview
  • Inline replies
  • Message threads
  • GIPHY support

Message Composer

Features:

  • Support for multiline text; expands and shrinks as needed
  • Image and file attachments
  • Replies to messages
  • Tagging of users
  • Chat commands like mute, ban, giphy

Chat Commands

Features:

  • Easily search commands by typing the / symbol or tapping the bolt icon
  • GIPHY support out of the box
  • Supports mute, unmute, ban, unban commands
  • WIP support for custom commands

User Tagging Suggestion

Features:

  • User mentions preview
  • Easily search for a specific user
  • Mention as many users as you want

Download Details:

Author: GetStream
Source Code: https://github.com/GetStream/stream-chat-swift 
License: View license


Chat SDK iOS: Open Source Mobile Messenger

Chat SDK

Open Source Messaging framework for iOS

Chat SDK is a fully featured open source instant messaging framework for iOS. It is scalable and flexible and follows these key principles:

  • Open Source. The Chat SDK is open source and free for commercial apps (see license)
  • Full data control. You have full and exclusive access to the user's chat data
  • Quick integration. Chat SDK is fully featured out of the box
  • Firebase. Powered by Google Firebase

Features

Full breakdown is available on the features page.

Quick Start

Modules

About Us

Learn about the history of Chat SDK and our future plans in this post.

Scalability and Cost

People always ask about how much Chat SDK costs to run. And will it scale to millions of users? So I wrote an article talking about just that.

Looking for Freelance Developers

If you're a freelance developer looking for work, join our Discord server. We often have customers looking for freelance developers.

Community

  • Discord: If you need support, join our Server
  • Support the project: Patreon or Github Sponsors 🙏 and get access to premium modules
  • Upvote: our advert on StackOverflow
  • Contribute by writing code: Email the Contributing Document to team@sdk.chat
  • Give us a star on Github ⭐
  • Upvoting us: Product Hunt
  • Tweet: about your Chat SDK project using @chat_sdk
  • Live Stream: Join us every Saturday at 18:00 CEST for a live stream where I answer questions about Chat SDK. For more details please join the Discord Server

You can also help us by:

  • Providing feedback and feature requests
  • Reporting bugs
  • Fixing bugs
  • Writing documentation

Email us at: team@sdk.chat

We also offer development services: we are a team of full-stack developers who are Firebase experts. For more information, check out our consulting site.

Running the demo project

This repository contains a fully functional version of the Chat SDK which is configured using our Firebase account. This is a great way to test the features of the Chat SDK before you start integrating it with your app.

  1. Clone Chat SDK
  2. Run pod install in the Xcode directory
  3. Open the Chat SDK Firebase.xcworkspace file in Xcode
  4. Compile and run

Swift Version

We are currently updating the Chat SDK to use Swift; this will happen gradually. In the meantime, the Chat SDK API is fully compatible with Swift projects.

The Chat SDK is fully compatible with Swift projects and contains a Swift demo project.

  1. Clone Chat SDK
  2. Run pod install in the XcodeSwift directory
  3. Open the ChatSDKSwift.xcworkspace file in Xcode
  4. Compile and run

Adding the Chat SDK to your project

Quick start guide - it takes about 10 minutes!

Adding the Chat SDK to your project

  1. Add the Chat SDK development pods to your Podfile
use_frameworks!
pod "ChatSDK"
pod "ChatSDKFirebase/Adapter"
pod "ChatSDKFirebase/Upload"
pod "ChatSDKFirebase/Push"

Optional

pod "ChatSDK/ModAddContactWithQRCode"

Run pod update to get the latest version of the code.

Open the App Delegate and add the following code to initialise the chat

Swift

AppDelegate.swift

import ChatSDK

Add the following code to the start of your didFinishLaunchingWithOptions function:

let config = BConfiguration.init();
config.rootPath = "test"
// Configure other options here...
config.allowUsersToCreatePublicChats = true

// Define the modules you want to use. 
var modules = [
    FirebaseNetworkAdapterModule.shared(),
    FirebasePushModule.shared(),
    FirebaseUploadModule.shared(),
    // Optional...
    AddContactWithQRCodeModule.init(),
]

BChatSDK.initialize(config, app: application, options: launchOptions, modules: modules)

    
self.window = UIWindow.init(frame: UIScreen.main.bounds)
self.window?.rootViewController = BChatSDK.ui().splashScreenNavigationController()
self.window?.makeKeyAndVisible();

Then add the following methods:

  func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
      BChatSDK.application(application, didRegisterForRemoteNotificationsWithDeviceToken: deviceToken)
  }

  func application(_ application: UIApplication, didReceiveRemoteNotification userInfo: [AnyHashable : Any], fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
      BChatSDK.application(application, didReceiveRemoteNotification: userInfo)
  }

  func application(_ application: UIApplication, open url: URL, sourceApplication: String?, annotation: Any) -> Bool {
      return BChatSDK.application(application, open: url, sourceApplication: sourceApplication, annotation: annotation)
  }

  func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
      return BChatSDK.application(app, open: url, options: options)
  }

Objective C

Check the demo project.

The Root Path

The root path variable allows you to run multiple Chat SDK instances on one Firebase account. Each different root path will represent a completely separate set of Firebase data. This can be useful for testing because you could have separate test and prod root paths.

The Chat SDK is now added to your project.

Firebase Setup

  1. Go to the Firebase website and sign up
  2. Go to the Firebase console and make a new project
  3. Click Add project
  4. Choose a name and a location
  5. Click Settings (the gear icon). On the General tab, click Add Firebase to your iOS app
  6. Enter your bundle ID
  7. Download the GoogleService-Info.plist file and add it to the root of your Xcode project

Note:
It is worth opening your downloaded GoogleService-Info.plist and checking there is an API_KEY field included. Sometimes Firebase's automatic download doesn’t include this in the plist. To rectify, just re-download the plist from the project settings menu.

Copy the following rows from the demo ChatSDK Info.plist file to your project's Info.plist

App Transport Security Settings

URL types

Make sure that the URL types are all set correctly. The URL type for your app should be set to your bundle id

All the privacy rows. These will allow the app to access the camera, location and address book

In the Firebase dashboard click Authentication -> Sign-in method and enable all the appropriate methods

Add the security rules. The rules also enable optimized user search so this step is very important!

Enable file storage - Click Storage -> Get Started

Enable push notifications

Enable location messages. Get a Google Maps API key. Then add it during the Chat SDK configuration

Objective C

config.googleMapsApiKey = @"YOUR API KEY";

Swift

config.googleMapsApiKey = "YOUR API KEY"

Push Notifications

The Push Notification module allows you to send free push notifications using Firebase Cloud Messaging.

  1. Setup an APN key.
  2. Inside your project in the Firebase console, select the gear icon, select Project Settings, and then select the Cloud Messaging tab.
  3. In APNs authentication key under iOS app configuration, click the Upload button.
  4. Browse to the location where you saved your key, select it, and click Open. Add the key ID for the key (available in Certificates, Identifiers & Profiles in the Apple Developer Member Center) and click Upload.
  5. Enable the push notifications Capability in your Xcode project Project -> Capabilities -> Push Notifications
  6. In Xcode open the Capabilities tab. Enable Push Notifications and the following Background Modes: Location updates, Background fetch, Remote notifications.

Setup Firebase Cloud Functions

Follow the instructions on our Chat SDK Firebase repository

Security Rules

Firebase secures your data by allowing you to write rules to govern who can access the database and what can be written. The rules are also needed to enable user search. To enable the rules see the guide Enabling Security Rules.

Conclusion

Congratulations! 🎉🎉 You've just turned your app into a fully featured instant messenger! Keep reading below to learn how to further customize the Chat SDK.

To go deeper, check out the API Guide for help with:

  1. Interacting with the Firebase server
  2. Creating and updating entities
  3. Custom authentication
  4. Common code examples
  5. Customizing the user interface

View the API documentation here.

Next Steps

Documentation

Configuration

There are a number of configuration options available. Check out the BConfiguration class. Using this class you can do things like:

  • Changing the chat bubble colors
  • Changing the default user name
  • Enable or disable different types of login
  • Show or hide empty chats
  • etc...

Customize the UI

To customize the UI, you can register subclasses for different views. You can do that using the UI service BChatSDK.ui. For example, to register a new login view controller you would use:

BChatSDK.ui.loginViewController = [[YourViewController alloc] initWithNibName:Nil bundle: Nil];

To modify the chat view you would register a provider:

[BChatSDK.ui setChatViewController:^BChatViewController *(id<PThread> thread) {
        return [[YourChatViewController alloc] initWithThread:thread];
}];

Every view controller in the app can be customized this way.

Use Chat SDK views in your app

Any of the Chat SDK views can be added into your app. Check out the PInterfaceFacade for options. You can add any view using the following pattern. Here we are using the interface service to get the particular view.

Objective-C

UIViewController * privateThreadsViewController = [BChatSDK.ui privateThreadsViewController];

Swift

let privateThreadsViewController = BChatSDK.ui().a.privateThreadsViewController()

Integrate the Chat SDK with your existing app

To do that, you can take advantage of the BIntegrationHelper class. It provides some helper methods to make it easier to integrate the Chat SDK with your app.

At the most basic level, you need to do the following:

  • Authenticate the Chat SDK when your app authenticates. The best way to do this is to generate a custom token on your server following this guide. Then use this method to initialize the Chat SDK:

Objective-C

[BIntegrationHelper authenticateWithToken:@"your token"];

Swift

BIntegrationHelper.authenticate(withToken: "your token")
  • Update the Chat SDK user's name and image whenever your user's name or image changes. You can do this using the following method:

Objective-C

[BIntegrationHelper updateUserWithName:@"Name" image: image url: imageURL];

Swift

BIntegrationHelper.updateUser(withName: "Name", image: image, url: imageURL)
  • Logout of the Chat SDK whenever your app logs out. A good place to do this is whenever your login screen is displayed:

Objective-C

[BIntegrationHelper logout];

Swift

BIntegrationHelper.logout()
  • Now the Chat SDK is integrated with your app.

Module Setup

There are a number of free and premium extensions that can be added to the Chat SDK.

Firebase Modules

For the following modules:

The free modules are located in the chat-sdk-ios/ChatSDKFirebase folder. The premium modules can be purchased and downloaded from the links provided above.

To install a module you should use the following steps:

  1. Copy the module code into your Xcode source code folder and add the files to your project from inside Xcode. If you are using a symlink you can use the symlink script (mentioned above) and then just add a link to the ChatSDKFirebase folder to Xcode.
  2. Add any necessary dependencies to your Podfile
  3. Add the modules to the array of modules during configuration.

Firebase UI

The Firebase UI module allows you to use the native Firebase user interface for authentication.

After adding the files to your Xcode project, add the following to the App Delegate to enable the module.

Objective C

AppDelegate.m -> application: didFinishLaunchingWithOptions:

 #import "BFirebaseUIModule.h"

[[[BFirebaseUIModule alloc] init] activateWithProviders: @[]];

Swift

[YourProject]-Bridging-Header.h

 #import "BFirebaseUIModule.h"

AppDelegate.swift

BFirebaseUIModule.init().activate(withProviders: []);

You should pass in an array of the FUIAuthProvider objects you want to support.

Also add the following to your Podfile depending on which authentication methods you want to support:

pod 'FirebaseUI/Facebook', '~> 4.0'
pod 'FirebaseUI/Google', '~> 4.0'
pod 'FirebaseUI/Twitter', '~> 4.0'
pod 'FirebaseUI/Phone', '~> 4.0'

Then run pod install.

Note: If you want to use the Firebase Auth UI, make sure you comment out the following line:

BNetworkManager.shared().a.auth().setChallenge(BLoginViewController.init(nibName: nil, bundle: nil));

Other Modules

For the following modules:

These modules are distributed as development pods. After you've downloaded the module, unzip it and add it to the ChatSDKModules folder. Then:

  1. Open your Podfile
  2. Add the line:
pod "ChatSDKModules/[ModuleName]", :path => "[Path to ChatSDKModules folder]"
  3. Run pod install
  4. The module is now active

Using the Chat SDK API

The Chat SDK API is based around the network manager and a series of handlers. A good place to start is by looking at the handlers in Pods/Development Pods/ChatSDK/Core/Core/Classes/Interfaces. Here you can review the handler interfaces, which are well documented. To use a handler you would use the following code:

Objective C

[BChatSDK.handler_name function: to: call:];

Swift

BNetworkManager.shared().a.handler_name().function(to: call:)

Searching for a user

For example, to search for a user you could use the search handler:

-(RXPromise *) usersForIndexes: (NSArray *) indexes withValue: (NSString *) value limit: (int) limit userAdded: (void(^)(id<PUser> user)) userAdded;

Here you pass in a series of indexes to be used in the search i.e. name, email etc... and a value. It will then return a series of user objects.

You can also see example implementations of these handlers by looking at the BFirebaseSearchHandler class, and at how the method is used in the Chat SDK.

Starting a chat

To start a chat you can use the core handler.

-(RXPromise *) createThreadWithUsers: (NSArray *) users
                       threadCreated: (void(^)(NSError * error, id<PThread> thread)) thread;

When this method completes, the thread will have been created on Firebase and all the users will have been added. You could then open the thread using the interface adapter.

UIViewController * chatViewController = [BChatSDK.ui chatViewControllerWithThread:thread];

So a more complete example would look like this:

-(void) startChatWithUser {
    MBProgressHUD * hud = [MBProgressHUD showHUDAddedTo:self.view animated:YES];
    hud.label.text = [NSBundle t:bCreatingThread];
    
    [BChatSDK.core createThreadWithUsers:@[_user] threadCreated:^(NSError * error, id<PThread> thread) {
        if (!error) {
            [self pushChatViewControllerWithThread:thread];
        }
        else {
            [UIView alertWithTitle:[NSBundle t:bErrorTitle] withMessage:[NSBundle t:bThreadCreationError]];
        }
        [MBProgressHUD hideHUDForView:self.view animated:YES];
    }];
}

-(void) pushChatViewControllerWithThread: (id<PThread>) thread {
    if (thread) {
        UIViewController * chatViewController = [BChatSDK.ui chatViewControllerWithThread:thread];
        [self.navigationController pushViewController:chatViewController animated:YES];
    }
}

Troubleshooting Cocoapods

  1. Always open the .xcworkspace file rather than .xcodeproj
  2. Check CocoaPod warnings - make sure to fix any warnings before proceeding
  3. Make sure that your base configuration isn’t set: Project -> project name -> Info -> Configuration
  4. Make sure that the “Build Active Architecture Only” setting is the same for both the main project and the pods project.
  5. Check the build settings in the Xcode project and check which fields are in bold (this means that their value has been overridden and CocoaPods can't access them). If you press backspace while selecting those fields, their values will be set to the default value.

The license

We offer a choice of two licenses for this app. You can either use the Chat SDK license or the GPLv3 license.

Most Chat SDK users either want to add the Chat SDK to an app that will be released to the App Store or they want to use the Chat SDK in a project for their client. The Chat SDK license gives you complete flexibility to do this for free.

Chat SDK License Summary

  • License does not expire.
  • Can be used for creating unlimited applications
  • Can be distributed in binary or object form only
  • Commercial use allowed
  • Can modify source-code but cannot distribute modifications (derivative works)

If a user wants to distribute the Chat SDK source code, we feel that any additions or modifications they make to the code should be contributed back to the project. The GPLv3 license ensures that if source code is distributed, it must remain open source and available to the community.

GPLv3 License Summary

  • Can modify and distribute source code
  • Commercial use allowed
  • Cannot sublicense or hold liable
  • Must include original license
  • Must disclose source

What does this mean?

Please check out the Licensing FAQ for more information.

Download Details:

Author: Chat-sdk
Source Code: https://github.com/chat-sdk/chat-sdk-ios 
License: View license


Chat-sdk-android: Chat SDK Android - Open Source Mobile Messenger

Chat SDK for Android v5

Open Source Messaging framework for Android

Chat SDK is a fully featured open source instant messaging framework for Android. It is scalable and flexible and follows these key principles:

  • Free. Chat SDK uses the Apache 2.0 license
  • Open Source. Chat SDK is open source
  • Full control of the data. You have full and exclusive access to the user's chat data
  • Quick integration. Chat SDK is fully featured out of the box
  • Scalable. Supports millions of daily users [1, 2]
  • Backend agnostic. Chat SDK can be customized to support any backend

Main Image

Technical details

Please bear in mind that this version is a major update. As a result we are making new releases every few days to fix bugs and crashes. If you see an issue, please report it on the Github bug tracker and we will fix it.

Features

  • Powered by Firebase Firestore, Realtime database or XMPP
  • Private and group messages ⇘GIF
  • Public chat rooms
  • Username / password, Facebook, Twitter, Anonymous and custom login
  • Phone number authentication
  • Push notifications (using FCM)
  • Text, Image ⇘GIF and Location ⇘GIF messages
  • Forward, Reply ⇘GIF, Copy and Delete ⇘GIF messages
  • Tabbar ⇘GIF or Drawer ⇘GIF layout
  • User Profiles ⇘GIF
  • User Search ⇘GIF
  • Contacts ⇘GIF
  • Add contact by QR code ⇘GIF
  • Firebase UI ⇘GIF
  • iOS Version
  • Web Version

Extras

Sponsor us on either Github Sponsors or Patreon and get these features. For full details visit our Modules page.

When you support us on Patreon, you get extra modules, code updates, and support, as well as special access to the Discord Server.

  • Typing indicator
  • Read receipts
  • Last online indicator
  • Audio messages
  • Video messages
  • Sticker messages
  • User blocking
  • File Messages
  • End-to-end encryption
  • Nearby Users
  • Contact book integration
  • Location based chat
  • XMPP Server Support
    • ejabberd
    • Prosody
    • OpenFire
    • Tigase
    • MongooseIM

Visit our Animated GIF Gallery to see all the features.

About Us

Learn about the history of Chat SDK and our future plans in this post.

Scalability and Cost

People always ask how much Chat SDK costs to run, and whether it will scale to millions of users, so I wrote an article covering exactly that.

Library Size

The Chat SDK library with ALL modules is around 20 MB.

Community

You can also help us by:

  • Providing feedback and feature requests
  • Reporting bugs
  • Fixing bugs
  • Writing documentation

Email us at: team@sdk.chat

We also offer development services: we are a team of full stack developers who are Firebase experts. For more information, check out our consulting site.

Firestream - A light-weight messaging library for Firebase

If you are looking for something more lightweight than Chat SDK, we also have a library which provides only instant messaging functionality.

  1. 1-to-1 Messaging
  2. Group chat, roles, moderation
  3. Android, iOS, Web and Node.js
  4. Fully customisable messages
  5. Typing Indicator
  6. Delivery receipts
  7. User blocking
  8. Presence
  9. Message history (optional)
  10. Firestore or Realtime database

You can check out the Firestream project on GitHub.

Chat SDK Firebase Documentation

Quick Start

Video Tutorial

Bear in mind that the video is not updated frequently. Please cross-reference it with the text-based instructions for the latest Gradle dependencies.

Integration

  1. Add the Chat SDK to your project
  2. Firebase Setup
  3. Chat SDK Initialization
  4. Set the Chat SDK Theme
  5. Enable Location Messages
  6. Display the login screen
  7. Add module dependencies
  8. Module Configuration
  9. Proguard
  10. Push Notifications, Security Rules and Storage

Customization

  1. Override Activity or Fragment
  2. Theme Chat SDK
  3. Customize the Icons
  4. Customize the Tabs
  5. Add a Chat Option
  6. Custom Message Types
  7. Handling Events
  8. Custom Push Handling
  9. Synchronize user profiles with your app
  10. Custom File Upload Handler
  11. Enable token authentication

Extras

Example Firebase Schema

Migrating from v4

Recommended background

Setup Service

We provide extensive documentation on GitHub, but if you're a non-technical user or want to save yourself some work, you can take advantage of our setup and integration service.

Download Details:

Author: Chat-sdk
Source Code: https://github.com/chat-sdk/chat-sdk-android 
License: Apache-2.0 license

#firebase #sdk #messaging #android 

Lawrence Lesch

2023-02-08

Sentry-react-native: Official Sentry SDK for React-Native

Sentry

Bad software is everywhere, and we're tired of it. Sentry is on a mission to help developers write better software faster, so we can get back to enjoying technology. If you want to join us, check out our open positions.

Sentry SDK for React Native

Requirements

  • react-native >= 0.56.0

Features

Installation and Usage

To install the package:

npm install --save @sentry/react-native
# OR
yarn add @sentry/react-native

If you are using a version of React Native <= 0.60.x, link the package using react-native.

react-native link @sentry/react-native
# OR, if self hosting
SENTRY_WIZARD_URL=http://sentry.acme.com/ react-native link @sentry/react-native

How to use it:

import * as Sentry from "@sentry/react-native";

Sentry.init({
  dsn: "__DSN__",
});

Sentry.setTag("myTag", "tag-value");
Sentry.setExtra("myExtra", "extra-value");
Sentry.addBreadcrumb({ message: "test" });

Sentry.captureMessage("Hello Sentry!");

Upgrade

If you are coming from react-native-sentry, which was our SDK before version 1.0, you should follow the upgrade guide and then follow the install steps.

Blog posts

Mobile Vitals - Four Metrics Every Mobile Developer Should Care About.

Performance Monitoring Support for React Native.

Download Details:

Author: Getsentry
Source Code: https://github.com/getsentry/sentry-react-native 
License: MIT license

#typescript #javascript #android #ios #reactnative #sdk

Rupert Beatty

2023-01-30

Soto: Swift SDK for AWS That Works on Linux, MacOS and iOS

Soto for AWS

Soto is a Swift language SDK for Amazon Web Services (AWS), working on Linux, macOS and iOS. This library provides access to all AWS services. The service APIs it provides are a direct mapping of the REST APIs Amazon publishes for each of its services. Soto is a community supported project and is in no way affiliated with AWS.

Structure

The library consists of three parts:

  1. soto-core which does all the core request encoding and signing, response decoding and error handling.
  2. The service api files which define the individual AWS services and their commands with their input and output structures.
  3. The CodeGenerator which builds the service api files from the JSON model files supplied by Amazon.

Swift Package Manager

Soto uses the Swift Package Manager to manage its code dependencies. To use Soto in your codebase it is recommended you do the same. Add a dependency to the package in your own Package.swift dependencies.

    dependencies: [
        .package(url: "https://github.com/soto-project/soto.git", from: "6.0.0")
    ],

Then add target dependencies for each of the Soto targets you want to use.

    targets: [
        .target(name: "MyApp", dependencies: [
            .product(name: "SotoS3", package: "soto"),
            .product(name: "SotoSES", package: "soto"),
            .product(name: "SotoIAM", package: "soto")
        ]),
    ]
)

Alternatively if you are using Xcode 11 or later you can use the Swift Package Manager integration and add a dependency to Soto through that.

Compatibility

Soto works on Linux, macOS and iOS. It requires v2.0 of Swift NIO. Below is a compatibility table for different Soto versions.

Version | Swift | MacOS | iOS    | Linux              | Vapor
6.x     | 5.4 - | ✓     | 12.0 - | Ubuntu 18.04-22.04 | 4.0
5.x     | 5.2 - | ✓     | 12.0 - | Ubuntu 18.04-20.04 | 4.0
4.x     | 5.0 - | ✓     | 12.0 - | Ubuntu 18.04-20.04 | 4.0

Configuring Credentials

Before using the SDK, you will need AWS credentials to sign all your requests. Credentials can be provided to the library in the following ways.

  • Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  • ECS container IAM policy
  • EC2 IAM instance profile
  • Shared credentials file in your home directory
  • Static credentials provided at runtime

You can find out more about credential providers here.
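For local development, the environment-variable option is usually the quickest of the routes listed above. A minimal sketch (the key values are placeholders, and `MyApp` is a hypothetical executable name, not part of Soto):

```shell
# Export the variables Soto's environment credential provider reads,
# then launch your executable from the same shell session.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"          # placeholder value
export AWS_SECRET_ACCESS_KEY="example-secret-key"  # placeholder value
# swift run MyApp   # hypothetical executable; run it in this shell
```

Because the variables are read at client startup, they must be set in the environment of the process that creates the AWSClient.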

Using Soto

To use Soto you need to create an AWSClient and a service object for the AWS service you want to work with. The AWSClient provides all the communication with AWS and the service object provides the configuration and APIs for communicating with a specific AWS service. More can be found out about AWSClient here and the AWS service objects here.

Each Soto command returns a Swift NIO EventLoopFuture. An EventLoopFuture is not the response of the command, but rather a container object that will be populated with the response at a later point. In this manner calls to AWS do not block the main thread. It is recommended you familiarise yourself with the Swift NIO documentation, specifically EventLoopFuture if you want to take full advantage of Soto.

The recommended manner to interact with EventLoopFutures is chaining. The following function returns an EventLoopFuture that creates an S3 bucket, puts a file in the bucket, reads the file back from the bucket and finally prints the contents of the file. Each of these operations are chained together. The output of one being the input of the next.

import SotoS3 // ensure this module is specified as a dependency in your Package.swift

let bucket = "my-bucket"

let client = AWSClient(
    credentialProvider: .static(accessKeyId: "Your-Access-Key", secretAccessKey: "Your-Secret-Key"),
    httpClientProvider: .createNew
)
let s3 = S3(client: client, region: .uswest2)

func createBucketPutGetObject() -> EventLoopFuture<S3.GetObjectOutput> {
    // Create Bucket, Put an Object, Get the Object
    let createBucketRequest = S3.CreateBucketRequest(bucket: bucket)

    return s3.createBucket(createBucketRequest)
        .flatMap { response -> EventLoopFuture<S3.PutObjectOutput> in
            // Upload text file to the s3
            let bodyData = "hello world".data(using: .utf8)!
            let putObjectRequest = S3.PutObjectRequest(
                acl: .publicRead,
                body: bodyData,
                bucket: bucket,
                key: "hello.txt"
            )
            return s3.putObject(putObjectRequest)
        }
        .flatMap { response -> EventLoopFuture<S3.GetObjectOutput> in
            let getObjectRequest = S3.GetObjectRequest(bucket: bucket, key: "hello.txt")
            return s3.getObject(getObjectRequest)
        }
        .map { response -> S3.GetObjectOutput in
            // Print the contents of the file, then pass the response through
            if let body = response.body {
                print(String(data: body, encoding: .utf8)!)
            }
            return response
        }
}

Build Plugin

Soto is a very large package. If you would rather not include it in your package dependencies, you can instead use the SotoCodeGenerator Swift Package Manager build plugin to generate the Swift source code for only the services/operations you actually need. Find out more here.

Documentation

API Reference

Visit soto.codes to browse the user guides and API reference. As there is a one-to-one correspondence between AWS REST API calls and the Soto API calls, you can also use the official AWS documentation for more detailed information about AWS commands.

User guides

Additional user guides for specific elements of Soto are available.

Contributing

We welcome and encourage contributions from all developers. Please read CONTRIBUTING.md for our contributing guidelines.

Download Details:

Author: Soto-project
Source Code: https://github.com/soto-project/soto 
License: Apache-2.0 license

#swift #aws #sdk #server 
