How to Use ChatGPT To Boost Your Developer Productivity

In this ChatGPT tutorial, we learn how to use ChatGPT to boost your developer productivity.

Introduction:

As a developer, you're constantly searching for ways to increase your productivity and get more done in less time. With the ever-growing demands of the tech industry, it's more important than ever to stay ahead of the curve and work efficiently. Fortunately, OpenAI's ChatGPT provides a solution that can help you increase your productivity and streamline your workflow. In this blog post, we'll explore how ChatGPT can help you boost your productivity as a developer.

Automated Code Generation:

ChatGPT can be trained to generate code based on specific programming languages and frameworks. This can save you hours of time and effort, allowing you to focus on more important tasks. ChatGPT can also generate code snippets, making it easier to quickly implement new features and functionalities.
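For instance, a prompt such as "Write a Python function that removes duplicates from a list while preserving order" might yield a snippet like the following. This is a hypothetical illustration of the kind of output ChatGPT produces, not a captured response:

```python
# Hypothetical example of a ChatGPT-generated snippet for the prompt:
# "Write a Python function that removes duplicates from a list
#  while preserving order."
def dedupe_preserving_order(items):
    """Return a new list with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```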

Documentation and Comments Generation:

ChatGPT can also be trained to generate documentation and comments for your code, making it easier to understand and maintain your code over time. This can save you time and effort in the long run, as well as improve the overall quality of your code.

Research and Data Collection:

ChatGPT can be used to perform research and data collection, allowing you to gather information on specific topics and technologies in a fraction of the time it would take you to do it manually. This can save you hours of time and effort, giving you more time to focus on your coding and development work.

Error Checking:

ChatGPT can also be used to perform error checking and debugging, reducing the amount of time and effort you need to spend on this task. By automating this process, you can spend more time on other important tasks, such as coding and development.

Collaboration:

ChatGPT can be used to collaborate with other developers and team members, allowing you to communicate and share information in real-time. This can improve communication and collaboration within your team, making it easier to get things done and stay on track.

Conclusion:

In conclusion, ChatGPT is a game-changer for developers looking to increase their productivity and streamline their workflow. With its ability to automate code generation, perform error checking, and improve collaboration, ChatGPT is a must-have tool for anyone looking to boost their productivity and get more done in less time. So, why not give it a try and see how it can help you take your productivity to the next level?

Original article sourced at: https://dev.to/manthanbhatt

#chatgpt 

Amy Waelchi

ChatGPT Web: ChatGPT Demo Page Built with Express and Vue 3

ChatGPT Web Bot

ChatGPT demo page built with Express and Vue 3


Usage

Make sure node >= 18

If pnpm is not installed

npm install pnpm -g

install node deps

pnpm install

Sign up for an OpenAI API key and store it in your environment.

# .env
OPENAI_API_KEY="Your Key"

Run service

pnpm run service

Run web

pnpm run dev

.editorconfig

# Editor configuration, see http://editorconfig.org

root = true

[*]
charset = utf-8
indent_style = tab
indent_size = 2
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true

.env

# OpenAI API Key
OPENAI_API_KEY='xxxx'

# Glob API URL
VITE_GLOB_API_URL='http://localhost:3002'

.eslintrc.cjs

module.exports = {
  root: true,
  extends: ['@antfu'],
}

.gitattributes

"*.vue"    eol=lf
"*.js"     eol=lf
"*.ts"     eol=lf
"*.jsx"    eol=lf
"*.tsx"    eol=lf
"*.cjs"    eol=lf
"*.cts"    eol=lf
"*.mjs"    eol=lf
"*.mts"    eol=lf
"*.json"   eol=lf
"*.html"   eol=lf
"*.css"    eol=lf
"*.less"   eol=lf
"*.scss"   eol=lf
"*.sass"   eol=lf
"*.styl"   eol=lf
"*.md"     eol=lf

.gitignore

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
.DS_Store
dist
dist-ssr
coverage
*.local

/cypress/videos/
/cypress/screenshots/

# Editor directories and files
.vscode/*
!.vscode/settings.json
!.vscode/extensions.json
.idea
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

Download details:

Author: Chanzhaoyu
Source code: https://github.com/Chanzhaoyu/chatgpt-web

License: MIT license

#chatgpt #vue 

Kevon Krajcik

The Difference Between ChatGPT Vs GPT-3

In this GPT article, we will learn the difference between ChatGPT and GPT-3. We explore the features and capabilities of these two popular language models developed by OpenAI and discuss how they differ from each other.

Language models are an essential part of natural language processing (NLP), which is a field of artificial intelligence (AI) that focuses on enabling computers to understand and generate human language. ChatGPT and GPT-3 are two popular language models that have been developed by OpenAI, a leading AI research institute. In this blog post, we will explore the features and capabilities of these two models and discuss how they differ from each other.

ChatGPT

A. Overview of ChatGPT

ChatGPT is a state-of-the-art conversational language model that has been trained on a large amount of text data from various sources, including social media, books, and news articles. This model is capable of generating human-like responses to text input, making it suitable for tasks such as chatbots and conversational AI systems.

B. Features and Capabilities of ChatGPT

ChatGPT has several key features and capabilities that make it a powerful language model for NLP tasks. Some of these include:

Human-like responses: ChatGPT has been trained to generate responses that are similar to how a human would respond in a given situation. This allows it to engage in natural, human-like conversations with users.

Contextual awareness: ChatGPT is able to maintain context and track the flow of a conversation, allowing it to provide appropriate responses even in complex or multi-turn conversations.

Large training data: ChatGPT has been trained on a large amount of text data, which has allowed it to learn a wide range of language patterns and styles. This makes it capable of generating diverse and nuanced responses.

C. How ChatGPT Differs From Other Language Models

ChatGPT differs from other language models in several ways.

  1. First, it is specifically designed for conversational tasks, whereas many other language models are more general-purpose and can be used for a wide range of language-related tasks. 
  2. Second, ChatGPT is trained on a large amount of text data from various sources, including social media and news articles, which gives it a wider range of language patterns and styles compared to other models that may be trained on more limited data sets. 
  3. Finally, ChatGPT has been specifically designed to generate human-like responses, making it more suitable for tasks that require natural, human-like conversations.

GPT-3 or Generative Pre-Trained Transformer 3 

A. Overview of GPT-3

GPT-3 is a large-scale language model that has been developed by OpenAI. This model is trained on a massive amount of text data from various sources, including books, articles, and websites. 

It is capable of generating human-like responses to text input and can be used for a wide range of language-related tasks.

B. Features and Capabilities of GPT-3

GPT-3 has several key features and capabilities that make it a powerful language model for NLP tasks. Some of these include:

Large training data: GPT-3 has been trained on a massive amount of text data, which has allowed it to learn a wide range of language patterns and styles. This makes it capable of generating diverse and nuanced responses.

Multiple tasks: GPT-3 can be used for a wide range of language-related tasks, including translation, summarization, and text generation. This makes it a versatile model that can be applied to a variety of applications.

C. How GPT-3 Differs From Other Language Models

GPT-3 differs from other language models in several ways. 

  1. First, it is one of the largest and most powerful language models currently available, with 175 billion parameters. This allows it to learn a wide range of language patterns and styles and generate highly accurate responses. 
  2. Second, GPT-3 is trained on a massive amount of text data from various sources, which gives it a broader range of language patterns and styles compared to other models that may be trained on more limited data sets. 
  3. Finally, GPT-3 is capable of multiple tasks, making it a versatile model that can be applied to a variety of applications.

Comparison of ChatGPT and GPT-3

A. Similarities Between the Two Models

Both ChatGPT and GPT-3 are language models developed by OpenAI that are trained on large amounts of text data from various sources. Both models are capable of generating human-like responses to text input, and both are suitable for tasks such as chatbots and conversational AI systems.

B. Differences Between the Two Models

There are several key differences between ChatGPT and GPT-3. 

  1. First, ChatGPT is specifically designed for conversational tasks, whereas GPT-3 is a more general-purpose model that can be used for a wide range of language-related tasks. 
  2. Second, ChatGPT is trained on a smaller amount of data compared to GPT-3, which may affect its ability to generate diverse and nuanced responses. 
  3. Finally, GPT-3 is significantly larger and more powerful than ChatGPT, with 175 billion parameters compared to only 1.5 billion for ChatGPT.

ChatGPT is a state-of-the-art conversational language model that has been trained on a large amount of text data from various sources, including social media, books, and news articles. This model is capable of generating human-like responses to text input, making it suitable for tasks such as chatbots and conversational AI systems. 

GPT-3, on the other hand, is a large-scale language model that has been trained on a massive amount of text data from various sources. It is capable of generating human-like responses and can be used for a wide range of language-related tasks.

In terms of similarities, both ChatGPT and GPT-3 are trained on large amounts of text data, allowing them to generate human-like responses to text input. They are also both developed by OpenAI and are considered state-of-the-art language models.

However, there are also some key differences between the two models. ChatGPT is specifically designed for conversational tasks, whereas GPT-3 is more general-purpose and can be used for a wider range of language-related tasks. Additionally, ChatGPT is tuned on conversational language patterns and styles, making it better at generating natural, dialogue-like responses than GPT-3.

In terms of when to use each model, ChatGPT is best suited for tasks that require natural, human-like conversations, such as chatbots and conversational AI systems. GPT-3, on the other hand, is best suited for tasks that require a general-purpose language model, such as text generation and translation.

Final Words

In conclusion, understanding the differences between ChatGPT and GPT-3 is important for natural language processing tasks. While both models are highly advanced and capable of generating human-like responses, they have different strengths and are best suited for different types of tasks. By understanding these differences, users can make informed decisions about which model to use for their specific NLP needs.

Original article sourced at: https://dzone.com

#chatgpt #GPT

Fleta Dickens

How to Use ChatGPT's Conversational Nature To Learn Python

In this ChatGPT tutorial we will learn about How to Use ChatGPT's Conversational Nature To Learn Python. ChatGPT is a conversational bot launched by OpenAI in November 2022. This article explores how the conversational nature of ChatGPT can be used to learn Python.

Prerequisite

You need an OpenAI account before you can start interacting with ChatGPT. If you haven’t done so already, sign up for an account on OpenAI’s website. 

What Is ChatGPT?

GPT (Generative Pre-training Transformer) is a type of language model developed by OpenAI that uses deep learning techniques to generate human-like text. ChatGPT is a variant of the GPT model that has been specifically trained to engage in conversation with humans. It is able to generate responses to user input by predicting the next word or phrase in a conversation based on the context of the conversation. ChatGPT is an example of a chatbot, which is a computer program designed to mimic human conversation in a chat or messaging interface. ChatGPT can be used for a variety of purposes, including entertainment, customer service, and education.

The definition above was generated by ChatGPT itself. Impressive, isn’t it? 

Learn Python with ChatGPT

The Human Definition

ChatGPT is the newest “chatbot” in the market. However, its creators prefer calling it a model rather than a bot. It is built on top of OpenAI’s GPT-3.5 family of large language models. The GPT in ChatGPT stands for Generative Pre-trained Transformer. The conversational bot is trained to engage in human-like conversations. It maintains context, admits mistakes, accepts follow-up questions and provides updated information. One thing to remember is that the information provided by ChatGPT comes from its trained models and not from the Internet. ChatGPT has no access to the Internet, and the information it provides is only as fresh as when the model was last updated.

As you start interacting with ChatGPT, you will start realizing that it is astonishing and scary at the same time. Astonishing due to its insane capabilities and its ability to mimic a human-like conversation. Scary, because of the ways these capabilities can be used or misused.

Python

According to its official website, “Python is a programming language that lets you work more quickly and integrate your systems more effectively.” It is a general-purpose, high-level, open-source, cross-platform, multi-paradigm programming language. Python is a scripting language and uses indentation to separate code blocks rather than using braces. It supports multiple programming paradigms, including structured programming, object-oriented programming and functional programming.

Let’s Start Learning

To begin, let’s ask ChatGPT to chart out a learning plan for us.

Note: the response you receive from ChatGPT may be completely different from the ones you see here.

Learn Python with ChatGPT

ChatGPT returns a very verbose list of steps to be followed to learn Python, just like a human tutor does.

Let’s ask “Teach me Python” and see what it returns.

Learn Python with ChatGPT

That, sure enough, is pretty good information about Python and its basics. It starts by explaining what Python is and how basic programming concepts like variables, data types, operators, control flow, and functions are defined and used in Python.

Functions

Let’s dig deeper into the next topics. I’m going to ask ChatGPT to explain how functions are defined and used in Python.

Python Functions

It starts by explaining the syntax for declaring functions, with examples. It also explains in a human-like manner how parameters are defined and used within a function and how parameters can have default values. Then it moves on to explain how functions can have multiple parameters and return multiple values. In each of these steps, it also provides examples of how the functions can be invoked, parameters passed and return values accessed. Notice how each step is accompanied by code examples. Pretty impressive, isn’t it?
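The concepts ChatGPT walks through here can be sketched in a few lines. This is an illustrative example of my own, not ChatGPT's output:

```python
# Sketch of the concepts above: default parameter values,
# multiple parameters, and multiple return values.
def describe_rectangle(width, height=1):
    """Return both the area and the perimeter of a rectangle."""
    area = width * height
    perimeter = 2 * (width + height)
    return area, perimeter  # multiple values are returned as a tuple

area, perimeter = describe_rectangle(3, 4)   # both parameters passed
print(area, perimeter)                       # → 12 14

area, perimeter = describe_rectangle(5)      # height falls back to its default
print(area, perimeter)                       # → 5 12
```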

Create and Consume REST APIs 

Let’s move on to advanced topics. Let’s ask ChatGPT how to create and host a REST API and invoke it. 

Note: I did not specify Python in my question, but ChatGPT still infers it from the context of the conversation.

Python REST API

As you can see, it starts by explaining that to create a REST API in Python, you need to use a web framework such as Flask. It then provides a fully working example of how to define functions that handle the requests and how to define routes and associate them with those functions. Following the code example, it explains how the endpoints work and then shows code examples of how to invoke the APIs.
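A minimal sketch of the kind of Flask app ChatGPT produces for this question might look like the following. The route and data here are hypothetical illustrations, and this assumes Flask is installed:

```python
# Minimal sketch of a Flask REST API. The /books route and the
# sample data are hypothetical illustrations.
from flask import Flask, jsonify

app = Flask(__name__)

BOOKS = [{"id": 1, "title": "Fluent Python"}]

@app.route("/books", methods=["GET"])
def list_books():
    """Handle GET /books by returning all books as JSON."""
    return jsonify(BOOKS)

# To serve the API locally: app.run(port=5000)
```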

Note: the entire conversation feels like you are actually talking to a human tutor and in no way feels like machine-generated instructions.

Download Files From Cloud Storage Providers

ChatGPT even understands incomplete requests, tries to infer the intent of the request and provides appropriate information. For example, if you ask it to write code to download files from the Cloud, it actually returns code that connects to Dropbox using its API and downloads the files.

Python DropBox download files

Data Handling

We’ll move on next to ask a few questions about the data handling capabilities of Python.

Python Dataframe merge

Note: ChatGPT understands that you haven’t discussed data handling earlier, and it introduces the pandas library for joining data frames. Also notice how its description of the pandas library contrasts with its description for the next question: when you ask ChatGPT how to remove duplicates from a data frame, it highlights the data cleaning and manipulation capabilities of the pandas library.

Python Dataframe remove duplicates

Again, ChatGPT answers your question with a fully working code example and explains the code line by line.
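The two pandas operations discussed above, joining data frames and removing duplicate rows, can be sketched as follows. The employee/salary data is a made-up illustration:

```python
# Sketch of the pandas operations discussed above: merging two
# data frames and removing duplicate rows. The data is hypothetical.
import pandas as pd

employees = pd.DataFrame({"emp_id": [1, 2, 2], "name": ["Ann", "Bo", "Bo"]})
salaries = pd.DataFrame({"emp_id": [1, 2], "salary": [50000, 60000]})

# Merge on the shared key column, like a SQL inner join
merged = employees.merge(salaries, on="emp_id", how="inner")

# Drop exact duplicate rows, keeping the first occurrence
deduped = merged.drop_duplicates()
print(deduped)
```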

Mathematical Capabilities

To understand the basic mathematical capabilities of Python, let’s ask ChatGPT to write code to determine whether a number is prime.

Python prime check

The working code example and the detailed explanation of each step help you understand not just the algorithm, but also which programming constructs to use to implement the algorithm in Python.
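The standard algorithm for this problem, which ChatGPT's answer follows, is trial division up to the square root of the number. A sketch:

```python
# Prime check by trial division up to the square root of n.
import math

def is_prime(n):
    """Return True if n is a prime number, False otherwise."""
    if n < 2:
        return False
    for divisor in range(2, math.isqrt(n) + 1):
        if n % divisor == 0:
            return False
    return True

print([x for x in range(2, 20) if is_prime(x)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```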

ChatGPT Warning

Sometimes, ChatGPT can return completely wrong, inaccurate or misleading information, especially for presumptive questions like this. So, it is a good practice to cross check its responses when you are doubtful about them.

ChatGPT can be wrong

More Conversation Ideas

You can continue the conversation and ask more questions like these:

  1. How can I create a web application in Python?
  2. What Machine Language packages are available in Python?
  3. What Data Analysis packages are available in Python?
  4. How can I run Python in AWS Lambda?
  5. Write a Python program to back up a PostgreSQL database.
  6. Write a Python program to exchange data using a TCP channel.
  7. Write a Python program to parse a web page and inspect a text box inside the page.
  8. How do I use loops and control statements in Python?
  9. How do I handle exceptions in Python?
  10. How do I work with files and directories in Python?
  11. How do I use modules and packages in Python?
  12. How do I perform basic data manipulation and analysis with Python libraries such as NumPy and Pandas?
  13. How do I use Python for web development with frameworks such as Django and Flask?
  14. How do I use Python for machine learning with libraries such as scikit-learn?
  15. How do I use classes and object-oriented programming in Python?
  16. How do I use modules and import statements in Python?
  17. What are the differences between lists and tuples in Python?
  18. How do I use a for loop in Python?
  19. How do I work with data structures in Python, such as lists and dictionaries?
  20. How do I install and use third-party libraries in Python?
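To give a flavor of the answers these prompts elicit, here is a sketch of what question 9 (exception handling) might produce. The function is my own illustration, not a captured ChatGPT response:

```python
# Sketch of exception handling with try/except/else,
# as an illustrative answer to question 9 above.
def safe_divide(a, b):
    """Divide a by b, returning None instead of raising on division by zero."""
    try:
        result = a / b
    except ZeroDivisionError:
        return None
    else:
        return result

print(safe_divide(10, 2))  # → 5.0
print(safe_divide(10, 0))  # → None
```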

Further Reading

Let’s ask ChatGPT to tell us where we can find more information about Python.

Learn Python with ChatGPT

Thanking ChatGPT

Like we’d normally thank a human tutor at the end of the course, let’s thank ChatGPT and see how it responds.

Learn Python with ChatGPT

Conclusion

Learning a new programming language is a tedious task. Every language is different and we generally tend to get confused and lost when beginning to learn a new language, and are generally limited by our tendency to compare what we already know to what we are learning. Working code examples help us understand new concepts better and break these limitations.

The methods explained in this article can be used to learn anything about Python. Just ask questions in simply worded, clearly laid out sentences, and ChatGPT will return appropriate responses. Remember that ChatGPT maintains context throughout the conversation, so it is usually not necessary to ask verbose questions. But if you find that ChatGPT has difficulty understanding your statement, rephrase it and try adding some context that you think can help ChatGPT return more relevant and accurate information.

ChatGPT can serve as a very good starting point for learning the basic, intermediate and advanced topics of any language, with code examples, in a short span of time and with a focused approach. It may not always return working code or accurate information, but what it provides is good enough for most languages to get us started with the language or feature, and it serves as a significant step in our journey towards learning the language.

Original article sourced at: https://dzone.com

#python #chatgpt 

Ella Windler

A Simple Light-weight Library That Wraps ChatGPT API Completions

Whetstone.ChatGPT

A simple light-weight library that wraps ChatGPT API completions. Additions to support images and other beta features are in progress.

string apiKey = GetChatGPTKey();

ChatGPTClient client = new ChatGPTClient(apiKey);

var gptRequest = new ChatGPTCompletionRequest
{
    Model = ChatGPTCompletionModels.Ada,
    Prompt = "How is the weather?",
    Temperature = 0.9f,
    MaxTokens = 140
};

var response = await client.GetResponseAsync(gptRequest);

Console.WriteLine(response.Choices?[0]?.Text);

.gitignore

## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.

# Azure Functions localsettings file
local.settings.json

# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates

# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs

# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
bld/
[Bb]in/
[Oo]bj/
[Ll]og/

# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/

# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*

# NUNIT
*.VisualState.xml
TestResult.xml

# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c

# DNX
project.lock.json
project.fragment.lock.json
artifacts/

*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc

# Chutzpah Test files
_Chutzpah*

# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
*.VC.VC.opendb

# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap

# TFS 2012 Local Workspace
$tf/

# Guidance Automation Toolkit
*.gpState

# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user

# JustCode is a .NET coding add-in
.JustCode

# TeamCity is a build add-in
_TeamCity*

# DotCover is a Code Coverage Tool
*.dotCover

# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*

# MightyMoose
*.mm.*
AutoTest.Net/

# Web workbench (sass)
.sass-cache/

# Installshield output folder
[Ee]xpress/

# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html

# Click-Once directory
publish/

# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Comment the next line if you want to checkin your web deploy settings
# but database connection strings (with potential passwords) will be unencrypted
#*.pubxml
*.publishproj

# Microsoft Azure Web App publish settings. Comment the next line if you want to
# checkin your Azure Web App publish settings, but sensitive information contained
# in these scripts will be unencrypted
PublishScripts/

# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produces more ignoreable files
*.nuget.props
*.nuget.targets

# Microsoft Azure Build Output
csx/
*.build.csdef

# Microsoft Azure Emulator
ecf/
rcf/

# Windows Store app package directories and files
AppPackages/
BundleArtifacts/
Package.StoreAssociation.xml
_pkginfo.txt

# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/

# Others
ClientBin/
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.jfm
*.pfx
*.publishsettings
node_modules/
orleans.codegen.cs

# Since there are multiple workflows, uncomment next line to ignore bower_components
# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
#bower_components/

# RIA/Silverlight projects
Generated_Code/

# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm

# SQL Server files
*.mdf
*.ldf

# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings

# Microsoft Fakes
FakesAssemblies/

# GhostDoc plugin setting file
*.GhostDoc.xml

# Node.js Tools for Visual Studio
.ntvs_analysis.dat

# Visual Studio 6 build log
*.plg

# Visual Studio 6 workspace options file
*.opt

# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions

# Paket dependency manager
.paket/paket.exe
paket-files/

# FAKE - F# Make
.fake/

# JetBrains Rider
.idea/
*.sln.iml

# CodeRush
.cr/

# Python Tools for Visual Studio (PTVS)
__pycache__/
*.pyc

Download details:

Author: johniwasz
Source code: https://github.com/johniwasz/whetstone.chatgpt

#chatgpt 

Mary Turcotte

Swift Package Manager for Accessing The OpenAI GPT-3 API

SwiftOpenAI

An SDK written for the OpenAI API. You can get more details about the API from here

Installation

Add this project to your Package.swift

import PackageDescription

let package = Package(
    dependencies: [
        .Package(url: "https://github.com/msege/swiftopenai", majorVersion: 0, minor: 0.1)
    ]
)

Configuration

You have to register the SDK before use. Get your token from the api-keys page. You can also set your Organization ID; you can find it in your organization settings.

import SwiftOpenAI
SwiftOpenAI.register(token: "YOUR-API-TOKEN", organization: "ORGANIZATION-ID")

Usage of SDK

  1. Generate request with related parameters.
let request = RetrieveModelRequest(modelName: "text-davinci-003")

  2. Then either use the REST API in the SDK, or create a URLRequest and use it in your own implementation

NetworkApi.shared.request(request: request) { result in
    // Handle `result` success/failure cases
    switch result {
    case .success(let data):
    // Data will be from `Request.ResponseType`
    // In that case it will be `RetrieveModelResponse`
        print("Data: \(data)")
    case .failure(let error):
        print("Error: \(error)")
    }
}

or

let urlRequest: URLRequest? = request.urlRequest
// Use `urlRequest` in your own implementation
// You can use `Request.ResponseType` while decoding response.

Supported Endpoints

Contact

Sinan Ege – @sinan_ege – mehmetsinanege@gmail.com

Distributed under the MIT license. See LICENSE for more information.



Download details:

Author: msege
Source code: https://github.com/msege/swiftopenai

License: MIT license

#openai #swift #chatgpt  

Mary Turcotte

Violet: A Next-generation Chat Bot Based on OpenAI

V i o l e t G P T

🤖 A next-generation chat bot based on OpenAI, which can access your files, open the web browser and more!

Features

  • Accurate mathematical calculations
  • File system access
  • Voice support (SpeechToText + TextToSpeech)
  • Errors are explained

Planned

  • API access (save your API tokens for various services in a .env file, and the bot will be able to find out e.g. the weather.)
  • Web UI planned (the sandbox has a much higher priority, though! No web UI is going to be worked on before there is a proper sandbox.)

Warning

The sandbox is currently being developed, which means that, in theory, the bot can do anything.

Never, ever run the bot with administrator permissions. You should always make sure the sandbox can't delete any important files! Additionally, never, ever give others access to the sandbox features! Malicious actors could use this for remote code execution.

Support

✅ Debian-Based GNU+Linux Distros (such as Mint 20.2)

✅ Windows 10 (and above)

✅ MacOS (probably - testers needed!)

Troubleshooting

AttributeError: Could not find PyAudio; check installation

pip install pipwin
pipwin install pyaudio

Debian-based GNU+Linux:

sudo apt-get install python3-pyaudio

Example tasks

  • What WiFi am I connected with (on Linux)?
  • Tell me a short story!
  • Translate "Hello there" into Spanish!

Download details:

Author: nsde
Source code: https://github.com/nsde/violet

License: MIT license

#openai #chatgpt 

Mary Turcotte

Discord GPT: Discord Chat Bot That Uses The OpenAI API

Discord GPT

This is a Discord bot that uses the OpenAI API, letting you query and receive very useful information like you would in ChatGPT, except directly within your Discord chats.
 

You can invite it to your server using the following link:
https://discord.com/api/oauth2/authorize?client_id=1050671337049423932&permissions=517544057920&scope=bot

What is ChatGPT?

ChatGPT is a chatbot web application written by the developers at OpenAI to be used in conjunction with their own API that uses GPT-3's neural network of deep learning.

What makes this project different?

I decided that not only was I going to code a Discord bot that uses the OpenAI API, but I was also going to code the Discord bot using the OpenAI API...
 

What on Earth does that mean?
 

It means that in order to code this bot, I will rely on external Internet sources as little as possible, and try to use ChatGPT to help me with almost any question I have in order to complete this project! (as much as possible at least)
 

Why am I doing this?
 

What is the point of creating a project around a deep-learning API if OpenAI's neural networks can't truly answer important questions and help developers write code (as it is promoted)? So this isn't just a challenge for myself, but for OpenAI as well: to see how reliable it really is at performing these kinds of tasks.
There are exceptions, these being the official documentation provided by Discord and OpenAI, and external resources IF neither the official documentation nor ChatGPT can provide the answers. But unless it comes to that, no YouTube tutorials, StackOverflow, GeeksforGeeks, W3Schools, freeCodeCamp, etc.

Process

First, I began by posing the general query to ChatGPT.
(Screenshot: create_bot)
So, I followed its directions. I went onto the [Applications](https://discord.com/developers/applications) page of the Discord Developer Portal and created a new Discord bot (it's important to note that I do have a bit of experience creating and coding Discord bots in the past, though not a lot).

Next, I had the option to add 5 tags for the bot, so I consulted ChatGPT once again!
(Screenshot: tags_query)
I ultimately went with these 5 tags:
(Screenshot: bot_tags)

The following decision though was honestly the hardest, and was something I was trying to think really hard about:
Which programming language do I use to code this bot? Python or JavaScript?
Both are exceptional programming languages, especially for applications such as this. I decided to use Node.js to build this bot, as discord.js is more up to date than discord.py, and hosting Node.js applications is more streamlined than hosting Python applications. In addition, I don't usually code as much in JavaScript as I do in Python, so I figured, "Why not?"

 

So then how was it? Well...honestly not as smooth as I thought it would be.
Any experienced developer who has played around with ChatGPT knows that while the AI is very impressive at delivering large bodies of code and detailed descriptions based on queries for projects and debugging, it most certainly has its limitations.
The main issue, at least for a project like this, is that GPT-3's model was trained on data from before 2022, so questions that require knowledge from this year are very difficult for it. This is especially a problem when many coding libraries, like discord.js and OpenAI's, are constantly changing.
 


In summary, while ChatGPT was able to help me with some code as long as I was specific and took things one step at a time, both it and the official documentation were not as helpful as they should have been (in my humble opinion, at least).


Download details:

Author: adonitakos
Source code: https://github.com/adonitakos/Discord-GPT

#discord #chatgpt #openai


OpenAI Chat-GPT Application in Salesforce Lightning Experience

OpenAI Chat-GPT application in Salesforce Lightning experience ⚡

Standalone Salesforce Lightning web components application created leveraging the OpenAI Chat-GPT API.

✨ Features

  • Completions API interface in Lightning experience.
    Completions UI
  • Edits API interface in Lightning experience.
    Edits UI
  • API settings UI.
    API Settings UI

🚧 Features under development

  • Images API interface in Lightning experience.
  • Chat-GPT request settings UI.

🚫 Limitations

  • Apex HTTP callout timeout governor limit (the maximum is 120 seconds).
  • Package deployment with external named credentials currently appears to have bugs, and Salesforce throws an unknown error when trying to deploy.
    • Fortunately, this behavior can be avoided by deploying the package in stages.
    • Unfortunately, in order to deploy the package in stages, it has to be managed.

📥 Installation

  1. Install the "Pre-Deployment" version (package ID = 04t68000000ko4Z).
  2. Go to the OpenAI Lightning application and click the "Process Pre-Deployment" button.
  3. Upgrade the package to the "Initial Version" (package ID = 04t68000000koce).
  4. The OpenAI API uses API keys for authentication. Visit your API keys page to retrieve your API key and save it after generating it; you will need it to complete the application setup.
  5. Update the API key in the OpenAI Lightning application's API settings section.
  6. If the application still does not work, go to the OpenAI Lightning application's API settings section and click the "Grant Access" button.

https://user-images.githubusercontent.com/55427802/208730474-de75819f-d07e-4768-ac3a-c7e34880f803.mp4

🗑️ Uninstalling

  1. Execute anonymous apex block:

 Database.delete([ SELECT Id FROM PermissionSetAssignment WHERE PermissionSet.Name = 'ChatGPT_Access' ]);

  2. Go to Setup → Platform Tools → Apps → Installed Packages.
  3. Find and uninstall the "Chat-GPT" managed package.

🤝 Development

  • If you have suggestions for how this app could be improved, or want to report a bug, open an issue or write me an email! We'd love any and all contributions.
  • To run this application as an unmanaged package, keep in mind that some component references use the managed package's chatGPT__ namespace. Delete them before deployment.

Download details:

Author: ArnasBaronas
Source code: https://github.com/ArnasBaronas/chat-gpt-sfdc

#openai #chatgpt


Myaibot: A Chatbot Built using The OpenAI GPT-3 Model

myaibot

myaibot, a chatbot built using the OpenAI GPT-3 model.

Introduction

myaibot is a chatbot that uses the OpenAI GPT-3 model to generate responses to user input. It is designed to be simple to use, with a single function myaibot that takes an input string and prints the bot's response.

Installation

To use myaibot, you will need to install the required dependencies:

pip install myaibot

You will also need to set your OpenAI API key as an environment variable.

Usage

To set up your OpenAI API key, you will need to sign up for an OpenAI account and obtain an API key. Here is a step-by-step guide:

1. Go to the OpenAI website (https://openai.com/) and click on the "Sign Up" button in the top right corner.

2. Follow the prompts to create a new OpenAI account. You will need to provide your email address, name, and password.

3. Once your account has been created, go to the "API Keys" page in your account settings. This can be accessed by clicking on your profile icon in the top right corner and selecting "Settings" from the dropdown menu.

4. Click on the "Create New Key" button to create a new API key.

5. Copy the API key to your clipboard.

6. Set the API key as an environment variable. You can do this by adding the following line to your shell configuration file (such as ~/.bashrc or ~/.bash_profile):

export OPENAI_API_KEY="your-api-key"

Replace your-api-key with the API key that you copied in step 5.

7. Reload your shell configuration file by running the following command:
source ~/.bashrc #or source ~/.zshrc or source ~/.bash_profile

depending on which file you edited in step 6. Your OpenAI API key is now set up and ready to use. You can use it in your Python code by accessing the os.environ["OPENAI_API_KEY"] variable.
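The os.environ["OPENAI_API_KEY"] lookup can be wrapped in a small helper that fails loudly when the variable is missing (this helper is our own illustration, not part of myaibot):

```python
import os

def get_openai_api_key():
    """Read the OpenAI API key from the environment, failing loudly if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; add it to your shell profile "
            "and reload it (e.g. `source ~/.bashrc`)."
        )
    return key
```

Failing at startup with a clear message beats the cryptic authentication error you would otherwise get on the first API call.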

To use myaibot, simply call the myaibot function with the input string as an argument:

import myaibot
myaibot.myaibot("Hello, how are you today?")

This will print the bot's response to the user's input.

You can also pass the --debug flag to enable debug output:

python -m myaibot --input "Hello, how are you today?" --debug

Download details:

Author: syedhamidali
Source code: https://github.com/syedhamidali/myaibot

License: MIT license

#openai #chatgpt


ChatGPT But Discord Bot Version using Discord.js and Openai

chatGPT-discord-bot

packages that I use: discord.js, openai

Installation

Clone the repo on your system:

git clone https://github.com/3raphat/simple-discord-bot-template.git

Then, navigate to the directory and install the npm packages:

npm install

Configuration

Rename .env.example to .env and fill out the values:

TOKEN=
CLIENT_ID=
OPENAI_API_KEY=

How to generate an OpenAI API key

Go to https://beta.openai.com/account/api-keys.

Click on Create new secret key button.

Copy the OpenAI API key, then paste it into OPENAI_API_KEY in the .env file.

Start

npm run start

After running the bot, use node deploy-commands.js to deploy the slash commands.


Download details:

Author: 3raphat
Source code: https://github.com/3raphat/chatGPT-discord-bot

License: MIT license

#discord #chatgpt #openai


Natural Language Processing (NLP) Guide

A guide covering Natural Language Processing (NLP), including the applications, libraries, and tools that will make you a better and more efficient Natural Language Processing (NLP) developer.

Note: You can easily convert this markdown file to a PDF in VSCode using this handy extension Markdown PDF.

Back to the Top

Natural Language Processing (NLP) is a branch of artificial intelligence (AI) focused on giving computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics (rule-based modeling of human language) with statistical, machine learning, and deep learning models.

Natural Language Processing With Python's NLTK Package

Cognitive Services—APIs for AI Developers | Microsoft Azure

Artificial Intelligence Services - Amazon Web Services (AWS)

Google Cloud Natural Language API

Top Natural Language Processing Courses Online | Udemy

Introduction to Natural Language Processing (NLP) | Udemy

Top Natural Language Processing Courses | Coursera

Natural Language Processing | Coursera

Natural Language Processing in TensorFlow | Coursera

Learn Natural Language Processing with Online Courses and Lessons | edX

Build a Natural Language Processing Solution with Microsoft Azure | Pluralsight

Natural Language Processing (NLP) Training Courses | NobleProg

Natural Language Processing with Deep Learning Course | Stanford Online

Advanced Natural Language Processing - MIT OpenCourseWare

Certified Natural Language Processing Expert Certification | IABAC

Natural Language Processing Course - Intel

NLP Tools, Libraries, and Frameworks

Back to the Top

Natural Language Toolkit (NLTK) is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, as well as wrappers for industrial-strength NLP libraries.

spaCy is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. spaCy comes with pretrained pipelines and currently supports tokenization and training for 60+ languages. It also features neural network models for tagging, parsing, named entity recognition, text classification and more, multi-task learning with pretrained transformers like BERT.

CoreNLP is a set of natural language analysis tools written in Java. CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.

NLPnet is a Python library for Natural Language Processing tasks based on neural networks. It performs part-of-speech tagging, semantic role labeling and dependency parsing.

Flair is a simple framework for applying state-of-the-art Natural Language Processing (NLP) models to your text, such as named entity recognition (NER), part-of-speech tagging (PoS), special support for biomedical data, and sense disambiguation and classification, with support for a rapidly growing number of languages.

Catalyst is a C# Natural Language Processing library built for speed. Inspired by spaCy's design, it brings pre-trained models, out-of-the-box support for training word and document embeddings, and flexible entity recognition models.

Apache OpenNLP is an open-source machine learning based toolkit for processing natural language text. It features an API for use cases like named entity recognition, sentence detection, POS (part-of-speech) tagging, tokenization, feature extraction, chunking, parsing, and coreference resolution.

DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. These kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.

MLpack is a fast, flexible C++ machine learning library written in C++ and built on the Armadillo linear algebra library, the ensmallen numerical optimization library, and parts of Boost.

OpenNN is an open-source neural networks library for machine learning. It contains sophisticated algorithms and utilities to deal with many artificial intelligence solutions.

Microsoft Cognitive Toolkit (CNTK) is an open-source toolkit for commercial-grade distributed deep learning. It describes neural networks as a series of computational steps via a directed graph. CNTK allows the user to easily realize and combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs) and recurrent neural networks (RNNs/LSTMs). CNTK implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers.

NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN accelerates widely used deep learning frameworks, including Caffe2, Chainer, Keras, MATLAB, MxNet, PyTorch, and TensorFlow.

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.

Tensorflow_macOS is a Mac-optimized version of TensorFlow and TensorFlow Addons for macOS 11.0+ accelerated using Apple's ML Compute framework.

Keras is a high-level neural networks API, written in Python and developed with a focus on enabling fast experimentation. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit (CNTK), R, Theano, or PlaidML.

PyTorch is an open source machine learning framework that provides GPU-accelerated tensor computation and deep neural networks built on a tape-based automatic differentiation system. It is primarily developed by Facebook's AI Research lab.

Eclipse Deeplearning4J (DL4J) is a set of projects intended to support all the needs of a JVM-based(Scala, Kotlin, Clojure, and Groovy) deep learning application. This means starting with the raw data, loading and preprocessing it from wherever and whatever format it is in to building and tuning a wide variety of simple and complex deep learning networks.

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high performance training and inference.

Anaconda is a very popular Data Science platform for machine learning and deep learning that enables users to develop models, train them, and deploy them.

PlaidML is an advanced and portable tensor compiler for enabling deep learning on laptops, embedded devices, or other devices where the available computing hardware is not well supported or the available software stack contains unpalatable license restrictions.

Scikit-Learn is a Python module for machine learning built on top of SciPy, NumPy, and matplotlib, making it easier to apply robust and simple implementations of many popular machine learning algorithms.

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently including tight integration with NumPy.

Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

Apache Spark Connector for SQL Server and Azure SQL is a high-performance connector that enables you to use transactional data in big data analytics and persists results for ad-hoc queries or reporting. The connector allows you to use any SQL database, on-premises or in the cloud, as an input data source or output data sink for Spark jobs.

Apache PredictionIO is an open source machine learning framework for developers, data scientists, and end users. It supports event collection, deployment of algorithms, evaluation, querying predictive results via REST APIs. It is based on scalable open source services like Hadoop, HBase (and other DBs), Elasticsearch, Spark and implements what is called a Lambda Architecture.

Apache Airflow is an open-source workflow management platform created by the community to programmatically author, schedule and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Airflow is ready to scale to infinity.

Open Neural Network Exchange(ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.

BigDL is a distributed deep learning library for Apache Spark. With BigDL, users can write their deep learning applications as standard Spark programs, which can directly run on top of existing Spark or Hadoop clusters.

Numba is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax. Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.

Algorithms

Back to the Top

Fuzzy logic is a heuristic approach that allows for more advanced decision-tree processing and better integration with rules-based programming.

 
 

Architecture of a Fuzzy Logic System. Source: ResearchGate
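As a minimal sketch of the core idea (graded membership rather than hard true/false), here is a triangular membership function in Python; the "warm temperature" set and its bounds are made-up assumptions:

```python
def triangular_membership(x, left, peak, right):
    """Degree (0..1) to which x belongs to a triangle-shaped fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)   # rising edge
    return (right - x) / (right - peak)     # falling edge

# How "warm" is 22 degrees, if "warm" ramps up from 15, peaks at 25, fades by 35?
warmth = triangular_membership(22, 15, 25, 35)  # 0.7
```

A fuzzy rule engine would combine such membership degrees across several sets to drive its decisions.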

Support Vector Machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems.

 
 

Support Vector Machine (SVM). Source:OpenClipArt
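Training an SVM requires a library, but the model it produces reduces to a linear decision function; a toy sketch with hand-picked (not learned) weights:

```python
def svm_decision(x, w, b):
    """Sign of w.x + b: which side of the separating hyperplane x falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Hand-picked hyperplane separating points above/below the line x1 + x2 = 1.
w, b = [1.0, 1.0], -1.0
svm_decision([2.0, 2.0], w, b)  # 1  (above the line)
svm_decision([0.0, 0.0], w, b)  # -1 (below the line)
```

What SVM training actually does is choose w and b so that this hyperplane separates the two classes with the largest possible margin.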

Neural networks are a subset of machine learning and are at the heart of deep learning algorithms. The name/structure is inspired by the human brain copying the process that biological neurons/nodes signal to one another.

 
 

Deep neural network. Source: IBM
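The node-to-node signalling described above boils down to a weighted sum passed through an activation function; a single artificial neuron in plain Python (the weights are arbitrary illustrative values):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the output into (0, 1)

activation = neuron([0.5, 0.3], [0.8, -0.2], 0.1)
```

A deep network is many layers of such neurons, with the weights and biases adjusted during training.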

Region-based Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potentially relevant bounding boxes and then runs a detection algorithm to find the most probable objects in those bounding boxes.

 
 

Convolutional Neural Networks. Source:CS231n

Recurrent neural networks (RNNs) are a type of artificial neural network that uses sequential data or time series data.

 
 

Recurrent Neural Networks. Source: Slideteam
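The use of sequential data can be sketched as a hidden state updated one timestep at a time (scalar weights here for brevity and as illustrative assumptions; real RNNs use weight matrices):

```python
import math

def rnn_scan(xs, w_in=0.5, w_rec=0.8):
    """Fold a sequence into a hidden state: h_t = tanh(w_in*x_t + w_rec*h_{t-1})."""
    h = 0.0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h

state = rnn_scan([1.0, 0.0, 1.0])  # depends on the whole sequence, not just the last input
```

Because the previous hidden state feeds into each step, the final state encodes the order of the inputs, which is what makes RNNs suited to text and time series.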

Multilayer Perceptrons (MLPs) are multi-layer neural networks composed of multiple layers of perceptrons with a threshold activation.

 
 

Multilayer Perceptrons. Source: DeepAI

Random forest is a commonly used machine learning algorithm that combines the output of multiple decision trees, each trained on a random sample of the data, to reach a single result. Its ease of use and flexibility have fueled its adoption, as it handles both classification and regression problems.

 
 

Random forest. Source: wikimedia
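Combining the output of multiple decision trees amounts to an ensemble vote; a toy sketch where hand-written decision stumps stand in for trained trees (the spam features are made up):

```python
from collections import Counter

def forest_predict(x, trees):
    """Majority vote over an ensemble of classifiers (each tree maps x -> label)."""
    votes = [tree(x) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Three hand-written "decision stumps" playing the role of trained trees.
stumps = [
    lambda x: "spam" if x["links"] > 3 else "ham",
    lambda x: "spam" if x["caps_ratio"] > 0.5 else "ham",
    lambda x: "spam" if x["length"] < 20 else "ham",
]
forest_predict({"links": 5, "caps_ratio": 0.7, "length": 100}, stumps)  # "spam" (2 of 3 votes)
```

Training each tree on a different random sample of the data is what keeps the individual errors uncorrelated, so the vote is more accurate than any single tree.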

Decision trees are tree-structured models for classification and regression.

 
 

Decision Trees. Source: CMU

Naive Bayes is a machine learning algorithm that is used to solve classification problems. It is based on applying Bayes' theorem with strong independence assumptions between the features.

 
 

Bayes' theorem. Source:mathisfun
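Bayes' theorem itself is simple arithmetic; a worked example in Python with made-up probabilities:

```python
def bayes_posterior(prior, likelihood, evidence):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence

# Made-up numbers: P(spam) = 0.2, P(word|spam) = 0.6, P(word) = 0.3.
posterior = bayes_posterior(0.2, 0.6, 0.3)  # ≈ 0.4, i.e. P(spam|word)
```

Naive Bayes applies this update once per feature, multiplying the likelihoods as if the features were independent.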

Machine Learning

Back to the Top

 
 

Machine Learning/Deep Learning Frameworks.

Learning Resources for ML

Machine Learning is a branch of artificial intelligence (AI) focused on building applications that use algorithms to learn from data and improve their accuracy over time without being explicitly programmed.

Machine Learning by Stanford University from Coursera

AWS Training and Certification for Machine Learning (ML) Courses

Machine Learning Scholarship Program for Microsoft Azure from Udacity

Microsoft Certified: Azure Data Scientist Associate

Microsoft Certified: Azure AI Engineer Associate

Azure Machine Learning training and deployment

Learning Machine learning and artificial intelligence from Google Cloud Training

Machine Learning Crash Course for Google Cloud

JupyterLab

Scheduling Jupyter notebooks on Amazon SageMaker ephemeral instances

How to run Jupyter Notebooks in your Azure Machine Learning workspace

Machine Learning Courses Online from Udemy

Machine Learning Courses Online from Coursera

Learn Machine Learning with Online Courses and Classes from edX

ML Frameworks, Libraries, and Tools

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.

Keras is a high-level neural networks API, written in Python and developed with a focus on enabling fast experimentation. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit (CNTK), R, Theano, or PlaidML.

PyTorch is an open source machine learning framework that provides GPU-accelerated tensor computation and deep neural networks built on a tape-based automatic differentiation system. It is primarily developed by Facebook's AI Research lab.

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.

Azure Databricks is a fast and collaborative Apache Spark-based big data analytics service designed for data science and data engineering. Azure Databricks sets up your Apache Spark environment in minutes, autoscales, and lets you collaborate on shared projects in an interactive workspace. It supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn.

Microsoft Cognitive Toolkit (CNTK) is an open-source toolkit for commercial-grade distributed deep learning. It describes neural networks as a series of computational steps via a directed graph. CNTK allows the user to easily realize and combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs) and recurrent neural networks (RNNs/LSTMs). CNTK implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers.

Apple CoreML is a framework that helps integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user's device. A model is the result of applying a machine learning algorithm to a set of training data. You use a model to make predictions based on new input data.

Tensorflow_macOS is a Mac-optimized version of TensorFlow and TensorFlow Addons for macOS 11.0+ accelerated using Apple's ML Compute framework.

Apache OpenNLP is an open-source machine learning based toolkit for processing natural language text. It features an API for use cases like named entity recognition, sentence detection, POS (part-of-speech) tagging, tokenization, feature extraction, chunking, parsing, and coreference resolution.

Apache Airflow is an open-source workflow management platform created by the community to programmatically author, schedule and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Airflow is ready to scale to infinity.

Open Neural Network Exchange(ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.

Apache MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines. Support for Python, R, Julia, Scala, Go, Javascript and more.

AutoGluon is a toolkit for deep learning that automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data.

Anaconda is a very popular Data Science platform for machine learning and deep learning that enables users to develop models, train them, and deploy them.

PlaidML is an advanced and portable tensor compiler for enabling deep learning on laptops, embedded devices, or other devices where the available computing hardware is not well supported or the available software stack contains unpalatable license restrictions.

OpenCV is a highly optimized library with focus on real-time computer vision applications. The C++, Python, and Java interfaces support Linux, MacOS, Windows, iOS, and Android.

Scikit-Learn is a Python module for machine learning built on top of SciPy, NumPy, and matplotlib, making it easier to apply robust and simple implementations of many popular machine learning algorithms.

Weka is an open source machine learning software that can be accessed through a graphical user interface, standard terminal applications, or a Java API. It is widely used for teaching, research, and industrial applications, contains a plethora of built-in tools for standard machine learning tasks, and additionally gives transparent access to well-known toolboxes such as scikit-learn, R, and Deeplearning4j.

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently including tight integration with NumPy.

nGraph is an open source C++ library, compiler and runtime for deep learning. The nGraph Compiler aims to accelerate developing AI workloads using any deep learning framework and deploying to a variety of hardware targets. It provides freedom, performance, and ease of use to AI developers.

NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN accelerates widely used deep learning frameworks, including Caffe2, Chainer, Keras, MATLAB, MxNet, PyTorch, and TensorFlow.

Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Jupyter is used widely in industries that do data cleaning and transformation, numerical simulation, statistical modeling, data visualization, data science, and machine learning.

Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

Apache Spark Connector for SQL Server and Azure SQL is a high-performance connector that enables you to use transactional data in big data analytics and persists results for ad-hoc queries or reporting. The connector allows you to use any SQL database, on-premises or in the cloud, as an input data source or output data sink for Spark jobs.

Apache PredictionIO is an open source machine learning framework for developers, data scientists, and end users. It supports event collection, deployment of algorithms, evaluation, querying predictive results via REST APIs. It is based on scalable open source services like Hadoop, HBase (and other DBs), Elasticsearch, Spark and implements what is called a Lambda Architecture.

Cluster Manager for Apache Kafka(CMAK) is a tool for managing Apache Kafka clusters.

BigDL is a distributed deep learning library for Apache Spark. With BigDL, users can write their deep learning applications as standard Spark programs, which can directly run on top of existing Spark or Hadoop clusters.

Eclipse Deeplearning4J (DL4J) is a set of projects intended to support all the needs of a JVM-based(Scala, Kotlin, Clojure, and Groovy) deep learning application. This means starting with the raw data, loading and preprocessing it from wherever and whatever format it is in to building and tuning a wide variety of simple and complex deep learning networks.

Tensorman is a utility for easy management of TensorFlow containers, developed by System76. Tensorman allows TensorFlow to operate in an isolated environment that is contained from the rest of the system. This virtual environment can operate independently of the base system, allowing you to use any version of TensorFlow on any version of a Linux distribution that supports the Docker runtime.

Numba is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax. Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high performance training and inference.

XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. It implements machine learning algorithms under the Gradient Boosting framework. XGBoost provides parallel tree boosting (also known as GBDT or GBM) that solves many data science problems in a fast and accurate way. It supports distributed training on multiple machines, including AWS, GCE, Azure, and Yarn clusters. It can also be integrated with Flink, Spark, and other cloud dataflow systems.

cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn.

CUDA Development

Back to the Top

CUDA Toolkit. Source: NVIDIA Developer CUDA

CUDA Learning Resources

CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. In GPU-accelerated applications, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance. The compute-intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers can program in popular languages such as C, C++, Fortran, Python, and MATLAB.

CUDA Toolkit Documentation

CUDA Quick Start Guide

CUDA on WSL

CUDA GPU support for TensorFlow

NVIDIA Deep Learning cuDNN Documentation

NVIDIA GPU Cloud Documentation

NVIDIA NGC is a hub for GPU-optimized software for deep learning, machine learning, and high-performance computing (HPC) workloads.

NVIDIA NGC Containers is a registry that provides researchers, data scientists, and developers with simple access to a comprehensive catalog of GPU-accelerated software for AI, machine learning and HPC. These containers take full advantage of NVIDIA GPUs on-premises and in the cloud.

CUDA Tools, Libraries, and Frameworks

CUDA Toolkit is a collection of tools & libraries that provide a development environment for creating high performance GPU-accelerated applications. The CUDA Toolkit lets you develop, optimize, and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms, and HPC supercomputers. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler, and a runtime library to build and deploy your application on major architectures including x86, Arm, and POWER.

NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN accelerates widely used deep learning frameworks, including Caffe2, Chainer, Keras, MATLAB, MxNet, PyTorch, and TensorFlow.

CUDA-X HPC is a collection of libraries, tools, compilers and APIs that help developers solve the world's most challenging problems. CUDA-X HPC includes highly tuned kernels essential for high-performance computing (HPC).

NVIDIA Container Toolkit is a collection of tools & libraries that allows users to build and run GPU accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.

Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors.

CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-multiplication (GEMM) at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS.

CUB is a library of cooperative primitives for CUDA C++ kernel authors.

Tensorman is a utility for easy management of TensorFlow containers, developed by System76. Tensorman allows TensorFlow to operate in an isolated environment that is contained from the rest of the system. This virtual environment can operate independently of the base system, allowing you to use any version of TensorFlow on any version of a Linux distribution that supports the Docker runtime.

Numba is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax. Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high performance training and inference.

CuPy is an implementation of NumPy-compatible multi-dimensional array on CUDA. CuPy consists of the core multi-dimensional array class, cupy.ndarray, and many functions on it. It supports a subset of numpy.ndarray interface.

CatBoost is a fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data. cuDF provides a pandas-like API that will be familiar to data engineers & data scientists, so they can use it to easily accelerate their workflows without going into the details of CUDA programming.

cuML is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn.

ArrayFire is a general-purpose library that simplifies the process of developing software that targets parallel and massively-parallel architectures including CPUs, GPUs, and other hardware acceleration devices.

Thrust is a C++ parallel programming library which resembles the C++ Standard Library. Thrust's high-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs.

AresDB is a GPU-powered real-time analytics storage and query engine. It features low query latency, high data freshness and highly efficient in-memory and on disk storage management.

Arraymancer is a tensor (N-dimensional array) project in Nim. The main focus is providing a fast and ergonomic CPU, Cuda and OpenCL ndarray library on which to build a scientific computing ecosystem.

Kintinuous is a real-time dense visual SLAM system capable of producing high quality globally consistent point and mesh reconstructions over hundreds of metres in real-time with only a low-cost commodity RGB-D sensor.

GraphVite is a general graph embedding engine, dedicated to high-speed and large-scale embedding learning in various applications.

MATLAB Development

Back to the Top

MATLAB Learning Resources

MATLAB is a programming language that does numerical computing such as expressing matrix and array mathematics directly.

MATLAB Documentation

Getting Started with MATLAB

MATLAB and Simulink Training from MATLAB Academy

MathWorks Certification Program

MATLAB Online Courses from Udemy

MATLAB Online Courses from Coursera

MATLAB Online Courses from edX

Building a MATLAB GUI

MATLAB Style Guidelines 2.0

Setting Up Git Source Control with MATLAB & Simulink

Pull, Push and Fetch Files with Git with MATLAB & Simulink

Create New Repository with MATLAB & Simulink

PRMLT is Matlab code for machine learning algorithms in the PRML book.

MATLAB Tools, Libraries, Frameworks

MATLAB and Simulink Services & Applications List

MATLAB in the Cloud is a service that allows you to run in cloud environments from MathWorks Cloud to Public Clouds including AWS and Azure.

MATLAB Online™ is a service that allows users to utilize MATLAB and Simulink through a web browser such as Google Chrome.

Simulink is a block diagram environment for Model-Based Design. It supports simulation, automatic code generation, and continuous testing of embedded systems.

Simulink Online™ is a service that provides access to Simulink through your web browser.

MATLAB Drive™ is a service that gives you the ability to store, access, and work with your files from anywhere.

MATLAB Parallel Server™ is a tool that lets you scale MATLAB® programs and Simulink® simulations to clusters and clouds. You can prototype your programs and simulations on the desktop and then run them on clusters and clouds without recoding. MATLAB Parallel Server supports batch jobs, interactive parallel computations, and distributed computations with large matrices.

MATLAB Schemer is a MATLAB package that makes it easy to change the color scheme (theme) of the MATLAB display and GUI.

LRSLibrary provides low-rank and sparse tools for background modeling and subtraction in videos. The library was designed for moving object detection in videos, but it can also be used for other computer vision and machine learning problems.

Image Processing Toolbox™ is a tool that provides a comprehensive set of reference-standard algorithms and workflow apps for image processing, analysis, visualization, and algorithm development. You can perform image segmentation, image enhancement, noise reduction, geometric transformations, image registration, and 3D image processing.

Computer Vision Toolbox™ is a tool that provides algorithms, functions, and apps for designing and testing computer vision, 3D vision, and video processing systems. You can perform object detection and tracking, as well as feature detection, extraction, and matching. You can automate calibration workflows for single, stereo, and fisheye cameras. For 3D vision, the toolbox supports visual and point cloud SLAM, stereo vision, structure from motion, and point cloud processing.

Statistics and Machine Learning Toolbox™ is a tool that provides functions and apps to describe, analyze, and model data. You can use descriptive statistics, visualizations, and clustering for exploratory data analysis; fit probability distributions to data; generate random numbers for Monte Carlo simulations, and perform hypothesis tests. Regression and classification algorithms let you draw inferences from data and build predictive models either interactively, using the Classification and Regression Learner apps, or programmatically, using AutoML.

Lidar Toolbox™ is a tool that provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. Lidar Toolbox supports lidar-camera cross calibration for workflows that combine computer vision and lidar processing.

Mapping Toolbox™ is a tool that provides algorithms and functions for transforming geographic data and creating map displays. You can visualize your data in a geographic context, build map displays from more than 60 map projections, and transform data from a variety of sources into a consistent geographic coordinate system.

UAV Toolbox is an application that provides tools and reference applications for designing, simulating, testing, and deploying unmanned aerial vehicle (UAV) and drone applications. You can design autonomous flight algorithms, UAV missions, and flight controllers. The Flight Log Analyzer app lets you interactively analyze 3D flight paths, telemetry information, and sensor readings from common flight log formats.

Parallel Computing Toolbox™ is a tool that lets you solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters. High-level constructs such as parallel for-loops, special array types, and parallelized numerical algorithms enable you to parallelize MATLAB® applications without CUDA or MPI programming. The toolbox lets you use parallel-enabled functions in MATLAB and other toolboxes. You can use the toolbox with Simulink® to run multiple simulations of a model in parallel. Programs and models can run in both interactive and batch modes.

Partial Differential Equation Toolbox™ is a tool that provides functions for solving structural mechanics, heat transfer, and general partial differential equations (PDEs) using finite element analysis.

ROS Toolbox is a tool that provides an interface connecting MATLAB® and Simulink® with the Robot Operating System (ROS and ROS 2), enabling you to create a network of ROS nodes. The toolbox includes MATLAB functions and Simulink blocks to import, analyze, and play back ROS data recorded in rosbag files. You can also connect to a live ROS network to access ROS messages.

Robotics Toolbox™ provides robotics-specific functionality (designing, simulating, and testing manipulators, mobile robots, and humanoid robots) in MATLAB, exploiting the native capabilities of MATLAB (linear algebra, portability, graphics). The toolbox also supports mobile robots with functions for robot motion models (bicycle), path planning algorithms (bug, distance transform, D*, PRM), kinodynamic planning (lattice, RRT), localization (EKF, particle filter), map building (EKF), and simultaneous localization and mapping (EKF), along with a Simulink model of a non-holonomic vehicle. The toolbox also includes a detailed Simulink model of a quadrotor flying robot.

Deep Learning Toolbox™ is a tool that provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps. You can use convolutional neural networks (ConvNets, CNNs) and long short-term memory (LSTM) networks to perform classification and regression on image, time-series, and text data. You can build network architectures such as generative adversarial networks (GANs) and Siamese networks using automatic differentiation, custom training loops, and shared weights. With the Deep Network Designer app, you can design, analyze, and train networks graphically. It can exchange models with TensorFlow™ and PyTorch through the ONNX format and import models from TensorFlow-Keras and Caffe. The toolbox supports transfer learning with DarkNet-53, ResNet-50, NASNet, SqueezeNet and many other pretrained models.

Reinforcement Learning Toolbox™ is a tool that provides an app, functions, and a Simulink® block for training policies using reinforcement learning algorithms, including DQN, PPO, SAC, and DDPG. You can use these policies to implement controllers and decision-making algorithms for complex applications such as resource allocation, robotics, and autonomous systems.

Deep Learning HDL Toolbox™ is a tool that provides functions and tools to prototype and implement deep learning networks on FPGAs and SoCs. It provides pre-built bitstreams for running a variety of deep learning networks on supported Xilinx® and Intel® FPGA and SoC devices. Profiling and estimation tools let you customize a deep learning network by exploring design, performance, and resource utilization tradeoffs.

Model Predictive Control Toolbox™ is a tool that provides functions, an app, and Simulink® blocks for designing and simulating controllers using linear and nonlinear model predictive control (MPC). The toolbox lets you specify plant and disturbance models, horizons, constraints, and weights. By running closed-loop simulations, you can evaluate controller performance.

Vision HDL Toolbox™ is a tool that provides pixel-streaming algorithms for the design and implementation of vision systems on FPGAs and ASICs. It provides a design framework that supports a diverse set of interface types, frame sizes, and frame rates. The image processing, video, and computer vision algorithms in the toolbox use an architecture appropriate for HDL implementations.

SoC Blockset™ is a tool that provides Simulink® blocks and visualization tools for modeling, simulating, and analyzing hardware and software architectures for ASICs, FPGAs, and systems on a chip (SoC). You can build your system architecture using memory models, bus models, and I/O models, and simulate the architecture together with the algorithms.

Wireless HDL Toolbox™ is a tool that provides pre-verified, hardware-ready Simulink® blocks and subsystems for developing 5G, LTE, and custom OFDM-based wireless communication applications. It includes reference applications, IP blocks, and gateways between frame and sample-based processing.

ThingSpeak™ is an IoT analytics service that allows you to aggregate, visualize, and analyze live data streams in the cloud. ThingSpeak provides instant visualizations of data posted by your devices to ThingSpeak. With the ability to execute MATLAB® code in ThingSpeak, you can perform online analysis and process data as it comes in. ThingSpeak is often used for prototyping and proof-of-concept IoT systems that require analytics.

SEA-MAT is a collaborative effort to organize and distribute Matlab tools for the Oceanographic Community.

Gramm is a complete data visualization toolbox for Matlab. It provides an easy to use and high-level interface to produce publication-quality plots of complex data with varied statistical visualizations. Gramm is inspired by R's ggplot2 library.

hctsa is a software package for running highly comparative time-series analysis using Matlab.

Plotly is a Graphing Library for MATLAB.

YALMIP is a MATLAB toolbox for optimization modeling.

GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation.

C/C++ Development

Back to the Top

C/C++ Learning Resources

C++ is a cross-platform language for building high-performance applications, developed by Bjarne Stroustrup as an extension of the C language.

C is a general-purpose, high-level language that was originally developed by Dennis M. Ritchie to develop the UNIX operating system at Bell Labs. It supports structured programming, lexical variable scope, and recursion, with a static type system. C also provides constructs that map efficiently to typical machine instructions, which makes it one of the most widely used programming languages today.

Embedded C is a set of language extensions for the C programming language by the C Standards Committee to address issues that exist between C extensions for different embedded systems. The extensions help expose microprocessor features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations. This makes Embedded C the most popular embedded software language in the world.

C & C++ Developer Tools from JetBrains

Open source C++ libraries on cppreference.com

C++ Graphics libraries

C++ Libraries in MATLAB

C++ Tools and Libraries Articles

Google C++ Style Guide

Introduction C++ Education course on Google Developers

C++ style guide for Fuchsia

C and C++ Coding Style Guide by OpenTitan

Chromium C++ Style Guide

C++ Core Guidelines

C++ Style Guide for ROS

Learn C++

Learn C : An Interactive C Tutorial

C++ Institute

C++ Online Training Courses on LinkedIn Learning

C++ Tutorials on W3Schools

Learn C Programming Online Courses on edX

Learn C++ with Online Courses on edX

Learn C++ on Codecademy

Coding for Everyone: C and C++ course on Coursera

C++ For C Programmers on Coursera

Top C Courses on Coursera

C++ Online Courses on Udemy

Top C Courses on Udemy

Basics of Embedded C Programming for Beginners on Udemy

C++ For Programmers Course on Udacity

C++ Fundamentals Course on Pluralsight

Introduction to C++ on MIT Free Online Course Materials

Introduction to C++ for Programmers | Harvard

Online C Courses | Harvard University

C/C++ Tools and Frameworks

AWS SDK for C++

Azure SDK for C++

Azure SDK for C

C++ Client Libraries for Google Cloud Services

Visual Studio is an integrated development environment (IDE) from Microsoft: a feature-rich application used for many aspects of software development. Visual Studio makes it easy to edit, debug, build, and publish apps built on Microsoft development platforms such as the Windows API, Windows Forms, Windows Presentation Foundation, and the Windows Store.

Visual Studio Code is a code editor redefined and optimized for building and debugging modern web and cloud applications.

Vcpkg is a C++ Library Manager for Windows, Linux, and MacOS.

ReSharper C++ is a Visual Studio Extension for C++ developers developed by JetBrains.

AppCode is a JetBrains IDE that constantly monitors the quality of your code. It warns you of errors and smells and suggests quick-fixes to resolve them automatically. AppCode provides lots of code inspections for Objective-C, Swift, and C/C++, and a number of code inspections for other supported languages. All code inspections are run on the fly.

CLion is a cross-platform IDE for C and C++ developers developed by JetBrains.

Code::Blocks is a free C/C++ and Fortran IDE built to meet the most demanding needs of its users. It is designed to be very extensible and fully configurable. Built around a plugin framework, Code::Blocks can be extended with plugins.

CppSharp is a tool and set of libraries which facilitates the usage of native C/C++ code with the .NET ecosystem. It consumes C/C++ header and library files and generates the necessary glue code to surface the native API as a managed API. Such an API can be used to consume an existing native library in your managed code or add managed scripting support to a native codebase.

Conan is an open source package manager for C/C++ development, bringing dependency management on par with other development ecosystems.

High Performance Computing (HPC) SDK is a comprehensive toolbox for GPU accelerating HPC modeling and simulation applications. It includes the C, C++, and Fortran compilers, libraries, and analysis tools necessary for developing HPC applications on the NVIDIA platform.

Thrust is a C++ parallel programming library which resembles the C++ Standard Library. Thrust's high-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software.

Boost is a set of free, peer-reviewed, portable C++ source libraries that work well with the C++ Standard Library. Boost has also been a participant in the annual Google Summer of Code since 2007, in which students develop their skills by working on Boost Library development.

Automake is a tool for automatically generating Makefile.in files compliant with the GNU Coding Standards. Automake requires the use of GNU Autoconf.

CMake is an open-source, cross-platform family of tools designed to build, test, and package software. CMake is used to control the software compilation process using simple platform- and compiler-independent configuration files, and to generate native makefiles and workspaces that can be used in the compiler environment of your choice.

GDB is a debugger that allows you to see what is going on "inside" another program while it executes, or what another program was doing at the moment it crashed.

GCC (the GNU Compiler Collection) includes front ends for C, C++, Objective-C, Fortran, Ada, Go, and D, as well as libraries for these languages.

GSL is a numerical library for C and C++ programmers. It is free software under the GNU General Public License. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total with an extensive test suite.

OpenGL Extension Wrangler Library (GLEW) is a cross-platform open-source C/C++ extension loading library. GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform.

Libtool is a generic library support script that hides the complexity of using shared libraries behind a consistent, portable interface. To use Libtool, add the new generic library building commands to your Makefile, Makefile.in, or Makefile.am.

Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.

TAU (Tuning And Analysis Utilities) is capable of gathering performance information through instrumentation of functions, methods, basic blocks, and statements as well as event-based sampling. All C++ language features are supported including templates and namespaces.

Clang is a production quality C, Objective-C, C++ and Objective-C++ compiler when targeting X86-32, X86-64, and ARM (other targets may have caveats, but are usually easy to fix). Clang is used in production to build performance-critical software like Google Chrome or Firefox.

OpenCV is a highly optimized library with focus on real-time applications. Cross-Platform C++, Python and Java interfaces support Linux, MacOS, Windows, iOS, and Android.

Libcu++ is the NVIDIA C++ Standard Library for your entire system. It provides a heterogeneous implementation of the C++ Standard Library that can be used in and between CPU and GPU code.

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface that makes it easy to respond to the recognition of phrases of interest.

Oat++ is a light and powerful C++ web framework for highly scalable and resource-efficient web applications. It is zero-dependency and easily portable.

JavaCPP is a program that provides efficient access to native C++ inside Java, not unlike the way some C/C++ compilers interact with assembly language.

Cython is a language that makes writing C extensions for Python as easy as Python itself. Cython is based on Pyrex, but supports more cutting edge functionality and optimizations such as calling C functions and declaring C types on variables and class attributes.

Spdlog is a very fast, header-only/compiled, C++ logging library.

Infer is a static analysis tool for Java, C++, Objective-C, and C. Infer is written in OCaml.

Python Development

Back to the Top

Python Learning Resources

Python is an interpreted, high-level programming language. Python is used heavily in the fields of Data Science and Machine Learning.

Python Developer’s Guide is a comprehensive resource for contributing to Python – for both new and experienced contributors. It is maintained by the same community that maintains Python.

Azure Functions Python developer guide is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the Azure Functions developers guide.

CheckiO is a programming learning platform and a gamified website that teaches Python through solving code challenges and competing for the most elegant and creative solutions.

Python Institute

PCEP – Certified Entry-Level Python Programmer certification

PCAP – Certified Associate in Python Programming certification

PCPP – Certified Professional in Python Programming 1 certification

PCPP – Certified Professional in Python Programming 2

MTA: Introduction to Programming Using Python Certification

Getting Started with Python in Visual Studio Code

Google's Python Style Guide

Google's Python Education Class

Real Python

The Python Open Source Computer Science Degree by Forrest Knight

Intro to Python for Data Science

Intro to Python by W3schools

Codecademy's Python 3 course

Learn Python with Online Courses and Classes from edX

Python Courses Online from Coursera

Python Frameworks, Libraries, and Tools

Python Package Index (PyPI) is a repository of software for the Python programming language. PyPI helps you find and install software developed and shared by the Python community.

PyCharm is a Python IDE from JetBrains. With PyCharm, you can access the command line, connect to a database, create a virtual environment, and manage your version control system all in one place, saving time by avoiding constant switching between windows.

Python Tools for Visual Studio (PTVS) is a free, open source plugin that turns Visual Studio into a Python IDE. It supports editing, browsing, IntelliSense, mixed Python/C++ debugging, remote Linux/MacOS debugging, profiling, IPython, and web development with Django and other frameworks.

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries.
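Flask's "micro" design can be seen in a short sketch (assuming Flask is installed; the `/health` route name is purely illustrative), exercised here with the built-in test client so no server needs to run:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Return a small JSON payload for the illustrative health-check route.
    return jsonify(status="ok")

# Flask's test client lets you call routes without starting a server.
with app.test_client() as client:
    resp = client.get("/health")
    print(resp.get_json()["status"])  # → ok
```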

Web2py is an open-source web application framework written in Python that allows web developers to program dynamic web content. One web2py instance can run multiple web sites using different databases.

AWS Chalice is a framework for writing serverless apps in python. It allows you to quickly create and deploy applications that use AWS Lambda.

Tornado is a Python web framework and asynchronous networking library. Tornado uses non-blocking network I/O, which can scale to tens of thousands of open connections.

HTTPie is a command line HTTP client that makes CLI interaction with web services as easy as possible. HTTPie is designed for testing, debugging, and generally interacting with APIs & HTTP servers.

Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

Sentry is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application.

Pipenv is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world.

Python Fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object.

Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.

CherryPy is a minimalist Python object-oriented HTTP web framework.

Sanic is a Python 3.6+ web server and web framework that's written to go fast.

Pyramid is a small and fast open source Python web framework. It makes real-world web application development and deployment more fun and more productive.

TurboGears is a hybrid web framework able to act both as a Full Stack framework or as a Microframework.

Falcon is a reliable, high-performance Python web framework for building large-scale app backends and microservices.

Neural Network Intelligence (NNI) is an open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.

Dash is a popular Python framework for building ML & data science web apps for Python, R, Julia, and Jupyter.

Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, and more. It also comes with built-in Hadoop support.

Locust is an easy to use, scriptable and scalable performance testing tool.

spaCy is a library for advanced Natural Language Processing in Python and Cython.

NumPy is the fundamental package needed for scientific computing with Python.
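As a quick illustration of what NumPy provides, the sketch below shows whole-array arithmetic and broadcasting (the array names and values are just for demonstration):

```python
import numpy as np

# Element-wise arithmetic on whole arrays, no explicit Python loops needed
a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
total = a + b              # element-wise sum: [11., 22., 33.]
scaled = a * 2             # broadcasting: the scalar applies to every element
mean_value = total.mean()  # 22.0
```

Operations like these run in optimized C loops under the hood, which is why NumPy is the foundation for most scientific Python libraries.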

Pillow is a friendly PIL (Python Imaging Library) fork.
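A minimal sketch of the Pillow API, creating and inspecting a tiny image entirely in memory (the size and color here are arbitrary examples):

```python
from PIL import Image

# Create a 4x4 solid-red RGB image in memory and inspect it
img = Image.new("RGB", (4, 4), color=(255, 0, 0))
width, height = img.size         # (4, 4)
top_left = img.getpixel((0, 0))  # (255, 0, 0)
```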

IPython is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history.

GraphLab Create is a Python library, backed by a C++ engine, for quickly building large-scale, high-performance machine learning models.

Pandas is a fast, powerful, and easy-to-use open source data analysis and manipulation tool providing rich data structures, built on top of the Python programming language.
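For a flavor of the Pandas workflow, the sketch below builds a small DataFrame and runs a split-apply-combine aggregation (the column names and values are invented for illustration):

```python
import pandas as pd

# A small DataFrame and a groupby aggregation
df = pd.DataFrame({
    "language": ["Python", "Java", "Python", "R"],
    "stars": [100, 80, 50, 20],
})
# Sum the "stars" column per language
stars_by_language = df.groupby("language")["stars"].sum()
```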

PuLP is a Linear Programming modeler written in Python. PuLP can generate LP files and call highly optimized solvers (GLPK, COIN CLP/CBC, CPLEX, and GUROBI) to solve these linear problems.

Matplotlib is a 2D plotting library for creating static, animated, and interactive visualizations in Python. Matplotlib produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
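A minimal sketch of producing one of those hardcopy formats with Matplotlib, rendering a line plot to PNG bytes with the non-interactive Agg backend (the data points are arbitrary):

```python
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend, no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()

# Save the figure as PNG into an in-memory buffer
buf = io.BytesIO()
fig.savefig(buf, format="png")
png_bytes = buf.getvalue()
```

Swapping `format="png"` for `"pdf"` or `"svg"` is how the same figure is exported to other publication formats.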

Scikit-Learn is a simple and efficient tool for data mining and data analysis. It is built on NumPy, SciPy, and Matplotlib.
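The scikit-learn fit/predict pattern can be sketched on a tiny toy dataset (the feature values and labels below are invented for illustration):

```python
from sklearn.linear_model import LogisticRegression

# Four one-feature samples with binary labels: a tiny, linearly separable task
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

# Every scikit-learn estimator follows the same fit/predict interface
model = LogisticRegression().fit(X, y)
predictions = model.predict([[0.5], [2.5]])
```

The same two-method interface applies across the library's classifiers, regressors, and clusterers, which is what makes swapping models so easy.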

Java Development

Back to the Top

 
 

Java Learning Resources

Java is a popular programming language and development platform (JDK). It reduces costs, shortens development timeframes, drives innovation, and improves application services. Millions of developers run more than 51 billion Java Virtual Machines worldwide.

The Eclipse Foundation is home to a worldwide community of developers, the Eclipse IDE, Jakarta EE and over 375 open source projects, including runtimes, tools and frameworks for Java and other languages.

Getting Started with Java

Oracle Java certifications from Oracle University

Google Developers Training

Google Developers Certification

Java Tutorial by W3Schools

Building Your First Android App in Java

Getting Started with Java in Visual Studio Code

Google Java Style Guide

AOSP Java Code Style for Contributors

Chromium Java style guide

Get Started with OR-Tools for Java

Getting started with Java Tool Installer task for Azure Pipelines

Gradle User Manual

Java Tools and Frameworks

Java SE contains several tools to assist in program development and debugging, and in the monitoring and troubleshooting of production applications.

JDK Development Tools include the Java Web Start Tools (javaws); Java Troubleshooting, Profiling, Monitoring and Management Tools (jcmd, jconsole, jmc, jvisualvm); and Java Web Services Tools (schemagen, wsgen, wsimport, xjc).

Android Studio is the official integrated development environment for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. Available on Windows, macOS, Linux, and Chrome OS.

IntelliJ IDEA is an IDE for Java, but it also understands and provides intelligent coding assistance for a large variety of other languages such as Kotlin, SQL, JPQL, HTML, JavaScript, etc., even if the language expression is injected into a String literal in your Java code.

NetBeans is an IDE that provides Java developers with all the tools needed to create, edit, and refactor professional desktop, mobile, and enterprise applications. The IDE provides wizards and templates to let you create Java EE, Java SE, and Java ME applications.

Java Design Patterns is a collection of the best formalized practices a programmer can use to solve common problems when designing an application or system.

Elasticsearch is a distributed RESTful search engine built for the cloud written in Java.

RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences. It extends the observer pattern to support sequences of data/events and adds operators that allow you to compose sequences together declaratively while abstracting away concerns about things like low-level threading, synchronization, thread-safety and concurrent data structures.

Guava is a set of core Java libraries from Google that includes new collection types (such as multimap and multiset), immutable collections, a graph library, and utilities for concurrency, I/O, hashing, caching, primitives, strings, and more! It is widely used on most Java projects within Google, and widely used by many other companies as well.

okhttp is an HTTP client for Java and Kotlin developed by Square.

Retrofit is a type-safe HTTP client for Android and Java developed by Square.

LeakCanary is a memory leak detection library for Android developed by Square.

Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities with elegant and fluent APIs in Java and Scala.

Fastjson is a Java library that can be used to convert Java Objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object.

libGDX is a cross-platform Java game development framework based on OpenGL (ES) that works on Windows, Linux, Mac OS X, Android, your WebGL enabled browser and iOS.

Jenkins is the leading open-source automation server. Built with Java, it provides over 1700 plugins to support automating virtually anything, so that humans can actually spend their time doing things machines cannot.

DBeaver is a free multi-platform database tool for developers, SQL programmers, database administrators and analysts. Supports any database which has JDBC driver (which basically means - ANY database). EE version also supports non-JDBC datasources (MongoDB, Cassandra, Redis, DynamoDB, etc).

Redisson is a Redis Java client with features of In-Memory Data Grid. Over 50 Redis based Java objects and services: Set, Multimap, SortedSet, Map, List, Queue, Deque, Semaphore, Lock, AtomicLong, Map Reduce, Publish / Subscribe, Bloom filter, Spring Cache, Tomcat, Scheduler, JCache API, Hibernate, MyBatis, RPC, and local cache.

GraalVM is a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Clojure, Kotlin, and LLVM-based languages such as C and C++.

Gradle is a build automation tool for multi-language software development. From mobile apps to microservices, from small startups to big enterprises, Gradle helps teams build, automate and deliver better software, faster. Write in Java, C++, Python or your language of choice.

Apache Groovy is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at improving developer productivity thanks to a concise, familiar and easy to learn syntax. It integrates smoothly with any Java program, and immediately delivers to your application powerful features, including scripting capabilities, Domain-Specific Language authoring, runtime and compile-time meta-programming and functional programming.

JaCoCo is a free code coverage library for Java, which has been created by the EclEmma team based on the lessons learned from using and integrating existing libraries for many years.

Apache JMeter is used to test performance on both static and dynamic resources and dynamic web applications. It is also used to simulate a heavy load on a server, group of servers, network, or object to test its strength or to analyze overall performance under different load types.

Junit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.

Mockito is the most popular Mocking framework for unit tests written in Java.

SpotBugs is a program which uses static analysis to look for bugs in Java code.

SpringBoot is a great tool that helps you to create Spring-powered, production-grade applications and services with absolute minimum fuss. It takes an opinionated view of the Spring platform so that new and existing users can quickly get to the bits they need.

YourKit is a technology leader, creator of the most innovative and intelligent tools for profiling Java & .NET applications.

R Development

Back to the Top

 
 

R Learning Resources

R is an open source software environment for statistical computing and graphics. It compiles and runs on a wide variety of platforms such as Windows and macOS.

An Introduction to R

Google's R Style Guide

R developer's guide to Azure

Running R at Scale on Google Compute Engine

Running R on AWS

RStudio Server Pro for AWS

Learn R by Codecademy

Learn R Programming with Online Courses and Lessons by edX

R Language Courses by Coursera

Learn R For Data Science by Udacity

R Tools, Libraries, and Frameworks

RStudio is an integrated development environment for R and Python, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management.

Shiny is a newer package from RStudio that makes it incredibly easy to build interactive web applications with R.

Rmarkdown is a package that helps you create dynamic analysis documents combining code, rendered output (such as figures), and prose.

Rplugin is an R-language support plugin for the IntelliJ IDEs.

Plotly is an R package for creating interactive web graphics via the open source JavaScript graphing library plotly.js.

Metaflow is a Python/R library that helps scientists and engineers build and manage real-life data science projects. Metaflow was originally developed at Netflix to boost productivity of data scientists who work on a wide variety of projects from classical statistics to state-of-the-art deep learning.

Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data.

LightGBM is a gradient boosting framework that uses tree based learning algorithms, used for ranking, classification and many other machine learning tasks.

Dash is a Python framework for building analytical web applications in Python, R, Julia, and Jupyter.

MLR is Machine Learning in R.

ML workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and gets you started within minutes to productively build ML solutions on your own machines. ML workspace is the ultimate tool for developers, preloaded with a variety of popular data science libraries (Tensorflow, PyTorch, Keras, and MXnet) and dev tools (Jupyter, VS Code, and Tensorboard) perfectly configured, optimized, and integrated.

CatBoost is a fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

Plumber is a tool that allows you to create a web API by merely decorating your existing R source code with special comments.

Drake is an R-focused pipeline toolkit for reproducibility and high-performance computing.

DiagrammeR is a package with which you can create, modify, analyze, and visualize network graph diagrams. The output can be incorporated into R Markdown documents, integrated with Shiny web apps, converted to other graph formats, or exported as image files.

Knitr is a general-purpose literate programming engine in R, with lightweight APIs designed to give users full control of the output without heavy coding work.

Broom is a tool that converts statistical analysis objects from R into tidy format.

Julia Development

Back to the Top

 
 

Julia Learning Resources

Julia is a high-level, high-performance dynamic language for technical computing. Julia programs compile to efficient native code for multiple platforms via LLVM.

JuliaHub contains over 4,000 Julia packages for use by the community.

Julia Observer

Julia Manual

JuliaLang Essentials

Julia Style Guide

Julia By Example

JuliaLang Gitter

DataFrames Tutorial using Jupyter Notebooks

Julia Academy

Julia Meetup groups

Julia on Microsoft Azure

Julia Tools, Libraries and Frameworks

JuliaPro is a free and fast way to set up Julia for individual researchers, engineers, scientists, quants, traders, economists, students and others. Julia developers can build better software quicker and easier while benefiting from Julia's unparalleled high performance. It includes 2,600+ open source packages as well as a curated list of 250+ JuliaPro packages. Curated packages are tested, documented, and supported by Julia Computing.

Juno is a powerful, free IDE based on Atom for the Julia language.

Debugger.jl is the Julia debugging tool.

Profile (Stdlib) is a module that provides tools to help developers improve the performance of their code. When used, it takes measurements on running code and produces output that helps you understand how much time is spent on individual lines.

Revise.jl allows you to modify code and use the changes without restarting Julia. With Revise, you can be in the middle of a session and then update packages, switch git branches, and/or edit the source code in the editor of your choice; any changes will typically be incorporated into the very next command you issue from the REPL. This can save you the overhead of restarting Julia, loading packages, and waiting for code to JIT-compile.

JuliaGPU is a Github organization created to unify the many packages for programming GPUs in Julia. With its high-level syntax and flexible compiler, Julia is well positioned to productively program hardware accelerators like GPUs without sacrificing performance.

IJulia.jl is the Julia kernel for Jupyter.

AWS.jl is a Julia interface for Amazon Web Services.

CUDA.jl is a package for the main programming interface for working with NVIDIA CUDA GPUs using Julia. It features a user-friendly array abstraction, a compiler for writing CUDA kernels in Julia, and wrappers for various CUDA libraries.

XLA.jl is a package for compiling Julia to XLA for Tensor Processing Units (TPUs).

Nanosoldier.jl is a package for running JuliaCI services on MIT's Nanosoldier cluster.

Julia for VSCode is a powerful extension for the Julia language.

JuMP.jl is a domain-specific modeling language for mathematical optimization embedded in Julia.

Optim.jl is a package for univariate and multivariate optimization in Julia.

RCall.jl is a package that allows you to call R functions from Julia.

JavaCall.jl is a package that allows you to call Java functions from Julia.

PyCall.jl is a package that allows you to call Python functions from Julia.

MXNet.jl is the Apache MXNet Julia package. MXNet.jl brings flexible and efficient GPU computing and state-of-art deep learning to Julia.

Knet is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia.

Distributions.jl is a Julia package for probability distributions and associated functions.

DataFrames.jl is a tool for working with tabular data in Julia.

Flux.jl is an elegant approach to machine learning. It's a 100% pure-Julia stack, and provides lightweight abstractions on top of Julia's native GPU and AD support.

IRTools.jl is a simple and flexible IR format, expressive enough to work with both lowered and typed Julia code, as well as external IRs.

Cassette.jl is a Julia package that provides a mechanism for dynamically injecting code transformation passes into Julia’s just-in-time (JIT) compilation cycle, enabling post hoc analysis and modification of "Cassette-unaware" Julia programs without requiring manual source annotation or refactoring of the target code.

Contribute

  •  If you would like to contribute to this guide, simply make a Pull Request.

License

Back to the Top

Distributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) Public License.


Download details:

Author: mikeroyal
Source code: https://github.com/mikeroyal/NLP-Guide


 Natural Language Processing (NLP) Guide
Ella Windler

The Combination Of ChatGPT and Google Search Engine

There has been a lot of discussion and debate surrounding ChatGPT, as there is with any new bot. Some people support it while others are opposed to it. In my experience, ChatGPT has been a helpful tool, although it is not particularly intelligent and its data only goes up until 2021. Despite this, it has assisted me in my work and I have even developed an extension that combines ChatGPT with the Google search engine. I have recommended it to friends who have also had positive experiences with it. However, I still want to share my thoughts about ChatGPT with all of you and get your direct feedback on it.


Hope you'll give it a try and leave me a comment.

Link to set up extension

Original article sourced at: https://dev.to


The Combination Of ChatGPT and Google Search Engine

Chatgptcli: The Command Line Wrapper for ChatGPT

chatgptcli

The command line wrapper for ChatGPT

ChatGPT-PyPi

Install

pip install chatgptcli --upgrade

Terminal Chat

from chatgptcli import ChatGPT

ChatGPT(email="email", password="password").chat()

Ask a Question

from chatgptcli import ChatGPT

chatGPT = ChatGPT(email="email", password="password")
print(chatGPT.ask("Write a rap song"))

Download details:

Author: miscup
Source code: https://github.com/miscup/chatgptcli


Chatgptcli: The Command Line Wrapper for ChatGPT

Sample Code to Create Chatgpt Bot on Telegram

chatgpt2telegram

Sample code to create chatgpt bot on telegram

inspired by [acheong08](https://github.com/acheong08/ChatGPT)

First, update telegram_token with your Telegram bot token, then run the bot and input anything.

The bot needs a ChatGPT Bearer token; refer to the steps below.

If no ChatGPT token has been provided, the bot will reply with a message asking for input in the format auth <auth token>. After you send auth <auth token>, the bot is ready to communicate with ChatGPT.

Get your Bearer token

Go to https://chat.openai.com/chat and log in or sign up

  1. Open console with F12
  2. Go to Network tab in console
  3. Find session request (Might need refresh)
  4. Copy accessToken value to config.json.example as Authorization
  5. Save as config.json (in the current working directory)
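The steps above can be sketched in Python: save the copied accessToken into config.json, then load it back to build the Bearer header the bot would send. The layout beyond the Authorization key, and the placeholder token value, are assumptions for illustration.

```python
import json

# Step 4-5: store the copied accessToken under the "Authorization" key
config = {"Authorization": "your-access-token"}  # placeholder value
with open("config.json", "w") as f:
    json.dump(config, f)

# Later, the bot loads the file and builds the Bearer header
with open("config.json") as f:
    loaded = json.load(f)
headers = {"Authorization": f"Bearer {loaded['Authorization']}"}
```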

It is recommended to use https://github.com/acheong08/ChatGPT; this repo also includes login.


Download details:

Author: robertyu
Source code: https://github.com/robertyu/chatgpt2telegram


Sample Code to Create Chatgpt Bot on Telegram