MindsDB ML-SQL Server enables machine learning workflows for the most powerful databases and data warehouses using SQL.
NEW! Check out the rewards and community programs.
You can try the MindsDB ML SQL server here (demo).
To install the latest version of MindsDB please pull the following Docker image:
docker pull mindsdb/mindsdb
Or, use PyPI:
pip install mindsdb
MindsDB automates and abstracts machine learning models through virtual AI Tables:
Apart from abstracting ML models as AI Tables inside databases, MindsDB has a set of unique capabilities:
Easily make predictions over very complex multivariate time-series data with high cardinality
An open JSON-AI syntax to tune ML models and optimize ML pipelines in a declarative way
Let MindsDB connect to your database.
Train a Predictor using a single SQL statement (make MindsDB learn from historical data automatically) or import your ML model to a Predictor via JSON-AI.
Make predictions with SQL statements (Predictor is exposed as virtual AI Tables). There’s no need to deploy models since they are already part of the data layer.
MindsDB works with most SQL and NoSQL databases, as well as data streams for real-time ML.
You can find the complete documentation of MindsDB at docs.mindsdb.com.
If you found a bug, please submit an issue on GitHub.
To get community support, you can:
If you need commercial support, please contact the MindsDB team.
A great place to start contributing to MindsDB is our GitHub projects:
Also, we are always open to suggestions so feel free to open new issues with your ideas and we can guide you!
Being part of the core team is accessible to anyone who is motivated and wants to be part of that journey! If you'd like to contribute to the project, refer to the contributing documentation.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project, you agree to abide by its terms.
Subscribe to MindsDB Monthly Community Newsletter to get general announcements, release notes, information about MindsDB events, and the latest blog posts. You may also join our beta-users group, and get access to new beta features.
Text-to-image Playground (fka DALL-E Playground)
A playground for text-to-image enthusiasts using Stable Diffusion V2.
The original version of this repository used DALL-E Mini. With the recent release of Stable Diffusion (SD) V2 and the ease of implementation - this repository has moved to use SD over DALL-E Mini.
Also see Whisper Playground - a playground for building real-time speech2text web apps using OpenAI's Whisper
You can tinker with the DALL-E playground using a GitHub-hosted frontend. Follow these steps:
your url is: https://XXXX.trycloudflare.com
--> Image generation server is up and running!
The backendUrl query parameter should be the URL from the previous step.
General note: while it is possible to run the backend on the free tier of Google Colab, generating more than ~2 images can take over a minute, which will result in a frontend timeout. Consider upgrading to Colab Pro or running the backend notebook on a stronger ML machine (e.g. an AWS EC2 instance).
Follow these steps in case you'd like to clone and run the DALL-E Playground locally:
cd backend && python3 -m venv ENV_NAME
pip install -r requirements.txt
python3 app.py --port 8080 (you can change 8080 to your own port)
cd interface && npm install and run it
Windows' WSL2 Linux layer has some unique issues getting up and running with GPU support. NVIDIA CUDA drivers are installed on the Windows side instead of Linux, but jax does not see the GPU without being compiled from source. Here are extra instructions to get jax compiled:
When compiling jax from source, remember to enable CUDA during compilation with python3 build/build.py --enable_cuda (see the build instructions).
When building jaxlib, you might hit a broken configuration file; a solution is described at https://github.com/google/jax/issues/11068
WSL2 installs are fairly bare bones; expect to install packages like python3-pip and many others to get things working. More troubleshooting here
Clone or fork this repository
Run docker-compose up; add the -d flag if you'd like to run it in the background.
The first run will take some time to download the images, models, and other dependencies; these are downloaded only once and then cached.
Copy the backend's URL from step 2 and paste it into the backend URL input within the web app.
The original repository used @borisdayma's DALL-E Mini.
In this tutorial, we'll set up a simple example teaching you the exact steps needed to make use of ChatGPT in your Python program.
ChatGPT is a variant of the GPT-3 language model, specifically designed for conversational language generation. To use ChatGPT in Python, you will need to install the OpenAI API client and obtain an API key.
Let’s get started. First create a new empty project folder:
$ mkdir python-chatgpt
$ cd python-chatgpt
In the next step we need to install the OpenAI API client library for Python.
To install the OpenAI API client library for Python, you will need to have Python and pip, the Python package manager, installed on your system.
To install the library, open a terminal or command prompt and type the following command:
$ pip install openai
This will install the OpenAI API client and all of its dependencies.
Once the installation is complete, you can import the library in your Python code by adding the following line at the top of your script:
Let's do so: create a new file chat.py in the project folder and start by inserting the import statement at the top of the file.
Let’s extend the Python code further and use the OpenAI client library to interact with the artificial intelligence:
import openai

# Set up the OpenAI API client
openai.api_key = "YOUR_API_KEY"

# Set up the model and prompt
model_engine = "text-davinci-003"
prompt = "Hello, how are you today?"

# Generate a response
completion = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=1024,
    n=1,
    stop=None,
    temperature=0.5,
)

response = completion.choices[0].text
print(response)
This will generate a response to the prompt Hello, how are you today? using the ChatGPT model. The response will be returned as a string in the response variable.
You can customize the behavior of the model by adjusting the temperature parameter, which controls the level of randomness in the generated text. A higher temperature will result in more varied and potentially less coherent responses, while a lower temperature will produce responses that are more predictable and potentially more coherent.
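To build some intuition for what temperature does: language models divide their logits by the temperature before sampling, which is why lower values produce more predictable text. Here is a toy sketch of the idea (this is an illustration of the concept, not OpenAI's actual implementation):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Scale logits by temperature, then normalize into a probability distribution
    scaled = np.array(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # low temperature: peakier, more predictable
print(softmax_with_temperature(logits, 2.0))  # high temperature: flatter, more random
```

With temperature 0.5 the highest-scoring token dominates the distribution; with temperature 2.0 the probabilities flatten out and sampling becomes more varied.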
You can also use the stop parameter to specify a string or sequence of strings that, if encountered in the generated text, will cause the model to stop generating further text. This can be useful for controlling the length of the generated text or for ensuring that the model does not generate inappropriate content.
YOUR_API_KEY is a placeholder for your actual API key.
To use the OpenAI API, you will need to sign up for an account and obtain an API key. You can do this by visiting the OpenAI website and clicking on the Get an API key button.
Once you have obtained your API key, you will need to replace YOUR_API_KEY in the Python code with your actual API key. This will allow the API client to authenticate your requests to the OpenAI API.
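A common pattern (my suggestion, not part of the original snippet) is to read the key from an environment variable rather than hard-coding it, so it never lands in version control:

```python
import os

# Read the key from an environment variable instead of hard-coding it
# (set it in your shell first: export OPENAI_API_KEY="sk-...")
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY is not set")

# then hand it to the client as before:
# openai.api_key = api_key
```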
Let’s try it out. Change to the command line again and execute the Python script by typing in:
$ python chat.py
In the following screenshot you can see that ChatGPT responds with an answer to the question Hello, how are you today? which we submitted:
That is how easy it is to use ChatGPT functionality from within your Python program. Feel free to extend this example and adapt it to your requirements. Have fun!
This is a CodingTheSmartWay tutorial.
#chatgpt #python #gpt3 #ai #openai #artificialintelligence #webdevelopment #programming
This Machine Learning Full Course will help you understand and learn machine learning algorithms in detail. The topics covered in this Machine Learning Roadmap course include: how to become a Machine Learning Engineer, machine learning algorithms, the Linear Regression algorithm, the Logistic Regression algorithm, Linear Regression vs. Logistic Regression, the Decision Tree algorithm, Random Forest, and more.
This Edureka Machine Learning Full Course video will help you understand and learn Machine Learning Algorithms in detail. This Machine Learning Tutorial is ideal for both beginners as well as professionals who want to master Machine Learning Algorithms. Below are the topics covered in this Machine Learning Roadmap course:
#datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience
In this video on Machine Learning with a full Python course, you will understand the basics of machine learning, its essential applications and concepts, and why mathematics, statistics, and linear algebra are crucial. We'll also learn about regularization, dimensionality reduction, and PCA. We will perform a prediction analysis on the recently held US elections. Finally, you will study the Machine Learning roadmap for 2023.
#python #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience
This Data Science Full Course will help you understand and learn data science algorithms in detail. You'll learn: What is Data Science? What is Machine Learning? The Data Scientist roadmap, statistics and probability, Linear Regression, Logistic Regression, the Decision Tree algorithm, Random Forest, the KNN algorithm, the Naive Bayes classifier, and more.
This Edureka Data Science Full Course video will help you understand and learn Data Science Algorithms in detail. This Data Science Tutorial is ideal for both beginners as well as professionals who want to master Data Science Algorithms.
Below are the topics covered in this Data Science Full course tutorial:
#datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience #datastructures #algorithms
Learn Python Programming in this full course for beginners. You'll learn: Python Installation, Object-oriented programming, Type Hierarchy, Operator Precedence, Conditional Statements, Iterators, Modules, Packages, Generators, List Comprehensions, Regular Expression, Serialization, Closures & decorators, and more
This Python programming full course video will cover all the basics to advanced concepts of Python programming. This Python Full Course contains real-life examples and in-depth, hands-on demos.
Python is a high-level, object-oriented programming language developed by Guido van Rossum in 1989 and first released in 1991. Python is often called a "batteries included" language due to its comprehensive standard library. A fun fact about Python is that the name was taken from the popular BBC comedy show of that time, "Monty Python's Flying Circus." Python is widely used in data analytics, machine learning, and web development these days. Python allows you to write programs in fewer lines of code than most programming languages, and as a programming language it is growing rapidly. It's the right time to get trained in Python.
Following are the standard or built-in data types of Python:
1. Numeric data types
2. Text data type
3. Sequence data type
4. Mapping data type
5. Set data type
6. Boolean data type
7. Binary data type
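The seven categories above correspond to concrete built-in types. Here is a minimal sketch with one example of each (the variable names are my own):

```python
# 1. Numeric: int, float, complex
n_int, n_float, n_complex = 42, 3.14, 2 + 3j

# 2. Text: str
text = "hello"

# 3. Sequence: list, tuple, range
seq_list, seq_tuple, seq_range = [1, 2], (1, 2), range(3)

# 4. Mapping: dict
mapping = {"key": "value"}

# 5. Set: set, frozenset
s, fs = {1, 2, 3}, frozenset({1, 2})

# 6. Boolean: bool
flag = True

# 7. Binary: bytes, bytearray
raw, buf = b"bytes", bytearray(b"bytes")

print(type(n_int).__name__, type(text).__name__, type(mapping).__name__)  # int str dict
```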
#python #programming #developer #programmer #coding #softwaredeveloper #computerscience #webdev #webdeveloper #webdevelopment #datascience #machinelearning #deeplearning #ai #artificialintelligence
Artificial Intelligence for social control of public administration
Serenata de Amor is an open project using artificial intelligence for social control of public administration.
We are a group of people who believe in the "power to the people" motto. We are also part of the Data Science for Civic Innovation Programme from Open Knowledge Brasil.
Among the founders and long-term members (plus numerous contributors from the open source and open knowledge communities) are: Tatiana Balachova, Felipe Cabral, Eduardo Cuducos, Irio Musskopf, Bruno Pazzim, Ana Schwendler, Jessica Temporal, Yasodara Córdova, and Pedro Vilanova.
Similar to organizations like Google, Facebook, and Netflix, we use technology to track government spending and make open data accessible for everyone. We started looking into data from the Chamber of Deputies (the Brazilian lower house) but expanded to the Federal Senate (the Brazilian upper house) and to municipalities.
Irio had the main ideas for the project in early 2016. For a few months, he experimented and gathered people around the project. September 2016 marked the launch of our first crowdfunding campaign. Since then, we have been creating open source technological products and tools, as well as high-quality content on civic tech on our Facebook and Medium.
We have no non-virtual headquarters, but we work remotely every day. Most of our ideas are crafted to work in any country that offers open data, but our main implementations focus on Brazil.
Empowering citizens with data is important: people talk about smart cities, surveillance and privacy. We prefer to focus on smart citizens, accountability and open knowledge.
Serenata de Amor develops open source tools to make it easy for people to use open data. The focus is on gathering relevant insights and sharing them through an accessible interface. Through this interface, we invite citizens to dialogue with politicians, the state, and the government about public spending.
Serenata's main role is played by Rosie: she is an artificial intelligence who analyzes Brazilian congresspeople's expenses while they are in office. Rosie can find suspicious spending and engage citizens in the discussion about these findings. She's on Twitter.
To allow people to visualize and make sense of the data Rosie generates, we have created Jarbas. On this website, users can browse congresspeople's expenses and get details about each suspicion. It is the starting point for validating a suspicion.
We have two main repositories on GitHub. This is the main repo, hosting Rosie and Jarbas. In addition, we have the toolbox, a pip-installable package. There are also experimental notebooks maintained by the community, and our static webpage.
Despite all these players acting together, the core part of the job is run manually from time to time. The only part that is always online is Jarbas, freely serving a wide range of information about public expenditure 24/7.
Roughly once a month, we manually run Rosie and update Jarbas. A few times per year, we upload versioned datasets accessible via the toolbox – but we encourage you to use the toolbox to generate fresh datasets whenever you need.
The answer to most technical "why" questions is: because that is what we had in the past, and it enabled us to deliver fast. We acknowledge that this is not the best stack ever, but it has brought us here.
Make sure you have read the Tech crash course on this page. Next, check out our contributing guide.
Python with AI: From Zero to Hero. Get started with Python programming. Understand Python fundamentals such as print, input, data conversion, and if statements. Be ready to learn advanced Python programming.
Python programming is fun and useful, but starting from zero can be intimidating.
This course is designed to remove the intimidation factor of Python programming. It is for elementary or secondary students who are curious about Python programming, or who are scared of programming. The course provides bite-size videos; each video covers one topic in about 10 minutes. The coding demos show every step, so you can easily follow along and start to write Python programs.
There is no jargon, just plain explanations!
If you're an adult and want to have a taste of Python programming, this is for you as well. Who would mind a quick and easy start?
The students will
After the course, the students will be able to:
#python #ai #artificialintelligence
Machine Learning Real World Projects with Python. Create classical machine learning algorithms in Python. Solve business problems using data science practically. Build and deploy machine learning, data science, artificial intelligence, AutoML, deep learning, and natural language processing (NLP) web application projects with Python (Flask, Django, Heroku).
Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data. Data science practitioners apply machine learning algorithms to numbers, text, images, video, audio, and more to produce artificial intelligence (AI) systems to perform tasks that ordinarily require human intelligence. In turn, these systems generate insights which analysts and business users can translate into tangible business value.
More and more companies are coming to realize the importance of data science, AI, and machine learning. Regardless of industry or size, organizations that wish to remain competitive in the age of big data need to efficiently develop and implement data science capabilities or risk being left behind.
In This Course, We Are Going To Work On 2 Real World Projects Listed Below:
The Only Course You Need To Become A Data Scientist, Get Hired And Start A New Career
#python #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience
In this tutorial, you'll learn: What is ChatGPT? What Can ChatGPT Do? Testing ChatGPT, Have ChatGPT Build a To-Do App using Next.js and Tailwind CSS, A Short Story Written by ChatGPT & Generated by Descript
What is ChatGPT? Can it Help You? It built a To-Do App.. Sorta 🤔
Everyone is talking about ChatGPT, so I had to give it a try. Let’s see what it’s capable of.
What’s ChatGPT? Let’s ask ChatGPT what it is..
It says it’s a variant of the Generative Pre-trained Transformer language model that is fine-tuned for chatbot-style communication.
It’s part of OpenAI and the GPT-3 family.
It goes on to explain what GPT-3 is.
ChatGPT is specifically for conversations and can be used in chat-bots.
And it uses context-aware responses taking into account previous exchanges in conversation.
What can ChatGPT help us to do? Here are some examples:
- Explain quantum computing in simple terms.
- Write a short story.
- Build an entire application in any programming language!!
⏱️ Timestamps ⏱️
00:00 - Intro
00:10 - What is ChatGPT?
01:09 - What Can ChatGPT Do?
01:56 - Testing ChatGPT
02:49 - Have ChatGPT Build a To-Do App using Next.js and Tailwind CSS
11:25 - A Short Story Written by ChatGPT & Generated by Descript
#artificialintelligence #chatgpt #learntocode
In this tutorial, you'll learn Generative Adversarial Networks (GANs) with PyTorch. Building a GAN from scratch with PyTorch
I really love Generative Learning and Generative Adversarial Networks. These amazing models can generate high-quality images (and not only images). I am an AI researcher, and I would like to share with you all my practical experience with GANs.
Generative Adversarial Networks were invented in 2014, and since then they have been a breakthrough in deep learning for the generation of new objects. Now, in 2019, there exist around a thousand different types of Generative Adversarial Networks, and it seems impossible to study them all.
I have worked with GANs for several years, since 2015. Now I can share all my experience with you, going from the classical algorithm to advanced techniques and state-of-the-art models. I have also added a section on different applications of GANs: super-resolution, text-to-image translation, image-to-image translation, and others.
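To give a concrete taste of the classical algorithm, here is a minimal GAN training step sketched in PyTorch. The network sizes, stand-in data, and hyperparameters are toy values of my own choosing, not taken from the course:

```python
import torch
import torch.nn as nn

# Generator: maps a 64-dim noise vector to a fake "sample" (here a 784-dim vector)
G = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

# Discriminator: maps a sample to the probability that it is real
D = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, 784)  # stand-in for a batch of real data

# Discriminator step: push D(real) toward 1 and D(fake) toward 0
fake = G(torch.randn(16, 64)).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: push D(fake) toward 1, i.e. try to fool the discriminator
fake = G(torch.randn(16, 64))
loss_g = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a real training loop these two steps alternate over batches of actual data (e.g. MNIST images), and the two losses play the minimax game that defines the GAN objective.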
This course has rather strong prerequisites:
Here are tips for taking most from the course:
#gans #pytorch #python #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience
Pair Programming with the ChatGPT AI, learn how well does GPT-3.5 understand Bash. Learn how ChatGPT handles Linux Bash scripting
Can I use the latest ChatGPT AI natural language processor to generate effective Bash code? Here's the result of my fascinating experiment. What came out the other end wasn't quite good enough, but it taught me a lot about how insanely useful the AI can be.
00:00 What's going on with the new ChatGPT thing?
01:05 Defining the problem I want to solve with the script
02:05 The first attempt by ChatGPT
02:53 The problem with the first attempt
03:10 ChatGPT incorporates the basename tool
03:35 ChatGPT reads the yt-dlp documentation
04:19 ChatGPT gets confused
04:35 ChatGPT gives up
04:53 An actual working yt-dlp script using awk
We've been hearing a lot about artificial intelligence and natural language processing – and in particular about the latest version of OpenAI's GPT – for weeks.
The recent release of GPT 3.5, and specifically the very new ChatGPT tool, is definitely a huge leap forward.
You may have read about people asking the AI to draw up real estate contracts and wills, and even generate programming code. This article approaches the tool from a slightly different perspective.
I decided to see how ChatGPT handles Linux Bash scripting. But (spoiler alert!) as I worked through the tool's suggestions - both good code and bad code - I began to realize that there was an even more powerful benefit staring me in the face: an opportunity to pair-program with an eminently helpful partner.
The AI ultimately failed to solve my problem, but the way it failed was absolutely fascinating. And watching the AI searching for new answers was absolutely crazy.
So let's see how it all happened. Here, more or less, is the initial prompt I provided to the AI:
Write me a Bash script that prompts for the URL of a YouTube video and then uses the yt-dlp program to download from that URL. The script should then change the file name to "myvideo.mp4"
After a few seconds, ChatGPT told me that a script was coming. Here's how that code looked:
#!/bin/bash
# Prompt user for YouTube video URL
read -p "Enter YouTube video URL: " url
# Download video from YouTube using yt-dlp
yt-dlp $url
# Change the downloaded video filename to "myvideo.mkv"
mv video-filename.mp4 myvideo.mp4
The AI obviously knows what Bash scripts are and how to present code. It even did a great job commenting each line of code. The AI knows how to properly prompt the user for input and then how to import the input string into a variable called url.
It also knows about the yt-dlp program and how it works, appropriately feeding it the URL that was provided. Then it at least attempted to rename the downloaded video file to myvideo.mp4. But everything kind of broke down when it assumed that the download filename would be video-filename.mp4. Whoops.
At any rate, the AI helpfully showed me how to create the script from its code and then run it.
My first follow-up prompt pointed out the obvious mistake and, after a few seconds, ChatGPT admitted it was wrong and offered an improvement.
You're right, the script I provided would not work as expected because it assumes that the downloaded video will be saved to a file with the name video-filename.mp4. However, this is not guaranteed to be the case. To fix this...
It's just mind-blowing how the AI is completely engaged in the process here. It remembers its first code, listens to and understands my complaint, and thinks through a solution. Here's that second attempt:
#!/bin/bash
# Prompt user for YouTube video URL
read -p "Enter YouTube video URL: " url
# Download video from YouTube using yt-dlp
yt-dlp $url
# Get the name of the downloaded video file
filename=$(basename $url)
# Change the downloaded video filename to "myvideo.mkv"
mv $filename.mp4 myvideo.mp4
Ok, so I can see how the solution kind of looked right.
basename is a Linux tool that can strip out file system directories and leave nothing but the actual filename.
The problem is that the variable url doesn't contain a local filename, but an internet address. This one definitely isn't going to work.
So I tried again, explaining how that wasn't the right tool for the job. And ChatGPT gives it another try.
#!/bin/bash
# Prompt user for YouTube video URL
read -p "Enter YouTube video URL: " url
# Extract the filename from the URL
filename=$(basename $url)
# Download the video using the yt-dlp tool and specify the filename
yt-dlp -o $filename $url
# Change the downloaded video filename to "myvideo.mkv"
mv $filename.mp4 myvideo.mp4
This time, it shows that it had familiarized itself with the yt-dlp documentation, suggesting that we use the -o argument to give the downloading video a name, against which basename should now work. With each iteration, the AI gives me this updated code for my cutting and pasting pleasure.
Of course, that wasn't going to work either. With the next attempt, I get the feeling that the AI is a bit desperate.
I'm going to skip past the next attempt or two, as we ran into a side problem: Videos coming from YouTube aren't always MP4s, they're sometimes MKVs or even WEBMs. That uncertainty could mess with the filenames we're working with, so I asked ChatGPT to work that into the code.
#!/bin/bash
# Prompt user for YouTube video URL
read -p "Enter YouTube video URL: " url
# Extract the filename from the URL
filename=$(basename $url)
# Download the video using the yt-dlp tool and specify the filename
yt-dlp -F mp4 -O $filename $url
# Change the downloaded video filename to "myvideo.mkv"
mv $filename.mp4 myvideo.mp4
When I reported that this, too, wouldn't work, the AI started to get confused. First it told me that the -F option was expecting a valid video format, but that "mp4" wasn't valid. Then it gave me valid options, including "mp4".
When nothing the AI suggested worked, it gave up, advising me to make sure that yt-dlp was actually installed on my machine and showing me how to do that.
I figured that was a good time to give up. It was a surprisingly good effort and, even if it didn't solve the problem, it did show me some useful syntax and got me thinking.
In case you're interested, by the way, here's how an actual working script would look.
#!/bin/bash
# Prompt user for YouTube video URL
read -p "Enter YouTube video URL: " url
# Download video from YouTube using yt-dlp
yt-dlp $url
# Get the name of the downloaded video file
original_filename=$(ls | awk '/mp4/ || /webm/ || /mkv/')
# Change the downloaded video filename to "myvideo.mkv"
mv "$original_filename" /home/ubuntu/vids/myvideo.mp4
The first two lines of code are just the way ChatGPT suggested. But I then isolate the filename by listing all the files in the current directory and using awk to filter for only filenames containing mp4, webm, or mkv. (This assumes that there will never be more than one video file in the directory at a time.)
The filename will then be written to the original_filename variable. I'll then use that variable as part of a mv command to rename the file to myvideo.mp4. As far as I can tell, that'll work no matter what format was actually sent.
Artificial intelligence might not be quite ready for real-world programming tasks today, but I wouldn't want to bet against it getting there tomorrow...or some time next year.
In the meantime though, I would absolutely recommend engaging with tools like ChatGPT whenever you're faced with a challenging programming task.
Two minds are (almost) always better than one.
Original article source at https://www.freecodecamp.org
#artificialintelligence #chatgpt #ai
Vision Transformer (ViT) is a transformer-based architecture used in the field of computer vision. It is directly inspired by the use of Transformers in NLP tasks.
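The core idea, treating an image as a sequence of 16x16 "words", can be sketched in a few lines of NumPy. The 224x224 image size and 16-pixel patches below are the standard ViT-Base defaults, not necessarily the exact values used in the video:

```python
import numpy as np

# A dummy 224x224 RGB image
image = np.random.rand(224, 224, 3)
patch_size = 16

# Split the image into non-overlapping 16x16 patches and flatten each one,
# yielding the sequence of patch "tokens" the transformer consumes
h, w, c = image.shape
patches = image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * c)

print(patches.shape)  # (196, 768): a 14x14 grid of patches, each a 768-dim token
```

In the full model, each 768-dim patch vector is linearly projected, a learnable class token and position embeddings are added, and the resulting sequence is fed through a standard transformer encoder.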
0:00:00 - Introduction
0:00:31 - Dataset Explanation (Flower Images Dataset)
0:01:00 - Importing, Seeding, Dataset and others
0:37:22 - Implementing the Vision Transformer (ViT)
1:04:30 - Training the Vision Transformer (ViT)
1:08:49 - Testing the Vision Transformer (ViT)
1:13:57 - Ending - SUBSCRIBE
Blog Post: https://idiotdeveloper.com/vision-transformer-an-image-is-worth-16x16-words-transformers-for-image-recognition-at-scale
Vision Transformer Research Paper: https://arxiv.org/pdf/2010.11929.pdf
#tensorflow #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #softwaredeveloper #computerscience