PaLM API: A Fast, Lightweight Library for Google's PaLM

PaLM API

Faster. Easier. Smarter.


Features

Highlights

palm-api v1.0 compared to Google's own API:

  • Fast: As fast as the native API (and around 4x faster than googlebard)
  • 🪶 Lightweight: 260x smaller minzipped size
  • 🚀 Simple & Easy: 2.8x less code needed

Why PaLM API?

Google has its own API interface for PaLM, through their @google-ai/generativelanguage and google-auth-library packages.

However, making requests with the native package is simply too complicated, clunky, and slow.

Here's the code you need for the Google API to ask PaLM something with context:

const { DiscussServiceClient } = require("@google-ai/generativelanguage");
const { GoogleAuth } = require("google-auth-library");

const client = new DiscussServiceClient({
	authClient: new GoogleAuth().fromAPIKey(process.env.API_KEY),
});

const result = await client.generateMessage({
	model: "models/chat-bison-001",
	prompt: {
		context: "Respond to all questions with a single number.",
		messages: [{ content: "How tall is the Eiffel Tower?" }],
	},
});

console.log(result[0].candidates[0].content);

And here's the equivalent code in palm-api:

import PaLM from "palm-api";

let bot = new PaLM(process.env.API_KEY);

bot.ask("How tall is the Eiffel Tower?", {
	context: "Respond to all questions with a single number.",
});

Yep! That's it... get the best features of PaLM, in a package that's simpler and easier to use, test, and maintain.

Statistics

Comparing against the Google API...

Size

PaLM API clocks in at just 1.3kb minzipped.

@google-ai/generativelanguage and google-auth-library, the two packages required for Google's own implementation, clock in at a total of roughly 337kb minzipped.

That makes PaLM API around 260 times smaller!

Code Needed

Looking at the exact example shown above, PaLM API needs around 2.8x less code, measured in characters.

Speed

Comparing the demo code on Google's own website with equivalent code written in PaLM API, the times are virtually identical.

Tested with hyperfine.

Documentation

Setup

First, install PaLM API on NPM:

npm install palm-api

or PNPM:

pnpm add palm-api

Then, get yourself an API key in MakerSuite. Click on "Create API key in new project," and then simply copy the string.

Import PaLM API, then initialize the class with your API key.

Warning

It is recommended that you access your API key from process.env or a .env file.

import PaLM from "palm-api";

let bot = new PaLM(API_KEY, { ...config });

Config

| Config | Type | Description |
| ------ | ---- | ----------- |
| fetch | function | Fetch polyfill with the same interface as native fetch. Optional. |

Note

PaLM itself and all of its methods have a config object that you can pass in as a secondary parameter. Example:

import PaLM from "palm-api";
import fetch from "node-fetch";
let bot = new PaLM(API_KEY, {
	fetch: fetch,
});

PaLM.ask()

Uses the generateMessage capable models to provide a high-quality LLM experience, with context, examples, and more.

Models available: chat-bison-001

Usage:

PaLM.ask(message, { ...config });

Config:

Learn more about model parameters here.

| Config | Type | Description |
| ------ | ---- | ----------- |
| model | string | Any model capable of generateMessage. Default: chat-bison-001. |
| candidate_count | integer | How many responses to generate. Default: 1 |
| temperature | float | Temperature of model. Default: 0.7 |
| top_p | float | top_p of model. Default: 0.95 |
| top_k | float | top_k of model. Default: 40 |
| format | PaLM.FORMATS.MD or PaLM.FORMATS.JSON | Return as JSON or Markdown. Default: PaLM.FORMATS.MD |
| context | string | Add context to your query. Optional. |
| examples | array of [example_input, example_output] | Show PaLM how to respond. See examples below. Optional. |

Example:

import PaLM from "palm-api";

let bot = new PaLM(API_KEY);

bot.ask("x^2+2x+1", {
	temperature: 0.5,
	candidateCount: 1,
	context: "Simplify the expression",
	examples: [
		["x^2-4", "(x-2)(x+2)"],
		["2x+2", "2(x+1)"],
		// ... etc
	],
});

JSON Response:

[
	{ content: string }, // Your message
	{ content: string }, // AI response
	// And so on in such pairs...
];
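With format: PaLM.FORMATS.JSON, the response alternates your messages with the model's replies in pairs. As a quick illustration, here is a sketch that flattens such a pair array into a readable transcript (the messages array below is a made-up example, not real API output):

```javascript
// Hypothetical example of the [your message, AI response, ...] pairs
// returned by PaLM.ask() with format: PaLM.FORMATS.JSON.
const messages = [
	{ content: "x^2+2x+1" }, // Your message
	{ content: "(x+1)^2" }, // AI response
];

// Even indices are your messages, odd indices are PaLM's replies.
const transcript = messages
	.map((msg, i) => `${i % 2 === 0 ? "You" : "PaLM"}: ${msg.content}`)
	.join("\n");

console.log(transcript);
```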

PaLM.generateText()

Uses the generateText capable models to let PaLM generate text.

Models available: text-bison-001

API:

PaLM.generateText(message, { ...config });

Config:

Learn more about model parameters here.

| Config | Type | Description |
| ------ | ---- | ----------- |
| model | string | Any model capable of generateText. Default: text-bison-001. |
| candidate_count | integer | How many responses to generate. Default: 1 |
| temperature | float | Temperature of model. Default: 0 |
| top_p | float | top_p of model. Default: 0.95 |
| top_k | float | top_k of model. Default: 40 |
| format | PaLM.FORMATS.MD or PaLM.FORMATS.JSON | Return as JSON or Markdown. Default: PaLM.FORMATS.MD |

Example:

import PaLM from "palm-api";

let bot = new PaLM(API_KEY);

bot.generateText("Write a poem on puppies.", {
	temperature: 0.5,
	candidateCount: 1,
});

JSON Response:

See more about safety ratings here.

[
	{
		output: output,
		safetyRatings: {
			HARM_CATEGORY_UNSPECIFIED: rating,
			HARM_CATEGORY_DEROGATORY: rating,
			HARM_CATEGORY_TOXICITY: rating,
			HARM_CATEGORY_VIOLENCE: rating,
			HARM_CATEGORY_SEXUAL: rating,
			HARM_CATEGORY_DANGEROUS: rating,
		},
	},
	// More candidates (if asked for)...
];
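Since each candidate carries its safety ratings, one common pattern is filtering out candidates whose ratings exceed a threshold. Here is a sketch under the assumption that safetyRatings maps category names to numeric ratings; the candidates array and threshold below are made up for illustration:

```javascript
// Hypothetical candidates shaped like the JSON response above.
const candidates = [
	{
		output: "A poem about puppies...",
		safetyRatings: {
			HARM_CATEGORY_VIOLENCE: 1,
			HARM_CATEGORY_TOXICITY: 1,
		},
	},
];

// Keep only candidates where every harm rating is below the threshold.
const THRESHOLD = 3;
const safe = candidates.filter((c) =>
	Object.values(c.safetyRatings).every((rating) => rating < THRESHOLD)
);

console.log(safe.length); // 1
```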

PaLM.embed()

Uses PaLM to embed your text into a float matrix with embedText enabled models, that you can use for various complex tasks.

Models available: embedding-gecko-001

Usage:

PaLM.embed(message, { ...config });

Config:

| Config | Type | Description |
| ------ | ---- | ----------- |
| model | string | Any model capable of embedText. Default: embedding-gecko-001. |

Example:

import PaLM from "palm-api";

let bot = new PaLM(API_KEY);

bot.embed("Hello, world!", {
	model: "embedding-gecko-001",
});

JSON Response:

[...embeddingMatrix];
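A typical use for embeddings is measuring how similar two texts are, for example with cosine similarity. Here is a minimal sketch; the vectors are tiny made-up examples, not real embedding-gecko-001 output (which is much longer):

```javascript
// Cosine similarity between two embedding vectors of equal length:
// 1 means identical direction, 0 means unrelated (orthogonal).
function cosineSimilarity(a, b) {
	let dot = 0, normA = 0, normB = 0;
	for (let i = 0; i < a.length; i++) {
		dot += a[i] * b[i];
		normA += a[i] * a[i];
		normB += b[i] * b[i];
	}
	return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0, 1], [1, 0, 1])); // identical vectors: 1
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // orthogonal vectors: 0
```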

PaLM.createChat()

Uses generateMessage capable models to create a chat interface that's simple, fast, and easy to use.

Usage:

let chat = PaLM.createChat({ ...config });

chat.ask(message, { ...config });

chat.export();

The ask method on Chat remembers previous messages and responses, so you can have a continued conversation.

Basic steps to use import/export chats:

  1. Create an instance of Chat with PaLM.createChat()
  2. Use Chat.ask() to query PaLM
  3. Use Chat.export() to export your messages and PaLM responses
  4. Import your messages with the messages config with PaLM.createChat({messages: exportedMessages})

Info: You can edit the exported messages before importing them, and PaLM in your new chat instance will adapt to the "edited history." Use this to your advantage!

Config for createChat():

Learn more about model parameters here. All configuration associated with Chat.ask(), except format, is set in the config for createChat().

| Config | Type | Description |
| ------ | ---- | ----------- |
| messages | array | Exported messages from previous Chats. Optional. |
| model | string | Any model capable of generateMessage. Default: chat-bison-001. |
| candidate_count | integer | How many responses to generate. Default: 1 |
| temperature | float | Temperature of model. Default: 0.7 |
| top_p | float | top_p of model. Default: 0.95 |
| top_k | float | top_k of model. Default: 40 |
| context | string | Add context to your query. Optional. |
| examples | array of [example_input, example_output] | Show PaLM how to respond. See examples below. Optional. |

Config for Chat.ask():

| Config | Type | Description |
| ------ | ---- | ----------- |
| format | PaLM.FORMATS.MD or PaLM.FORMATS.JSON | Return as JSON or Markdown. Default: PaLM.FORMATS.MD |

Example:

import PaLM from "palm-api";

let bot = new PaLM(API_KEY);

let chat = PaLM.createChat({
	temperature: 0,
	context: "Respond like Shakespeare",
});

chat.ask("What is 1+1?");
chat.ask("What do you get if you add 1 to that?");

The response for Chat.ask() is exactly the same as for PaLM.ask(). In fact, they use the same query function under the hood.

Frequently Asked Questions

"Why can the model not access the internet like Bard?"

PaLM is only a language model. Google Bard is able to search the internet because it performs additional Google searches and feeds the results back into PaLM. Thus, if you want to mimic the web-search behavior, you need to implement it yourself. (Or just use bard-ai for a Google Bard API!)

"fetch is undefined" or "I can't use default fetch"

PaLM API uses the fetch function built into Node.js. It is enabled by default in recent versions, but there are still environments in which it is undefined or disabled. You may also need a special fetch for your development environment. Because of this, PaLM API comes with a built-in way to polyfill fetch: just pass your custom fetch function into the config for the PaLM class.

import PaLM from "palm-api";
import fetch from "node-fetch";

let bot = new PaLM(API_KEY, {
	fetch: fetch,
});

It's that easy! Just ensure your polyfill has the exact same API as the default Node.js/browser fetch, or it is not guaranteed to work.

"Cannot require of a ES6 module"

PaLM API, in line with today's standards, is strictly an ES module (meaning it uses the new import/export syntax). Because of this, you have two options if you are still using require/CommonJS modules:

  1. Migrate to ESM yourself! It will be beneficial for you in the future.
  2. Use a dynamic import.
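Option 2 can be sketched like this. It is demonstrated with Node's built-in node:path module so the snippet runs without installing anything; with palm-api you would await import("palm-api") instead:

```javascript
// Dynamic import() works inside CommonJS, where static import syntax
// does not. It returns a promise for the module's namespace.
async function loadModule(name) {
	return await import(name);
}

loadModule("node:path").then((path) => {
	// posix.join always uses "/" regardless of platform
	console.log(path.posix.join("palm-api", "dist")); // "palm-api/dist"
});
```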

Contributors

A special shoutout to developers and contributors of the bard-ai library. The PaLM API interface is basically an exact port of the bard-ai interface.

Additionally, huge thank-you to @Nyphet for converting the library to TypeScript!

Of course, we thank every person who helps in the development process of this library, whether that be in code, ideas, or anything else.


Docs | GitHub | FAQ


Download Details:

Author: EvanZhouDev
Source Code: https://github.com/EvanZhouDev/palm-api 
License: GPL-3.0 license

#AI #api #palm 
