Faster. Easier. Smarter.

Control model parameters like temperature, top_p, and more.

Why PaLM API? (palm-api v1.0 compared to Google's own API)

Google has its own API interface for PaLM, through their @google-ai/generativelanguage and google-auth-library packages.
However, making requests with the native package is simply too complicated, clunky, and slow.
Here's the code you need for the Google API to ask PaLM something with context:
const { DiscussServiceClient } = require("@google-ai/generativelanguage");
const { GoogleAuth } = require("google-auth-library");
const client = new DiscussServiceClient({
  authClient: new GoogleAuth().fromAPIKey(process.env.API_KEY),
});

const result = await client.generateMessage({
  model: "models/chat-bison-001",
  prompt: {
    context: "Respond to all questions with a single number.",
    messages: [{ content: "How tall is the Eiffel Tower?" }],
  },
});
console.log(result[0].candidates[0].content);
And here's the equivalent code in palm-api:
import PaLM from "palm-api";
let bot = new PaLM(process.env.API_KEY);
bot.ask("How tall is the Eiffel Tower?", {
  context: "Respond to all questions with a single number.",
});
Yep! That's it... get the best features of PaLM, in a package that's simpler and easier to use, test, and maintain.
Comparing against the Google API...
PaLM API clocks in at just 1.3kb minzipped.
@google-ai/generativelanguage and google-auth-library, the two packages required for Google's own implementation, clock in at a total of roughly 337kb minzipped.
That makes PaLM API around 260 times smaller!
Using the exact example shown above, PaLM API needs around 2.8x less code, measured in characters.
Comparing the speed of the demo code on Google's own website with equivalent code written in PaLM API, the times are virtually identical.
Tested with hyperfine.
First, install PaLM API on NPM:
npm install palm-api
or PNPM:
pnpm add palm-api
Then, get yourself an API key from MakerSuite here. Click on "Create API key in new project," and then simply copy the string.
Import PaLM API, then initialize the class with your API key.
Warning
It is recommended that you access your API key from process.env or a .env file.
import PaLM from "palm-api";
let bot = new PaLM(API_KEY, { ...config });
| Config | Type     | Description                                                       |
| ------ | -------- | ----------------------------------------------------------------- |
| fetch  | function | Fetch polyfill with the same interface as native fetch. Optional. |
Note
PaLM itself and all of its methods take a config object that you can pass in as a second parameter. Example:
import PaLM from "palm-api";
import fetch from "node-fetch";
let bot = new PaLM(API_KEY, {
  fetch: fetch,
});
PaLM.ask()
Uses generateMessage-capable models to provide a high-quality LLM experience, with context, examples, and more.
Models available: chat-bison-001
PaLM.ask(message, { ...config });
Learn more about model parameters here.
| Config          | Type                                      | Description                                                     |
| --------------- | ----------------------------------------- | --------------------------------------------------------------- |
| model           | string                                    | Any model capable of generateMessage. Default: chat-bison-001.  |
| candidate_count | integer                                   | How many responses to generate. Default: 1                      |
| temperature     | float                                     | Temperature of model. Default: 0.7                              |
| top_p           | float                                     | top_p of model. Default: 0.95                                   |
| top_k           | float                                     | top_k of model. Default: 40                                     |
| format          | PaLM.FORMATS.MD or PaLM.FORMATS.JSON      | Return as JSON or Markdown. Default: PaLM.FORMATS.MD            |
| context         | string                                    | Add context to your query. Optional.                            |
| examples        | array of [example_input, example_output]  | Show PaLM how to respond. See examples below. Optional.         |
import PaLM from "palm-api";
let bot = new PaLM(API_KEY);
bot.ask("x^2+2x+1", {
  temperature: 0.5,
  candidate_count: 1,
  context: "Simplify the expression",
  examples: [
    ["x^2-4", "(x-2)(x+2)"],
    ["2x+2", "2(x+1)"],
    // ... etc
  ],
});
[
{ content: string }, // Your message
{ content: string }, // AI response
// And so on in such pairs...
];
PaLM.generateText()
Uses generateText-capable models to let PaLM generate text.
Models available: text-bison-001
PaLM.generateText(message, { ...config });
Learn more about model parameters here.
| Config          | Type                                 | Description                                                |
| --------------- | ------------------------------------ | ---------------------------------------------------------- |
| model           | string                               | Any model capable of generateText. Default: text-bison-001. |
| candidate_count | integer                              | How many responses to generate. Default: 1                 |
| temperature     | float                                | Temperature of model. Default: 0                           |
| top_p           | float                                | top_p of model. Default: 0.95                              |
| top_k           | float                                | top_k of model. Default: 40                                |
| format          | PaLM.FORMATS.MD or PaLM.FORMATS.JSON | Return as JSON or Markdown. Default: PaLM.FORMATS.MD       |
import PaLM from "palm-api";
let bot = new PaLM(API_KEY);
bot.generateText("Write a poem on puppies.", {
  temperature: 0.5,
  candidate_count: 1,
});
See more about safety ratings here.
[
  {
    output: output,
    safetyRatings: {
      HARM_CATEGORY_UNSPECIFIED: rating,
      HARM_CATEGORY_DEROGATORY: rating,
      HARM_CATEGORY_TOXICITY: rating,
      HARM_CATEGORY_VIOLENCE: rating,
      HARM_CATEGORY_SEXUAL: rating,
      HARM_CATEGORY_DANGEROUS: rating,
    },
  },
  // More candidates (if asked for)...
];
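As a quick sketch of navigating that structure, here is a stand-in response object in the documented shape (placeholder values, not real model output):

```javascript
// A stand-in response in the documented shape (placeholder values,
// not real model output).
let response = [
  {
    output: "Puppies bound through morning dew...",
    safetyRatings: { HARM_CATEGORY_TOXICITY: "rating" },
  },
];

// The generated text of the first candidate:
console.log(response[0].output);
```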
PaLM.embed()
Uses PaLM to embed your text into a float matrix with embedText-enabled models, which you can use for various complex tasks.
Models available: embedding-gecko-001
PaLM.embed(message, { ...config });
| Config | Type   | Description                                                        |
| ------ | ------ | ------------------------------------------------------------------ |
| model  | string | Any model capable of embedText. Default: embedding-gecko-001.      |
import PaLM from "palm-api";
let bot = new PaLM(API_KEY);
bot.embed("Hello, world!", {
  model: "embedding-gecko-001",
});
[...embeddingMatrix];
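The library doesn't prescribe what to do with the returned embedding; one common approach (a generic helper, not part of palm-api) is to embed two strings with bot.embed() and compare the resulting vectors with cosine similarity:

```javascript
// Cosine similarity between two embedding vectors: a standard way to
// score how semantically close two embedded texts are.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical directions score 1; orthogonal directions score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```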
PaLM.createChat()
Uses generateMessage-capable models to create a chat interface that's simple, fast, and easy to use.
let chat = PaLM.createChat({ ...config });
chat.ask(message, { ...config });
chat.export();
The ask
method on Chat remembers previous messages and responses, so you can have a continued conversation.
Basic steps to import/export chats:
1. Create a chat with PaLM.createChat()
2. Use Chat.ask() to query PaLM
3. Use Chat.export() to export your messages and PaLM responses
4. Import your messages with PaLM.createChat({ messages: exportedMessages })
Info You can actually change the messages exported, and PaLM in your new chat instance will adapt to the "edited history." Use this to your advantage!
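As a sketch of the "edited history" idea: the exported history is just data in the message/response pair shape shown above, so you can construct or modify it by hand before importing (the content below is hypothetical):

```javascript
// A hand-crafted "exported" history: alternating pairs of your message
// and PaLM's response (hypothetical content).
let exportedMessages = [
  { content: "How tall is the Eiffel Tower?" }, // your message
  { content: "330 meters." },                   // PaLM's "response"
];

// Importing it seeds a new chat with this history:
// let chat = PaLM.createChat({ messages: exportedMessages });
```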
Config for createChat():
Learn more about model parameters here. All configuration associated with Chat.ask(), except format, is set in the config for createChat().
| Config          | Type                                     | Description                                                    |
| --------------- | ---------------------------------------- | -------------------------------------------------------------- |
| messages        | array                                    | Exported messages from previous Chats. Optional.               |
| model           | string                                   | Any model capable of generateMessage. Default: chat-bison-001. |
| candidate_count | integer                                  | How many responses to generate. Default: 1                     |
| temperature     | float                                    | Temperature of model. Default: 0.7                             |
| top_p           | float                                    | top_p of model. Default: 0.95                                  |
| top_k           | float                                    | top_k of model. Default: 40                                    |
| context         | string                                   | Add context to your query. Optional.                           |
| examples        | array of [example_input, example_output] | Show PaLM how to respond. See examples below. Optional.        |
Config for Chat.ask():

| Config | Type                                 | Description                                          |
| ------ | ------------------------------------ | ---------------------------------------------------- |
| format | PaLM.FORMATS.MD or PaLM.FORMATS.JSON | Return as JSON or Markdown. Default: PaLM.FORMATS.MD |
import PaLM from "palm-api";
let bot = new PaLM(API_KEY);
let chat = bot.createChat({
  temperature: 0,
  context: "Respond like Shakespeare",
});
chat.ask("What is 1+1?");
chat.ask("What do you get if you add 1 to that?");
The response for Chat.ask() is exactly the same as for PaLM.ask(). In fact, they use the same query function under the hood.
PaLM is only a language model. Google Bard is able to search the Internet because it performs additional Google searches and feeds the results back into PaLM. Thus, if you want to mimic the web-search behavior, you need to implement it yourself (or just use bard-ai for a Google Bard API!).
"fetch is undefined" or "I can't use default fetch"
PaLM API uses the experimental fetch function in Node.js. It is enabled by default now, but there are still environments in which it is undefined or disabled. You may also be here because you need to use a special fetch for your development environment. Because of this, PaLM API comes with a built-in way to polyfill fetch: just pass your custom fetch function into the config for the PaLM class.
import PaLM from "palm-api";
import fetch from "node-fetch";
let bot = new PaLM(API_KEY, {
  fetch: fetch,
});
It's that easy! Just ensure your polyfill has the exact same API as the default Node.js/browser fetch, or it is not guaranteed to work.
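For example, a hypothetical logging wrapper keeps the native fetch signature while recording each request (the wrapper name and behavior are illustrative, not part of palm-api):

```javascript
// A hypothetical logging wrapper with the same signature as native fetch:
// it records each URL, then delegates to whatever fetch is in scope.
function loggingFetch(url, options) {
  console.log("PaLM request:", url);
  return fetch(url, options);
}

// Passed in like any other polyfill:
// let bot = new PaLM(API_KEY, { fetch: loggingFetch });
```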
"require of an ES6 module"
PaLM API, per today's standards, is strictly an ES6 module (that means it uses the new import/export syntax). Because of this, you have two options if you are still using require/CommonJS modules:
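The two usual options in Node.js (general Node advice, not specific to this library) are loading the module with a dynamic import() from CommonJS, or converting your own project to ES modules:

```javascript
// Option 1: from a CommonJS file, load the ES module with a dynamic
// import(). import() returns a promise, so await it in an async function.
async function makeBot(apiKey) {
  const { default: PaLM } = await import("palm-api");
  return new PaLM(apiKey);
}

// Option 2: convert your own project to ES modules (rename the file to
// .mjs, or set "type": "module" in package.json) and use the normal
// `import PaLM from "palm-api"` syntax.
```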
A special shoutout to developers and contributors of the bard-ai
library. The PaLM API interface is basically an exact port of the bard-ai
interface.
Additionally, a huge thank-you to @Nyphet for converting the library to TypeScript!
Of course, we thank everyone who helps in the development process of this library, whether with code, ideas, or anything else.
Author: EvanZhouDev
Source Code: https://github.com/EvanZhouDev/palm-api
License: GPL-3.0 license