Alpaca-LoRA: Instruct-Tuning LLaMA on Consumer Hardware

馃馃尣馃 Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning

Try the pretrained model out on Colab here! If you have problems with short outputs or very long outputs, please redownload the weights (force_download=True) and pull the latest version of the code.

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). We provide an Instruct model of similar quality to text-davinci-003 that can run on a Raspberry Pi (for research), and the code can be easily extended to the 13b, 30b, and 65b models.

In addition to the training code, which runs within five hours on a single RTX 4090, we publish a script for downloading and inference on the foundation model and LoRA, as well as the resulting LoRA weights themselves. To fine-tune cheaply and efficiently, we use Huggingface's PEFT as well as Tim Dettmers' bitsandbytes.

Without hyperparameter tuning or validation-based checkpointing, the LoRA model produces outputs comparable to the Stanford Alpaca model. (Please see the outputs included below.) Further tuning might be able to achieve better performance; I invite interested users to give it a try and report their results.

Setup

  • Install dependencies
pip install -r requirements.txt

Inference (generate.py)

This file reads the foundation model from the Huggingface model hub and the LoRA weights from tloen/alpaca-lora-7b, and runs a Gradio interface for inference on a specified input. Users should treat this as example code for the use of the model, and modify it as needed.

Training (finetune.py)

This file contains a straightforward application of PEFT to the LLaMA model, as well as some code related to prompt construction and tokenization. Near the top of this file is a set of hardcoded hyperparameters that you should feel free to modify. PRs adapting this code to multi-GPU setups and larger models are always welcome.
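
For reference, finetune.py builds prompts in the Alpaca instruction format. A simplified sketch of that template follows (the exact wording lives in the training code; this is an approximation):

```python
def generate_prompt(instruction, input_text=None):
    """Build an Alpaca-style prompt (simplified sketch of the template)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

During training, the model's expected output is appended after the `### Response:` header, which is why the to-do list mentions saving the tokens spent on this prompt header.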

Checkpoint export (export_state_dict_checkpoint.py)

This file contains a script to convert the LoRA back into a standard PyTorch model checkpoint, which should help users who want to use the model with projects like llama.cpp.
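
Conceptually, the export merges each low-rank pair back into its base weight matrix, W' = W + (alpha/r)·B·A, so the result no longer needs PEFT at inference time. The real script does this on PyTorch state dicts; the toy pure-Python sketch below only illustrates the arithmetic:

```python
def merge_lora(w, b, a, alpha, r):
    """Merge a LoRA update into a base weight: W' = W + (alpha/r) * B @ A.

    w: base weight, shape (m, n); b: (m, r); a: (r, n) -- plain nested lists,
    standing in for the PyTorch tensors the actual export script manipulates.
    """
    scale = alpha / r
    merged = [row[:] for row in w]  # copy so the base weight is untouched
    for i in range(len(w)):
        for j in range(len(w[0])):
            merged[i][j] += scale * sum(b[i][k] * a[k][j] for k in range(r))
    return merged
```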

To do

  •  Merge LoRA weights into LLaMA weights to remove inference dependency on PEFT
  •  Train/val split
  •  Hyperparameter tuning code
  •  Support for 13b, 30b, 65b
  •  Train a version that doesn't waste tokens on the prompt header
  •  Inference CLI and evaluation
  •  Better disclaimers about why using LLaMA without permission is very bad!

Example outputs

Instruction: Tell me about alpacas.

Alpaca-LoRA: Alpacas are members of the camelid family and are native to the Andes Mountains of South America. They are known for their soft, luxurious fleece, which is used to make clothing, blankets, and other items. Alpacas are herbivores and graze on grasses and other plants. They are social animals and live in herds of up to 20 individuals.

Stanford Alpaca: Alpacas are small, fluffy animals related to camels and llamas. They are native to Peru and Bolivia, and were first domesticated around 5,000 years ago. They are kept mainly for their fine, soft fleece, which is used to make knitwear and other garments. Alpacas are herd animals and live in small family groups, led by an older male. They are highly social animals and can form strong bonds with their owners.

text-davinci-003: Alpacas are a domesticated species of South American camelid, similar to llamas. They are native to the Andes Mountains in South America and are kept as livestock for their soft, luxurious wool. Alpacas are social animals and live in herds of up to 20 individuals. They are typically kept in small herds of two to five animals, and are relatively easy to care for. Alpacas are herbivores and feed on grass, hay, and other vegetation. They are also known for their gentle and friendly nature, making them popular as pets.


Instruction: Tell me about the president of Mexico in 2019.

Alpaca-LoRA: The president of Mexico in 2019 was Andrés Manuel López Obrador, who took office on December 1, 2018. He is a member of the National Regeneration Movement (MORENA) political party and is the first left-wing president of Mexico since 1946. He is known for his anti-corruption and anti-neoliberal policies, as well as his commitment to improving the living conditions of the Mexican people.

Stanford Alpaca: In 2019, Andres Manuel Lopez Obrador was sworn in as President of Mexico. He is the country's 58th president and the first from the left-wing Morena party. He was elected in 2018 on a promise to combat corruption and reduce poverty and has faced criticism for his handling of the economy, crime and the country's relationship with the United States. He is also known for his ambitious social welfare programs and has been praised for raising the minimum wage and providing aid to low-income families. He is seen as a champion of democracy and human rights in Mexico.

text-davinci-003: The president of Mexico in 2019 is Andrés Manuel López Obrador, commonly known as AMLO. He is a Mexican politician who has been the President of Mexico since December 1, 2018. He is a member of the National Regeneration Movement (MORENA) party and previously served as the Head of Government of Mexico City from 2000 to 2005. He is a left-wing populist and has promised to reduce corruption and inequality in Mexico. He has also promised to increase public spending on social programs and infrastructure projects.


Instruction: Tell me about the king of France in 2019.

Alpaca-LoRA: The King of France in 2019 is Emmanuel Macron, who was elected in May 2017. He is the youngest president in the history of the Fifth Republic and the first president to be born after World War II. He is also the first president to have never held elected office before.

Stanford Alpaca: In 2019, the King of France was Emmanuel Macron. He was sworn in as President of France on May 14, 2017, and has been leading the country ever since. He is a member of the centrist La République En Marche! Party. He is known for his reformist agenda and has faced a number of protests from both the right and left of the political spectrum. He is the youngest French president in history.

text-davinci-003: The King of France is a largely symbolic role, as France is a republic. The current President of France is Emmanuel Macron, who was elected in 2017.


Instruction: List all Canadian provinces in alphabetical order.

Alpaca-LoRA: Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, Quebec, Saskatchewan

Stanford Alpaca: Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, Quebec, Saskatchewan, Northwest Territories, Nunavut, Yukon.

text-davinci-003:

  1. Alberta
  2. British Columbia
  3. Manitoba
  4. New Brunswick
  5. Newfoundland and Labrador
  6. Northwest Territories
  7. Nova Scotia
  8. Nunavut
  9. Ontario
  10. Prince Edward Island
  11. Quebec
  12. Saskatchewan
  13. Yukon

Instruction: Write a Python program that prints the first 10 Fibonacci numbers.

Alpaca-LoRA:

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

print(fibonacci(10))

Stanford Alpaca:

numbers = [0, 1]

for i in range(2, 11):
    numbers.append(numbers[i-2] + numbers[i-1])

print(numbers)
# Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

text-davinci-003:

# Fibonacci numbers are the numbers in the following integer sequence, called the Fibonacci sequence, and characterized by the fact that every number after the first two is the sum of the two preceding ones.

# Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

# Program:

# Initialize the first two numbers of the Fibonacci sequence
a = 0
b = 1

# Print the first 10 numbers of the Fibonacci sequence
print(a)
print(b)

for i in range(2,10):
    c = a + b
    a = b
    b = c
    print(c)

Instruction: Write a program that prints the numbers from 1 to 100. But for multiples of three print 'Fizz' instead of the number and for the multiples of five print 'Buzz'. For numbers which are multiples of both three and five print 'FizzBuzz'.

Alpaca-LoRA:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Stanford Alpaca:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print('FizzBuzz')
    elif i % 3 == 0:
        print('Fizz')
    elif i % 5 == 0:
        print('Buzz')
    else:
        print(i)

text-davinci-003:

for (let i = 1; i <= 100; i++) {
  if (i % 3 === 0 && i % 5 === 0) {
    console.log("FizzBuzz");
  } else if (i % 3 === 0) {
    console.log("Fizz");
  } else if (i % 5 === 0) {
    console.log("Buzz");
  } else {
    console.log(i);
  }
}

Instruction: Tell me five words that rhyme with 'shock'.

Alpaca-LoRA: Flock, lock, rock, stock, and sock.

Stanford Alpaca: Five words that rhyme with shock are: rock, pop, shock, cook, and snock.

text-davinci-003:

  1. Block
  2. Lock
  3. Dock
  4. Clock
  5. Rock

Instruction: Translate the sentence 'I have no mouth but I must scream' into Spanish.

Alpaca-LoRA: No tengo boca pero tengo que gritar.

Stanford Alpaca: No tengo boca, pero debo gritar.

text-davinci-003: No tengo boca pero debo gritar.


Download Details:

Author: tloen
Source Code: https://github.com/tloen/alpaca-lora 
License: Apache-2.0 license



Phobos: Simplifying Kafka for Ruby Apps

Phobos

Simplifying Kafka for Ruby apps!

Phobos is a micro framework and library for applications dealing with Apache Kafka.

  • It wraps common behaviors needed by consumers and producers in an easy and convenient API
  • It uses ruby-kafka as its Kafka client and core component
  • It provides a CLI for starting and stopping a standalone application ready to be used for production purposes

Why Phobos? Why not ruby-kafka directly? Well, ruby-kafka is just a client. You still need to write a lot of code to manage proper consuming and producing of messages. You need to do proper message routing, error handling, retrying, backing off and maybe logging/instrumenting the message management process. You also need to worry about setting up a platform independent test environment that works on CI as well as any local machine, and even on your deployment pipeline. Finally, you also need to consider how to deploy your app and how to start it.

With Phobos by your side, all this becomes smooth sailing.

Installation

Add this line to your application's Gemfile:

gem 'phobos'

And then execute:

$ bundle

Or install it yourself as:

$ gem install phobos

Usage

Phobos can be used in two ways: as a standalone application or to support Kafka features in your existing project - including Rails apps. It provides a CLI tool to run it.

Standalone apps

Standalone apps have benefits such as individual deploys and smaller code bases. If consuming from Kafka is your version of microservices, Phobos can be of great help.

Setup

To create an application with Phobos you need two things:

  • A configuration file (more details in the Configuration file section)
  • A phobos_boot.rb (or the name of your choice) to properly load your code into Phobos executor

Use the Phobos CLI command init to bootstrap your application. Example:

# call this command inside your app folder
$ phobos init
    create  config/phobos.yml
    create  phobos_boot.rb

phobos.yml is the configuration file and phobos_boot.rb is the place to load your code.

Consumers (listeners and handlers)

In Phobos apps listeners are configured against Kafka - they are our consumers. A listener requires a handler (a ruby class where you should process incoming messages), a Kafka topic, and a Kafka group_id. Consumer groups are used to coordinate the listeners across machines. We write the handlers and Phobos makes sure to run them for us. An example of a handler is:

class MyHandler
  include Phobos::Handler

  def consume(payload, metadata)
    # payload  - This is the content of your Kafka message, Phobos does not attempt to
    #            parse this content, it is delivered raw to you
    # metadata - A hash with useful information about this event, it contains: The event key,
    #            partition number, offset, retry_count, topic, group_id, and listener_id
  end
end

Writing a handler is all you need to allow Phobos to work - it will take care of execution, retries and concurrency.

To start Phobos the start command is used, example:

$ phobos start
[2016-08-13T17:29:59:218+0200Z] INFO  -- Phobos : <Hash> {:message=>"Phobos configured", :env=>"development"}
______ _           _
| ___ \ |         | |
| |_/ / |__   ___ | |__   ___  ___
|  __/| '_ \ / _ \| '_ \ / _ \/ __|
| |   | | | | (_) | |_) | (_) \__ \
\_|   |_| |_|\___/|_.__/ \___/|___/

phobos_boot.rb - find this file at ~/Projects/example/phobos_boot.rb

[2016-08-13T17:29:59:272+0200Z] INFO  -- Phobos : <Hash> {:message=>"Listener started", :listener_id=>"6d5d2c", :group_id=>"test-1", :topic=>"test"}

By default, the start command will look for the configuration file at config/phobos.yml and it will load the file phobos_boot.rb if it exists. In the example above all example files generated by the init command are used as is. It is possible to change both files, use -c for the configuration file and -b for the boot file. Example:

$ phobos start -c /var/configs/my.yml -b /opt/apps/boot.rb

You may also choose to configure phobos with a hash from within your boot file. In this case, disable loading the config file with the --skip-config option:

$ phobos start -b /opt/apps/boot.rb --skip-config

Consuming messages from Kafka

Messages from Kafka are consumed using handlers. You can use Phobos executors or include it in your own project as a library, but handlers will always be used. To create a handler class, simply include the module Phobos::Handler. This module allows Phobos to manage the life cycle of your handler.

A handler is required to implement the method #consume(payload, metadata).

Instances of your handler will be created for every message, so keep a constructor without arguments. If consume raises an exception, Phobos will retry the message indefinitely, applying the back off configuration presented in the configuration file. The metadata hash will contain a key called retry_count with the current number of retries for this message. To skip a message, simply return from #consume.
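
That retry contract can be sketched with a plain class. MAX_RETRIES is an illustrative threshold, not a Phobos setting, and a real handler would also include Phobos::Handler (omitted here so the sketch stands alone):

```ruby
# Illustrative only: in a real app this class would `include Phobos::Handler`.
class ResilientHandler
  MAX_RETRIES = 3  # hypothetical threshold, chosen by the application

  def consume(payload, metadata)
    # Give up (skip) once Phobos reports too many retries; returning normally
    # from #consume is how a message is skipped.
    return :skipped if metadata[:retry_count].to_i >= MAX_RETRIES

    # ... process the payload here; raising would trigger another retry ...
    :processed
  end
end
```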

The metadata hash will also contain a key called headers with the headers of the consumed message.

When the listener starts, the class method .start will be called with the kafka_client used by the listener. Use this hook as a chance to setup necessary code for your handler. The class method .stop will be called during listener shutdown.

class MyHandler
  include Phobos::Handler

  def self.start(kafka_client)
    # setup handler
  end

  def self.stop
    # teardown
  end

  def consume(payload, metadata)
    # consume or skip message
  end
end

It is also possible to control the execution of #consume with the method #around_consume(payload, metadata). This method receives the payload and metadata, and then invokes the #consume method via yield; example:

class MyHandler
  include Phobos::Handler

  def around_consume(payload, metadata)
    Phobos.logger.info "consuming..."
    output = yield payload, metadata
    Phobos.logger.info "done, output: #{output}"
  end

  def consume(payload, metadata)
    # consume or skip message
  end
end

Note: around_consume was previously defined as a class method. The current code supports both implementations, giving precedence to the class method, but future versions will no longer support .around_consume.

class MyHandler
  include Phobos::Handler

  def self.around_consume(payload, metadata)
    Phobos.logger.info "consuming..."
    output = yield payload, metadata
    Phobos.logger.info "done, output: #{output}"
  end

  def consume(payload, metadata)
    # consume or skip message
  end
end

Take a look at the examples folder for some ideas.

The handler life cycle can be illustrated as:

.start -> #consume -> .stop

or optionally,

.start -> #around_consume [ #consume ] -> .stop

Batch Consumption

In addition to the regular handler, Phobos provides a BatchHandler. The basic ideas are identical, except that instead of being passed a single message at a time, the BatchHandler is passed a batch of messages. All methods follow the same pattern as the regular handler except that they each end in _batch and are passed an array of Phobos::BatchMessages instead of a single payload.

To enable handling of batches on the consumer side, you must specify a delivery method of inline_batch in phobos.yml, and your handler must include BatchHandler. Using a delivery method of batch assumes that you are still processing the messages one at a time and should use Handler.
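
As a sketch, a listener entry in phobos.yml enabling batch handling might look like this (the topic, group and handler names are made up):

```yaml
listeners:
  - handler: MyBatchHandler
    topic: events
    group_id: events-group-1
    delivery: inline_batch   # pass whole batches to consume_batch
```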

When using inline_batch, each instance of Phobos::BatchMessage will contain an instance method headers with the headers for that message.

class MyBatchHandler
  include Phobos::BatchHandler

  def around_consume_batch(payloads, metadata)
    payloads.each do |p|
      p.payload[:timestamp] = Time.zone.now
    end

    yield payloads, metadata
  end

  def consume_batch(payloads, metadata)
    payloads.each do |p|
      logger.info("Got payload #{p.payload}, #{p.partition}, #{p.offset}, #{p.key}, #{p.payload[:timestamp]}")
    end
  end

end

Note that retry logic will happen on the batch level in this case. If you are processing messages individually and an error happens in the middle, Phobos's retry logic will retry the entire batch. If this is not the behavior you want, consider using batch instead of inline_batch.

Producing messages to Kafka

ruby-kafka provides several options for publishing messages; Phobos offers them through the module Phobos::Producer. It is possible to turn any ruby class into a producer (including your handlers) by including the producer module, example:

class MyProducer
  include Phobos::Producer
end

Phobos is designed for multi-threading, so the producer is always bound to the current thread. It is possible to publish messages from both objects and classes; pick the option that suits your code better. The producer module doesn't pollute your classes with a thousand methods; it adds a single method at both the class and the instance level: producer.

my = MyProducer.new
my.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key')

# The code above has the same effect as this code:
MyProducer.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key')

The signature for the publish method is as follows:

def publish(topic: topic, payload: payload, key: nil, partition_key: nil, headers: nil)

When publishing a message with headers, the headers argument must be a hash:

my = MyProducer.new
my.producer.publish(topic: 'topic', payload: 'message-payload', key: 'partition and message key', headers: { header_1: 'value 1' })

It is also possible to publish several messages at once:

MyProducer
  .producer
  .publish_list([
    { topic: 'A', payload: 'message-1', key: '1' },
    { topic: 'B', payload: 'message-2', key: '2' },
    { topic: 'B', payload: 'message-3', key: '3', headers: { header_1: 'value 1', header_2: 'value 2' } }
  ])

There are two flavors of producers: regular producers and async producers.

Regular producers deliver messages synchronously; whether you use publish or publish_list, by default the producer disconnects after the messages are delivered.

Async producers accept your messages without blocking; use the methods async_publish and async_publish_list to use them.

An example of using handlers to publish messages:

class MyHandler
  include Phobos::Handler
  include Phobos::Producer

  PUBLISH_TO = 'topic2'

  def consume(payload, metadata)
    producer.async_publish(topic: PUBLISH_TO, payload: {key: 'value'}.to_json)
  end
end

Note about configuring producers

Since the handler life cycle is managed by the Listener, it makes sure the producer is properly closed before it stops. When calling the producer outside a handler, remember that you need to shut it down manually before you close the application. Use the class method async_producer_shutdown to safely shut down the producer.

Without configuring the Kafka client, the producers will create a new one when needed (once per thread). To disconnect from Kafka, call kafka_client.close.

# This method will block until everything is safely closed
MyProducer
  .producer
  .async_producer_shutdown

MyProducer
  .producer
  .kafka_client
  .close

Note about producers with persistent connections

By default, regular producers will automatically disconnect after every publish call. You can change this behavior by setting the persistent_connections config in phobos.yml; keeping connections open avoids repeated connection and TLS handshake overhead and can increase speed significantly. When set, regular producers behave identically to async producers and will also need to be shut down manually using the sync_producer_shutdown method.
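
A sketch of that setting in phobos.yml, assuming it lives under the producer section:

```yaml
producer:
  # keep connections open between publish calls; remember to call
  # sync_producer_shutdown when the producer's life cycle ends
  persistent_connections: true
```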

Since regular producers with persistent connections have open connections, you need to manually disconnect from Kafka when ending your producers' life cycle:

MyProducer
  .producer
  .sync_producer_shutdown

Phobos as a library in an existing project

When running as a standalone service, Phobos sets up a Listener and Executor for you. When you use Phobos as a library in your own project, you need to set these components up yourself.

First, call the method configure with the path of your configuration file or with configuration settings hash.

Phobos.configure('config/phobos.yml')

or

Phobos.configure(kafka: { client_id: 'phobos' }, logger: { file: 'log/phobos.log' })

Listener connects to Kafka and acts as your consumer. To create a listener you need a handler class, a topic, and a group id.

listener = Phobos::Listener.new(
  handler: Phobos::EchoHandler,
  group_id: 'group1',
  topic: 'test'
)

# start method blocks
Thread.new { listener.start }

listener.id # 6d5d2c (all listeners have an id)
listener.stop # stop doesn't block

This is all you need to consume from Kafka with back off retries.

An executor is the supervisor of all listeners. It loads all listeners configured in phobos.yml. The executor keeps the listeners running and restarts them when needed.

executor = Phobos::Executor.new

# start doesn't block
executor.start

# stop will block until all listeners are properly stopped
executor.stop

When using Phobos executors you don't need to worry about how listeners are created; just provide the configuration under the listeners section in the configuration file and you are good to go.

Configuration file

The configuration file is organized in 6 sections. Take a look at the example file, config/phobos.yml.example.

The file is parsed through ERB, so both ERB syntax and the .erb file extension are supported in addition to the YML format.
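
For example, ERB can pull connection details from the environment (the broker address and variable name here are illustrative; ERB is resolved before the YAML is parsed):

```yaml
kafka:
  client_id: phobos
  seed_brokers:
    - <%= ENV.fetch('KAFKA_BROKER', 'localhost:9092') %>
```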

logger configures the logger for all Phobos components. It automatically outputs to STDOUT and it saves the log in the configured file.

kafka provides configurations for every Kafka::Client created over the application. All options supported by ruby-kafka can be provided.

producer provides configurations for all producers created over the application, the options are the same for regular and async producers. All options supported by ruby-kafka can be provided. If the kafka key is present under producer, it is merged into the top-level kafka, allowing different connection configuration for producers.

consumer provides configurations for all consumer groups created over the application. All options supported by ruby-kafka can be provided. If the kafka key is present under consumer, it is merged into the top-level kafka, allowing different connection configuration for consumers.

backoff Phobos provides automatic retries for your handlers. If an exception is raised, the listener will retry following the back off configured here. Backoff can also be configured per listener.

listeners is the list of listeners configured. Each listener represents a consumer group.
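
Putting the last two sections together, a minimal sketch (handler, topic, and group names invented):

```yaml
backoff:
  min_ms: 1000
  max_ms: 60000

listeners:
  - handler: MyHandler
    topic: test
    group_id: test-1
```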

Additional listener configuration

In some cases it's useful to share most of the configuration between multiple phobos processes, but have each process run different listeners. In that case, a separate yaml file can be created and loaded with the -l flag. Example:

$ phobos start -c /var/configs/my.yml -l /var/configs/additional_listeners.yml

Note that the config file must still specify a listeners section, though it can be empty.

Custom configuration/logging

Phobos can be configured using a hash rather than the config file directly. This can be useful if you want to do some pre-processing before sending the file to Phobos. One particularly useful aspect is the ability to provide Phobos with a custom logger, e.g. by reusing the Rails logger:

Phobos.configure(
  custom_logger: Rails.logger,
  custom_kafka_logger: Rails.logger
)

If these keys are given, they will override the logger keys in the Phobos config file.

Instrumentation

Some operations are instrumented using Active Support Notifications.

In order to receive notifications you can use the module Phobos::Instrumentation, example:

Phobos::Instrumentation.subscribe('listener.start') do |event|
  puts(event.payload)
end

Phobos::Instrumentation is a convenience module around ActiveSupport::Notifications; feel free to use it or not. All Phobos events are in the phobos namespace, and Phobos::Instrumentation always looks at phobos.* events.

Executor notifications

  • executor.retry_listener_error is sent when the listener crashes and the executor waits before restarting it. It includes the following payload:
    • listener_id
    • retry_count
    • waiting_time
    • exception_class
    • exception_message
    • backtrace
  • executor.stop is sent when the executor stops

Listener notifications

  • listener.start_handler is sent when invoking handler.start(kafka_client). It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
  • listener.start is sent when listener starts. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
  • listener.process_batch is sent after processing a batch. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
    • batch_size
    • partition
    • offset_lag
    • highwater_mark_offset
  • listener.process_message is sent after processing a message. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
    • key
    • partition
    • offset
    • retry_count
  • listener.process_batch_inline is sent after processing a batch in inline_batch mode. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
    • batch_size
    • partition
    • offset_lag
    • retry_count
  • listener.retry_handler_error is sent after waiting for handler#consume retry. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
    • key
    • partition
    • offset
    • retry_count
    • waiting_time
    • exception_class
    • exception_message
    • backtrace
  • listener.retry_handler_error_batch is sent after waiting for handler#consume_batch retry. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
    • batch_size
    • partition
    • offset_lag
    • retry_count
    • waiting_time
    • exception_class
    • exception_message
    • backtrace
  • listener.retry_aborted is sent after waiting for a retry but the listener was stopped before the retry happened. It includes the following payload:
    • listener_id
    • group_id
    • topic
    • handler
  • listener.stopping is sent when the listener receives a signal to stop.
    • listener_id
    • group_id
    • topic
    • handler
  • listener.stop_handler is sent after stopping the handler.
    • listener_id
    • group_id
    • topic
    • handler
  • listener.stop is sent after stopping the listener.
    • listener_id
    • group_id
    • topic
    • handler

Plugins

List of gems that enhance Phobos:

Phobos DB Checkpoint is a drop-in replacement for Phobos::Handler, extending it with the following features:

  • Persists your Kafka events to an active record compatible database
  • Ensures that your handler will consume messages only once
  • Allows your system to quickly reprocess events in case of failures

Phobos Checkpoint UI gives your Phobos DB Checkpoint powered app a web GUI with the features below. Maintaining a Kafka consumer app has never been smoother:

  • Search events and inspect payload
  • See failures and retry / delete them

Phobos Prometheus adds prometheus metrics to your phobos consumer.

  • Measures total messages and batches processed
  • Measures total duration needed to process each message (and batch)
  • Adds a /metrics endpoint to scrape data

Development

After checking out the repo:

  • make sure docker is installed and running (for windows and mac this also includes docker-compose).
  • Linux: make sure docker-compose is installed and running.
  • run bin/setup to install dependencies
  • run docker-compose up -d --force-recreate kafka zookeeper to start the required kafka containers
  • run tests to confirm no environmental issues
    • wait a few seconds for kafka broker to get set up - sleep 30
    • run docker-compose run --rm test
    • make sure it reports X examples, 0 failures

You can also run bin/console for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.

Test

Phobos exports a spec helper that can help you test your consumer. The Phobos lifecycle will conveniently be activated for you with minimal setup required.

  • process_message(handler:, payload:, metadata: {}, encoding: nil) - Invokes your handler with payload and metadata, using a dummy listener (encoding and metadata are optional).
### spec_helper.rb
require 'phobos/test/helper'
RSpec.configure do |config|
  config.include Phobos::Test::Helper
  config.before(:each) do
    Phobos.configure(path_to_my_config_file)
  end
end 

### Spec file
describe MyConsumer do
  let(:payload) { 'foo' }
  let(:metadata) { Hash(foo: 'bar') }

  it 'consumes my message' do
    expect_any_instance_of(described_class).to receive(:around_consume).with(payload, metadata).once.and_call_original
    expect_any_instance_of(described_class).to receive(:consume).with(payload, metadata).once.and_call_original

    process_message(handler: described_class, payload: payload, metadata: metadata)
  end
end

Upgrade Notes

Version 2.0 removes deprecated ways of defining producers and consumers:

  • The before_consume method has been removed. You can have this behavior in the first part of an around_consume method.
  • around_consume is now only available as an instance method, and it must yield the values to pass to the consume method.
  • publish and async_publish now only accept keyword arguments, not positional arguments.

Example pre-2.0:

class MyHandler
  include Phobos::Handler

  def before_consume(payload, metadata)
    payload[:id] = 1
  end

  def self.around_consume(payload, metadata)
    metadata[:key] = 5
    yield
  end
end

In 2.0:

class MyHandler
  include Phobos::Handler

  def around_consume(payload, metadata)
    new_payload = payload.dup
    new_metadata = metadata.dup
    new_payload[:id] = 1
    new_metadata[:key] = 5
    yield new_payload, new_metadata
  end
end

Producer, 1.9:

  producer.publish('my-topic', { payload_value: 1 }, 5, 3, { header_val: 5 })

Producer 2.0:

  producer.publish(topic: 'my-topic', payload: { payload_value: 1 }, key: 5,
     partition_key: 3, headers: { header_val: 5 })

Version 1.8.2 introduced a new persistent_connections setting for regular producers. This reduces the number of connections used to produce messages and you should consider setting it to true. This does require a manual shutdown call - please see Producers with persistent connections.

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/klarna/phobos.

Linting

Phobos uses Rubocop to lint the code, and in addition all projects use Rubocop Rules to maintain a shared Rubocop configuration. Updates to the shared configuration are made in the phobos/shared repo, where you can also find instructions on how to apply the new settings to the Phobos projects.

Acknowledgements

Thanks to Sebastian Norde for the awesome logo!

Author: Phobos
Source Code: https://github.com/phobos/phobos 
License: Apache-2.0 license

#ruby #kafka 

Alpaca-lora: Instruct-tune LLaMA on Consumer Hardware

馃馃尣馃 Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning

Try the pretrained model out on Colab here! If you have problems with short outputs or very long outputs, please redownload the weights (force_download=True) and pull the latest version of the code.

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). We provide an Instruct model of similar quality to text-davinci-003 that can run on a Raspberry Pi (for research), and the code can be easily extended to the 13b, 30b, and 65b models.
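The reason low-rank adaptation is cheap enough for consumer hardware is that only a pair of small rank-r factors is trained per adapted weight matrix, while the full matrix stays frozen. A toy back-of-the-envelope sketch (4096 is the LLaMA-7B hidden size; rank 8 is a commonly used LoRA rank, not necessarily the one used for these weights):

```python
# Toy illustration of LoRA's parameter savings. Dimensions are illustrative:
# 4096 is the LLaMA-7B hidden size; rank 8 is a common LoRA rank.
d, r = 4096, 8
full_params = d * d          # entries in one frozen d x d weight matrix
lora_params = d * r + r * d  # entries in the trainable factors B (d x r) and A (r x d)

ratio = lora_params / full_params
print(f"trainable fraction per adapted matrix: {ratio:.4%}")
```

At rank 8, the trainable factors amount to well under half a percent of the frozen matrix they adapt, which is why the optimizer state and gradients fit in a single consumer GPU.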

In addition to the training code, which runs within five hours on a single RTX 4090, we publish a script for downloading and inference on the foundation model and LoRA, as well as the resulting LoRA weights themselves. To fine-tune cheaply and efficiently, we use Huggingface's PEFT as well as Tim Dettmers' bitsandbytes.

Without hyperparameter tuning or validation-based checkpointing, the LoRA model produces outputs comparable to the Stanford Alpaca model. (Please see the outputs included below.) Further tuning might be able to achieve better performance; I invite interested users to give it a try and report their results.

Setup

  • Install dependencies
pip install -r requirements.txt

Inference (generate.py)

This file reads the foundation model from the Huggingface model hub and the LoRA weights from tloen/alpaca-lora-7b, and runs a Gradio interface for inference on a specified input. Users should treat this as example code for the use of the model, and modify it as needed.

Training (finetune.py)

This file contains a straightforward application of PEFT to the LLaMA model, as well as some code related to prompt construction and tokenization. Near the top of this file is a set of hardcoded hyperparameters that you should feel free to modify. PRs adapting this code to multi-GPU setups and larger models are always welcome.
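The prompt construction mentioned above follows the Stanford Alpaca template, which uses one variant for records with an extra input field and one for records without. A minimal sketch (the exact code in finetune.py may differ; treat the function name and signature as illustrative):

```python
# Sketch of Alpaca-style prompt construction. The template strings follow the
# Stanford Alpaca format; the actual helper in finetune.py may differ.

def generate_prompt(instruction, input=None, output=""):
    """Build a single training prompt from one dataset record."""
    if input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input}\n\n"
            f"### Response:\n{output}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{output}"
    )

print(generate_prompt("Tell me about alpacas."))
```

During training, the prompt (with the record's output appended) is tokenized and fed to the model as a single sequence.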

Checkpoint export (export_state_dict_checkpoint.py)

This file contains a script to convert the LoRA back into a standard PyTorch model checkpoint, which should help users who want to use the model with projects like llama.cpp.
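Conceptually, folding a LoRA adapter back into the base checkpoint is just adding the scaled low-rank product to each adapted weight: W' = W + (alpha/r)·B·A. A toy numpy sketch of that arithmetic (real checkpoints use torch tensors and per-layer parameter names, which this deliberately omits):

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold the low-rank update into the base weight: W' = W + (alpha / r) * B @ A."""
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2                       # toy sizes; real LLaMA layers are far larger
W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # LoRA down-projection
B = np.zeros((d, r))              # LoRA up-projection (zero-initialized)

merged = merge_lora(W, A, B, alpha=16, r=r)
assert np.allclose(merged, W)     # B = 0 means the adapter is still a no-op
```

After merging, the result is an ordinary dense checkpoint with no runtime dependency on PEFT, which is what downstream projects like llama.cpp expect.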

To do

  •  Merge LoRA weights into LLaMA weights to remove inference dependency on PEFT
  •  Train/val split
  •  Hyperparameter tuning code
  •  Support for 13b, 30b, 65b
  •  Train a version that doesn't waste tokens on the prompt header
  •  Inference CLI and evaluation
  •  Better disclaimers about why using LLaMA without permission is very bad!

Example outputs

Instruction: Tell me about alpacas.

Alpaca-LoRA: Alpacas are members of the camelid family and are native to the Andes Mountains of South America. They are known for their soft, luxurious fleece, which is used to make clothing, blankets, and other items. Alpacas are herbivores and graze on grasses and other plants. They are social animals and live in herds of up to 20 individuals.

Stanford Alpaca: Alpacas are small, fluffy animals related to camels and llamas. They are native to Peru and Bolivia, and were first domesticated around 5,000 years ago. They are kept mainly for their fine, soft fleece, which is used to make knitwear and other garments. Alpacas are herd animals and live in small family groups, led by an older male. They are highly social animals and can form strong bonds with their owners.

text-davinci-003: Alpacas are a domesticated species of South American camelid, similar to llamas. They are native to the Andes Mountains in South America and are kept as livestock for their soft, luxurious wool. Alpacas are social animals and live in herds of up to 20 individuals. They are typically kept in small herds of two to five animals, and are relatively easy to care for. Alpacas are herbivores and feed on grass, hay, and other vegetation. They are also known for their gentle and friendly nature, making them popular as pets.


Instruction: Tell me about the president of Mexico in 2019.

Alpaca-LoRA: The president of Mexico in 2019 was Andrés Manuel López Obrador, who took office on December 1, 2018. He is a member of the National Regeneration Movement (MORENA) political party and is the first left-wing president of Mexico since 1946. He is known for his anti-corruption and anti-neoliberal policies, as well as his commitment to improving the living conditions of the Mexican people.

Stanford Alpaca: In 2019, Andres Manuel Lopez Obrador was sworn in as President of Mexico. He is the country’s 58th president and the first from the left-wing Morena party. He was elected in 2018 on a promise to combat corruption and reduce poverty and has faced criticism for his handling of the economy, crime and the country’s relationship with the United States. He is also known for his ambitious social welfare programs and has been praised for raising the minimum wage and providing aid to low-income families. He is seen as a champion of democracy and human rights in Mexico.

text-davinci-003: The president of Mexico in 2019 is Andrés Manuel López Obrador, commonly known as AMLO. He is a Mexican politician who has been the President of Mexico since December 1, 2018. He is a member of the National Regeneration Movement (MORENA) party and previously served as the Head of Government of Mexico City from 2000 to 2005. He is a left-wing populist and has promised to reduce corruption and inequality in Mexico. He has also promised to increase public spending on social programs and infrastructure projects.


Instruction: Tell me about the king of France in 2019.

Alpaca-LoRA: The King of France in 2019 is Emmanuel Macron, who was elected in May 2017. He is the youngest president in the history of the Fifth Republic and the first president to be born after World War II. He is also the first president to have never held elected office before.

Stanford Alpaca: In 2019, the King of France was Emmanuel Macron. He was sworn in as President of France on May 14, 2017, and has been leading the country ever since. He is a member of the centrist La République En Marche! Party. He is known for his reformist agenda and has faced a number of protests from both the right and left of the political spectrum. He is the youngest French president in history.

text-davinci-003: The King of France is a largely symbolic role, as France is a republic. The current President of France is Emmanuel Macron, who was elected in 2017.


Instruction: List all Canadian provinces in alphabetical order.

Alpaca-LoRA: Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, Quebec, Saskatchewan

Stanford Alpaca: Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, Quebec, Saskatchewan, Northwest Territories, Nunavut, Yukon.

text-davinci-003:

  1. Alberta
  2. British Columbia
  3. Manitoba
  4. New Brunswick
  5. Newfoundland and Labrador
  6. Northwest Territories
  7. Nova Scotia
  8. Nunavut
  9. Ontario
  10. Prince Edward Island
  11. Quebec
  12. Saskatchewan
  13. Yukon

Instruction: Write a Python program that prints the first 10 Fibonacci numbers.

Alpaca-LoRA:

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

print(fibonacci(10))

Stanford Alpaca:

numbers = [0, 1]

for i in range(2, 11):
    numbers.append(numbers[i-2] + numbers[i-1])

print(numbers)
# Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

text-davinci-003:

# Fibonacci numbers are the numbers in the following integer sequence, called the Fibonacci sequence, and characterized by the fact that every number after the first two is the sum of the two preceding ones.

# Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

# Program:

# Initialize the first two numbers of the Fibonacci sequence
a = 0
b = 1

# Print the first 10 numbers of the Fibonacci sequence
print(a)
print(b)

for i in range(2,10):
    c = a + b
    a = b
    b = c
    print(c)

Instruction: Write a program that prints the numbers from 1 to 100. But for multiples of three print 'Fizz' instead of the number and for the multiples of five print 'Buzz'. For numbers which are multiples of both three and five print 'FizzBuzz'.

Alpaca-LoRA:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Stanford Alpaca:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print('FizzBuzz')
    elif i % 3 == 0:
        print('Fizz')
    elif i % 5 == 0:
        print('Buzz')
    else:
        print(i)

text-davinci-003:

for (let i = 1; i <= 100; i++) {
  if (i % 3 === 0 && i % 5 === 0) {
    console.log("FizzBuzz");
  } else if (i % 3 === 0) {
    console.log("Fizz");
  } else if (i % 5 === 0) {
    console.log("Buzz");
  } else {
    console.log(i);
  }
}

Instruction: Tell me five words that rhyme with 'shock'.

Alpaca-LoRA: Flock, lock, rock, stock, and sock.

Stanford Alpaca: Five words that rhyme with shock are: rock, pop, shock, cook, and snock.

text-davinci-003:

  1. Block
  2. Lock
  3. Dock
  4. Clock
  5. Rock

Instruction: Translate the sentence 'I have no mouth but I must scream' into Spanish.

Alpaca-LoRA: No tengo boca pero tengo que gritar.

Stanford Alpaca: No tengo boca, pero debo gritar.

text-davinci-003: No tengo boca pero debo gritar.


Download Details:

Author: tloen
Source Code: https://github.com/tloen/alpaca-lora 
License: Apache-2.0 license

#jupyternotebook #python 


Alpaca-lora: Instruct-tune LLaMA on Consumer Hardware

馃馃尣馃 Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning

Try the pretrained model out on Colab here!

Update 2023-03-19: weights have been updated with cleaned data and prompts masked out in the loss. This should reduce the number of template artifacts in outputs.


For discussion and support, users have created a dedicated Discord server here.


Checkpoint export (export_*_checkpoint.py)

These files contain scripts that merge the LoRA weights back into the base model for export to Hugging Face format and to PyTorch state_dicts. They should help users who want to run inference in projects like llama.cpp or alpaca.cpp.

Dataset

In addition to alpaca_data.json, which contains the original Stanford Alpaca dataset, we also include alpaca_data_cleaned.json, which has been stripped of various tokenization artifacts with the help of @gururise. This file is now used by default in the training script.
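Records in both alpaca_data.json and alpaca_data_cleaned.json follow the Stanford Alpaca format, with instruction, input, and output fields. A self-contained sketch of that structure (sample records are inlined here rather than read from disk, so the values are illustrative):

```python
import json

# Records shaped like those in alpaca_data_cleaned.json (Stanford Alpaca
# format: instruction / input / output). Inlined so the sketch is runnable.
records = json.loads("""
[
  {"instruction": "Tell me about alpacas.",
   "input": "",
   "output": "Alpacas are members of the camelid family..."},
  {"instruction": "Translate the sentence into Spanish.",
   "input": "I have no mouth but I must scream",
   "output": "No tengo boca pero debo gritar."}
]
""")

# Records with and without an input field use different prompt templates
# during fine-tuning, so it is useful to split them when inspecting the data.
with_input = [r for r in records if r["input"]]
print(len(with_input), "of", len(records), "records carry an input field")
```

The cleaned file keeps the same schema; only the values were stripped of tokenization artifacts.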

@AndriyMulyar has also provided interactive, embedding-based visualizations of the original dataset's instructions and outputs, as well as clusters of bad examples.

Notes

  • We can likely improve our model performance significantly if we combed through the data and fixed bad examples; in fact, dataset quality might be our bottleneck.
  • We're continually fixing bugs and conducting training runs, and the weights on the Hugging Face Hub are being updated accordingly. In particular, those facing issues with response lengths should make sure that they have the latest version of the weights and code.



How to Shift Focus From Consumer Management to Consumer Engagement

In today’s highly competitive and collaborative market environment, the customer experience and value proposition are getting all the attention. No wonder customer engagement initiatives in the form of brand and business development are delivering exceptional results.

No wonder, too, that corporations concentrate on what individuals want to do and on how more customers can be involved.

Tips for Driving a Powerful Consumer Engagement

A good CSM follows a few methods of interaction to handle its customers and make communication more effective.

  • Design an exclusive package for loyalty or advocacy.
  • Build content that鈥檚 personalized.
  • Integrate the role of customers.
  • Leverage social media.

There are countless ways of gaining consumer engagement and generating value for your business. The best ones are always the tactics that bring out the ‘wow’ element, be it through social media, advocacy, or loyalty programs.

Consumer engagement strategies should spur you into action. For more tactics to improve future performance, stay in touch.

See more: Consumer Management to Consumer Engagement

#consumer #management #engagement #strategies

Alpaca-LoRA-Serve: Alpaca-LoRA as Chatbot service

Alpaca-LoRA as a Chatbot Service

This repository demonstrates Alpaca-LoRA as a chatbot service, built with Alpaca-LoRA and Gradio.

The easiest way to run this project is to use Colab. Just open the alpaca_lora_in_colab notebook in Colab (there is an "open in colab" button) and run every cell sequentially. With the standard GPU instance (T4), you can run the 7B and 13B models. With the premium GPU instance (A100 40GB), you can even run the 30B model! Screenshot 👇🏻

Modes

1. Batch Generation Mode: batch generation mode aggregates requests up to batch_size and passes the prompts from those requests to the model together. It waits until the current requests are fully handled before starting the next batch. For instance, with batch_size=4, if a user sends a request, that request goes into processing; while it is being processed, up to 4 requests from newly connected users are aggregated and processed as soon as the current one is done.

2. Streaming Mode: streaming mode handles multiple requests in an interleaved way with threads. For instance, if two users (A and B) are connected, A's request is handled, then B's request, then A's request again, and so on. This is a consequence of streaming mode generating and yielding tokens one by one.
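The batch-aggregation behavior can be pictured with a tiny queue-based sketch (the function and variable names here are illustrative, not the repository's actual API):

```python
import queue

def drain_batch(pending, batch_size):
    """Collect up to batch_size waiting prompts without blocking."""
    batch = []
    while len(batch) < batch_size:
        try:
            batch.append(pending.get_nowait())
        except queue.Empty:
            break
    return batch

pending = queue.Queue()
for prompt in ["p1", "p2", "p3", "p4", "p5"]:
    pending.put(prompt)

# With batch_size=4, the first round serves four prompts; the fifth request
# waits until the current batch is fully handled, as described above.
first = drain_batch(pending, batch_size=4)
second = drain_batch(pending, batch_size=4)
assert first == ["p1", "p2", "p3", "p4"]
assert second == ["p5"]
```

Streaming mode differs in that each connection gets its own thread and tokens are yielded to all active connections in an interleaved fashion rather than batch by batch.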

Context management

  • Alpaca-LoRA as a Chatbot Service manages context in two ways. First of all, it remembers (stores) the entire conversation history by default, as in the following code snippet. context_string is set to "Below is a history of instructions that describe tasks, paired with an input that provides further context. Write a response that appropriately completes the request by remembering the conversation history." by default, but it can be set manually via the Context field at the top of the screen.
    • Additionally, there is a Summarize button in the middle (you need to expand the component labeled "Helper Buttons"). Clicking it automatically inputs "summarize our conversations so far in three sentences." as a prompt, and the resulting generated text is inserted into the Context field. All of the conversation history up to that point is then ignored: the conversation restarts fresh with the code snippet below, except that context_string is filled with the model-generated summary.
f"""{context_string}

### Input: {input} # Surrounding information to AI

### Instruction: {prompt1} # First instruction/prompt given by user

### Response: {response1} # First response on the first prompt by AI

### Instruction: {prompt2} # Second instruction/prompt given by user

### Response: {response2} # Second response on the second prompt by AI
....
"""

Misc.

  • There is a continue button in the middle of the screen. It simply sends a "continue." prompt to the model. This is useful when the model's previous response is incomplete; with "continue.", the model tries to complete the response. Also, since this is a continuation of the response, the "continue." prompt is hidden to keep the chat history natural.

Currently supported LoRA checkpoints

Instructions

  • Prerequisites

Note that the code only works with Python >= 3.9.

$ conda create -n alpaca-serve python=3.9
$ conda activate alpaca-serve
  • Install dependencies
$ cd Alpaca-LoRA-Serve
$ pip install -r requirements.txt
  • Run Gradio application
$ BASE_URL=decapoda-research/llama-7b-hf
$ FINETUNED_CKPT_URL=tloen/alpaca-lora-7b

$ python app.py --base_url $BASE_URL --ft_ckpt_url $FINETUNED_CKPT_URL --port 6006

The following flags are supported:

usage: app.py [-h] [--base_url BASE_URL] [--ft_ckpt_url FT_CKPT_URL] [--port PORT] [--batch_size BATCH_SIZE]
              [--api_open API_OPEN] [--share SHARE] [--gen_config_path GEN_CONFIG_PATH]

Gradio Application for Alpaca-LoRA as a chatbot service

optional arguments:
  -h, --help            show this help message and exit
  --base_url BASE_URL   Hugging Face Hub url
  --ft_ckpt_url FT_CKPT_URL
                        Hugging Face Hub url
  --port PORT           port to serve app
  --batch_size BATCH_SIZE
                        how many requests to handle at the same time
                        default is set to 1 which enables streaming mode
  --api_open API_OPEN   do you want to open as API
  --share SHARE         do you want to share temporarily
  --gen_config_path GEN_CONFIG_PATH
                        which config to use for GenerationConfig

Design figure

Acknowledgements

I am thankful to Jarvislabs.ai, who generously provided free GPU resources to experiment with Alpaca-LoRA deployment and to share it with the community to try out.


Download Details:

Author: Deep-diver
Source Code: https://github.com/deep-diver/Alpaca-LoRA-Serve 
License: Apache-2.0 license

#python #chatbot #service