Kafka: Perl Implementation Of Kafka API (official CPAN Module)

NAME

Kafka - Apache Kafka low-level synchronous API, which does not use Zookeeper.

VERSION

This documentation refers to Kafka package version 1.08.

SYNOPSIS

use 5.010;
use strict;
use warnings;

use Scalar::Util qw(
    blessed
);
use Try::Tiny;

use Kafka qw(
    $BITS64
);
use Kafka::Connection;
use Kafka::Producer;
use Kafka::Consumer;

# A simple example of Kafka usage

# common information
say 'This is Kafka package ', $Kafka::VERSION;
say 'You have a ', $BITS64 ? '64' : '32', ' bit system';

my ( $connection, $producer, $consumer );
try {

    #-- Connect to local cluster
    $connection = Kafka::Connection->new( host => 'localhost' );
    #-- Producer
    $producer = Kafka::Producer->new( Connection => $connection );
    #-- Consumer
    $consumer = Kafka::Consumer->new( Connection  => $connection );

} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        warn 'Error: (', $error->code, ') ',  $error->message, "\n";
        exit;
    } else {
        die $error;
    }
};

# cleaning up
undef $consumer;
undef $producer;
$connection->close;
undef $connection;

# another brief code example of the Kafka package
# is provided in the "An Example" section.

ABSTRACT

The Kafka package is a set of Perl modules which provides a simple and consistent application programming interface (API) to Apache Kafka 0.9+, a high-throughput distributed messaging system.

DESCRIPTION

The user modules in this package provide an object oriented API. The IO agents, requests sent, and responses received from the Apache Kafka or mock servers are all represented by objects. This provides a simple and powerful interface to these services.

The main features of the package are:

  • Contains various reusable components (modules) that can be used separately or together.
  • Provides an object oriented model of communication.
  • Supports parsing the Apache Kafka protocol.
  • Supports the Apache Kafka Requests and Responses. The following parts of Kafka's protocol are implemented within this package: PRODUCE, FETCH, OFFSETS, and METADATA.
  • Simple producer and consumer clients.
  • A simple interface to control the test Kafka server cluster (in the test directory).
  • Simple mock server instance (located in the test directory) for testing without Apache Kafka server.
  • Support for working with 64 bit elements of the Kafka protocol on 32 bit systems.
  • Taint mode support. Input data is not checked for taintedness; returned data is untainted.

APACHE KAFKA'S STYLE COMMUNICATION

The Kafka package is based on Kafka's 0.9+ Protocol specification document at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

The Kafka protocol is based on a request/response paradigm. A client establishes a connection with a server and sends a request to the server in the form of a request method, followed by a message containing request modifiers. The server responds with a success or error code, followed by a message containing entity meta-information and content.

Messages are the fundamental unit of communication. They are published to a topic by a producer, which means they are physically sent to a server acting as a broker. Some number of consumers subscribe to a topic, and each published message is delivered to all the consumers. The message stream is partitioned on the brokers as a set of distinct partitions. The semantic meaning of these partitions is left up to the producer, and the producer specifies which partition a message belongs to. Within a partition the messages are stored in the order in which they arrive at the broker, and will be given out to consumers in that same order. In Apache Kafka, the consumers are responsible for maintaining state information (the offset) on what has been consumed. A consumer can deliberately rewind to an old offset and re-consume data. Each message is uniquely identified by a 64-bit integer offset giving the position of the start of this message in the stream of all messages ever sent to that topic on that partition. Reads are done by giving the 64-bit logical offset of a message and a max chunk size.

The request is then passed through the client to a server and we get the response in return to a consumer request that we can examine. A request is always independent of any previous requests, i.e. the service is stateless. This API is completely stateless, with the topic and partition being passed in on every request.

The Connection Object

Clients use the Connection object to communicate with the Apache Kafka cluster. The Connection object is an interface layer between your application code and the Apache Kafka cluster.

A Connection object is required to create instances of the Kafka::Producer and Kafka::Consumer classes.

Kafka Connection API is implemented by Kafka::Connection class.

use Kafka::Connection;

# connect to local cluster with the defaults
my $connection = Kafka::Connection->new( host => 'localhost' );

The main attributes of the Connection object are:

  • host and port are the IO object attributes denoting any server from the Kafka cluster a client wants to connect to.
  • timeout specifies how much time a remote server is given to respond before the connection is dropped and an internal exception is thrown. Both attributes are illustrated in the sketch below.
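
A minimal sketch of setting these attributes explicitly, using the default constants exported by the Kafka module (described under "Additional constants" below):

use Kafka qw(
    $KAFKA_SERVER_PORT
    $REQUEST_TIMEOUT
);
use Kafka::Connection;

# connect to a cluster node with an explicit port and timeout
my $connection = Kafka::Connection->new(
    host    => 'localhost',
    port    => $KAFKA_SERVER_PORT,    # 9092 by default
    timeout => $REQUEST_TIMEOUT,      # 1.5 s by default
);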

The IO Object

The Kafka::Connection object uses the internal class Kafka::IO to maintain communication with a particular server of the Kafka cluster. The IO object is an interface layer between the Kafka::Connection object and the network.

Kafka IO API is implemented by the Kafka::IO class. Note that end users normally have no need to use Kafka::IO directly and should work with Kafka::Connection instead.

use Kafka::IO;

# connect to local server with the defaults
my $io = Kafka::IO->new( host => 'localhost' );

The main attributes of the IO object are:

  • host and port are the IO object attributes denoting the host name and port of the Apache Kafka server.
  • timeout specifies how much time a remote server is given to respond before the IO object disconnects and throws an internal exception.

The Producer Object

Kafka producer API is implemented by Kafka::Producer class.

use Kafka::Producer;

#-- Producer
my $producer = Kafka::Producer->new( Connection => $connection );

# Sending a single message
$producer->send(
    'mytopic',          # topic
    0,                  # partition
    'Single message'    # message
);

# Sending a series of messages
$producer->send(
    'mytopic',          # topic
    0,                  # partition
    [                   # messages
        'The first message',
        'The second message',
        'The third message',
    ]
);

The main methods and attributes of the producer request are:

  • The request method of the producer object is send().
  • topic and partition define respective parameters of the messages we want to send.
  • messages is an arbitrary amount of data (a single data string or a reference to an array of data strings).

The Consumer Object

Kafka consumer API is implemented by Kafka::Consumer class.

use Kafka::Consumer;

$consumer = Kafka::Consumer->new( Connection => $connection );

The request methods of the consumer object are offsets() and fetch().

The offsets method returns a reference to a list of offsets of received messages.

The fetch method returns a reference to a list of received Kafka::Message objects.

use Kafka qw(
    $DEFAULT_MAX_BYTES
    $DEFAULT_MAX_NUMBER_OF_OFFSETS
    $RECEIVE_EARLIEST_OFFSET
);

# Get a list of valid offsets up to max_number before the given time
my $offsets = $consumer->offsets(
    'mytopic',                      # topic
    0,                              # partition
    $RECEIVE_EARLIEST_OFFSET,      # time
    $DEFAULT_MAX_NUMBER_OF_OFFSETS  # max_number
);
say "Received offset: $_" foreach @$offsets;

# Consuming messages
my $messages = $consumer->fetch(
    'mytopic',                      # topic
    0,                              # partition
    0,                              # offset
    $DEFAULT_MAX_BYTES              # Maximum size of MESSAGE(s) to receive
);
foreach my $message ( @$messages ) {
    if ( $message->valid ) {
        say 'payload    : ', $message->payload;
        say 'key        : ', $message->key;
        say 'offset     : ', $message->offset;
        say 'next_offset: ', $message->next_offset;
    } else {
        say 'error      : ', $message->error;
    }
}

See Kafka::Consumer for additional information and documentation about class methods and arguments.

The Message Object

Kafka message API is implemented by Kafka::Message class.

if ( $message->valid ) {
    say 'payload    : ', $message->payload;
    say 'key        : ', $message->key;
    say 'offset     : ', $message->offset;
    say 'next_offset: ', $message->next_offset;
} else {
    say 'error      : ', $message->error;
}

Methods available for Kafka::Message object :

  • payload A simple message received from the Apache Kafka server.
  • key An optional message key that was used for partition assignment.
  • valid Indicates whether the message entry is valid.
  • error A description of the message inconsistency.
  • offset The offset of the beginning of the message in the Apache Kafka server.
  • next_offset The offset of the beginning of the next message in the Apache Kafka server.

The Exception Object

A designated class, Kafka::Exception, is used to provide more detailed and structured information when an error is detected.

The following attributes are declared within Kafka::Exception: code, message.

Additional subclasses of Kafka::Exception are designed to report errors in the respective Kafka classes: Kafka::Exception::Connection, Kafka::Exception::Consumer, Kafka::Exception::IO, Kafka::Exception::Int64, Kafka::Exception::Producer.

The authors suggest using Try::Tiny's try and catch to handle exceptions while working with the Kafka module.
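
For example, connection-level failures can be distinguished from other Kafka errors by checking the exception subclass. A minimal sketch, assuming $producer was created as shown in the SYNOPSIS:

use Scalar::Util qw( blessed );
use Try::Tiny;

try {
    $producer->send( 'mytopic', 0, 'Single message' );
} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception::Connection' ) ) {
        warn 'Connection problem: (', $error->code, ') ', $error->message, "\n";
    } elsif ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        warn 'Kafka error: (', $error->code, ') ', $error->message, "\n";
    } else {
        die $error;    # not a Kafka exception - re-throw
    }
};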

EXPORT

None by default.

Additional constants

Additional constants are available for import; they can be used to set certain request parameters and to identify various error cases.

$KAFKA_SERVER_PORT

9092 - the default Apache Kafka server port.

$REQUEST_TIMEOUT

1.5 sec - timeout in seconds for gethostbyname, connect, and blocking receive and send calls (may be any integer or floating-point value).

$DEFAULT_MAX_BYTES

1MB - maximum size of message(s) to receive.

$SEND_MAX_ATTEMPTS

4 - The leader may be transiently unavailable, which can cause sending a message to fail. This property specifies the number of attempts to send a message.

Do not use $Kafka::SEND_MAX_ATTEMPTS in a Kafka::Producer->send request to prevent duplicates.

$RETRY_BACKOFF

200 - retry backoff time, in milliseconds.

According to Apache Kafka documentation:

Producer Configs - Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.

Consumer Configs - Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.
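
Both values are intended to be supplied when the connection is created. A minimal sketch; the SEND_MAX_ATTEMPTS and RETRY_BACKOFF argument names are assumptions here - consult Kafka::Connection for the authoritative list of constructor arguments:

use Kafka qw(
    $SEND_MAX_ATTEMPTS
    $RETRY_BACKOFF
);
use Kafka::Connection;

# assumption: SEND_MAX_ATTEMPTS and RETRY_BACKOFF are optional
# Kafka::Connection->new arguments (see Kafka::Connection)
my $connection = Kafka::Connection->new(
    host              => 'localhost',
    SEND_MAX_ATTEMPTS => $SEND_MAX_ATTEMPTS,    # 4 attempts by default
    RETRY_BACKOFF     => $RETRY_BACKOFF,        # 200 ms by default
);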

$RECEIVE_LATEST_OFFSET

DEPRECATED: please use $RECEIVE_LATEST_OFFSETS, as when using this constant to retrieve offsets, you can get more than one. It's kept for backward compatibility.

-1 : special value that denotes latest available offset.

$RECEIVE_LATEST_OFFSETS

-1 : special value that denotes latest available offsets.

$RECEIVE_EARLIEST_OFFSET

-2 : special value that denotes earliest available offset.

$RECEIVE_EARLIEST_OFFSETS

DEPRECATED: please use $RECEIVE_EARLIEST_OFFSET, as when using this constant to retrieve offset, you can get only one. It's kept for backward compatibility.

-2 : special value that denotes earliest available offset.

$DEFAULT_MAX_NUMBER_OF_OFFSETS

100 - maximum number of offsets to retrieve.

$MIN_BYTES_RESPOND_IMMEDIATELY

The minimum number of bytes of messages that must be available to give a response.

0 - the server will always respond immediately.

$MIN_BYTES_RESPOND_HAS_DATA

The minimum number of bytes of messages that must be available to give a response.

10 - the server will respond as soon as at least one partition has at least 10 bytes of data (Offset => int64 + MessageSize => int32) or the specified timeout occurs.

$NOT_SEND_ANY_RESPONSE

Indicates how many acknowledgements the servers should receive before responding to the request.

0 - the server does not send any response.

$WAIT_WRITTEN_TO_LOCAL_LOG

Indicates how long the servers should wait for the data to be written to the local log before responding to the request.

1 - the server will wait until the data is written to the local log before sending a response.

$BLOCK_UNTIL_IS_COMMITTED

Wait for message to be committed by all sync replicas.

-1 - the server will block until the message is committed by all in sync replicas before sending a response.
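
The acknowledgement level is typically chosen when the producer is constructed. A minimal sketch; the RequiredAcks argument name is an assumption here - consult Kafka::Producer for the authoritative constructor arguments:

use Kafka qw( $BLOCK_UNTIL_IS_COMMITTED );
use Kafka::Producer;

# assumption: RequiredAcks is an optional Kafka::Producer->new argument
my $producer = Kafka::Producer->new(
    Connection   => $connection,
    RequiredAcks => $BLOCK_UNTIL_IS_COMMITTED,    # wait for all in-sync replicas
);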

$DEFAULT_MAX_WAIT_TIME

The maximum amount of time (seconds, may be fractional) to wait when no sufficient amount of data is available at the time the request is dispatched.

0.1 - allow the server to wait up to 0.1s to try to accumulate data before responding.
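
These fetch-tuning values are typically supplied when the consumer is constructed. A minimal sketch; the MaxWaitTime and MinBytes argument names are assumptions here - consult Kafka::Consumer for the authoritative constructor arguments:

use Kafka qw(
    $DEFAULT_MAX_WAIT_TIME
    $MIN_BYTES_RESPOND_HAS_DATA
);
use Kafka::Consumer;

# assumption: MaxWaitTime and MinBytes are optional Kafka::Consumer->new arguments
my $consumer = Kafka::Consumer->new(
    Connection  => $connection,
    MaxWaitTime => $DEFAULT_MAX_WAIT_TIME,         # wait up to 0.1 s for data
    MinBytes    => $MIN_BYTES_RESPOND_HAS_DATA,    # respond once 10 bytes are ready
);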

$MESSAGE_SIZE_OVERHEAD

34 - size of protocol overhead (data added by protocol) for each message.

IP version

Specifies the IP protocol version for resolving IP addresses and host names.

$IP_V4

Interpret the address as IPv4 and force resolution of host names to IPv4 addresses.

$IP_V6

Interpret the address as IPv6 and force resolution of host names to IPv6 addresses.
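
A minimal sketch of forcing a particular IP version; the ip_version argument name is an assumption here - consult Kafka::Connection (and Kafka::IO) for the authoritative argument name:

use Kafka qw( $IP_V4 );
use Kafka::Connection;

# assumption: ip_version is an optional Kafka::Connection->new argument
my $connection = Kafka::Connection->new(
    host       => 'localhost',
    ip_version => $IP_V4,    # resolve host names as IPv4 only
);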

Compression

According to Apache Kafka documentation:

Kafka currently supports three compression codecs with the following codec numbers:

$COMPRESSION_NONE

None = 0

$COMPRESSION_GZIP

GZIP = 1

$COMPRESSION_SNAPPY

Snappy = 2

$COMPRESSION_LZ4

LZ4 = 3 (this module supports LZ4 only with Kafka 0.10 or higher, as the initial implementation of LZ4 in Kafka did not follow the standard LZ4 framing specification).
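
A compression codec is chosen per send. A minimal sketch; the trailing key and compression-codec arguments to send() are assumptions here - consult Kafka::Producer for the authoritative signature:

use Kafka qw( $COMPRESSION_GZIP );

# assumption: send() accepts an optional key and compression codec
# after topic, partition, and messages (see Kafka::Producer)
$producer->send(
    'mytopic',              # topic
    0,                      # partition
    'Compressed message',   # message
    undef,                  # key (none)
    $COMPRESSION_GZIP,      # compression codec
);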

Error codes

Possible error codes (corresponds to descriptions in %ERROR):

$ERROR_MISMATCH_ARGUMENT

-1000 - Invalid argument

$ERROR_CANNOT_SEND

-1001 - Cannot send

$ERROR_SEND_NO_ACK

-1002 - No acknowledgement for sent request

$ERROR_CANNOT_RECV

-1003 - Cannot receive

$ERROR_CANNOT_BIND

-1004 - Cannot connect to broker

$ERROR_METADATA_ATTRIBUTES

-1005 - Unknown metadata attributes

$ERROR_UNKNOWN_APIKEY

-1006 - Unknown ApiKey

$ERROR_CANNOT_GET_METADATA

-1007 - Cannot get Metadata

$ERROR_LEADER_NOT_FOUND

-1008 - Leader not found

$ERROR_MISMATCH_CORRELATIONID

-1009 - Mismatch CorrelationId

$ERROR_NO_KNOWN_BROKERS

-1010 - There are no known brokers

$ERROR_REQUEST_OR_RESPONSE

-1011 - Bad request or response element

$ERROR_TOPIC_DOES_NOT_MATCH

-1012 - Topic does not match the requested

$ERROR_PARTITION_DOES_NOT_MATCH

-1013 - Partition does not match the requested

$ERROR_NOT_BINARY_STRING

-1014 - Unicode data is not allowed

$ERROR_COMPRESSION

-1015 - Compression error

$ERROR_RESPONSEMESSAGE_NOT_RECEIVED

-1016 - 'ResponseMessage' not received

$ERROR_INCOMPATIBLE_HOST_IP_VERSION

-1017 - Incompatible host name and IP version

$ERROR_NO_CONNECTION

-1018 - No IO connection

$ERROR_GROUP_COORDINATOR_NOT_FOUND

-1019 - Group Coordinator not found

The following entries describe the possible error codes obtained via the ERROR_CODE field of an Apache Kafka Wire Format protocol response.

$ERROR_NO_ERROR

0 - q{}

No error - it worked!

$ERROR_UNKNOWN

-1 - An unexpected server error.

$ERROR_OFFSET_OUT_OF_RANGE

1 - The requested offset is not within the range of offsets maintained by the server.

$ERROR_INVALID_MESSAGE

2 - This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.

Synonym name $ERROR_CORRUPT_MESSAGE .

$ERROR_UNKNOWN_TOPIC_OR_PARTITION

3 - This server does not host this topic-partition.

$ERROR_INVALID_FETCH_SIZE

4 - The requested fetch size is invalid.

Synonym name $ERROR_INVALID_MESSAGE_SIZE .

$ERROR_LEADER_NOT_AVAILABLE

5 - Unable to write due to ongoing Kafka leader selection.

This error is thrown if we are in the middle of a leadership election and there is no current leader for this partition, hence it is unavailable for writes.

$ERROR_NOT_LEADER_FOR_PARTITION

6 - Server is not a leader for partition.

This error is thrown if the client attempts to send messages to a replica that is not the leader for some partition. It indicates that the client's metadata is out of date.

$ERROR_REQUEST_TIMED_OUT

7 - Request time-out.

This error is thrown if the request exceeds the user-specified time limit in the request.

$ERROR_BROKER_NOT_AVAILABLE

8 - Broker is not available.

This is not a client facing error and is used mostly by tools when a broker is not alive.

$ERROR_REPLICA_NOT_AVAILABLE

9 - The replica is not available for the requested topic-partition.

If replica is expected on a broker, but is not (this can be safely ignored).

$ERROR_MESSAGE_TOO_LARGE

10 - The request included a message larger than the max message size the server will accept.

The server has a configurable maximum message size to avoid unbounded memory allocation. This error is thrown if the client attempts to produce a message larger than this maximum.

Synonym name $ERROR_MESSAGE_SIZE_TOO_LARGE .

$ERROR_STALE_CONTROLLER_EPOCH

11 - The controller moved to another broker.

According to Apache Kafka documentation: Internal error code for broker-to-broker communication.

Synonym name $ERROR_STALE_CONTROLLER_EPOCH_CODE .

$ERROR_OFFSET_METADATA_TOO_LARGE

12 - The specified metadata offset is too big.

Thrown if you specify a value larger than the configured maximum for offset metadata.

Synonym name $ERROR_OFFSET_METADATA_TOO_LARGE_CODE .

$ERROR_NETWORK_EXCEPTION

13 - The server disconnected before a response was received.

$ERROR_GROUP_LOAD_IN_PROGRESS

14 - The coordinator is loading and hence can't process requests for this group.

Synonym name $ERROR_GROUP_LOAD_IN_PROGRESS_CODE, $ERROR_LOAD_IN_PROGRESS_CODE .

$ERROR_GROUP_COORDINATOR_NOT_AVAILABLE

15 - The group coordinator is not available.

Synonym name $ERROR_GROUP_COORDINATOR_NOT_AVAILABLE_CODE, $ERROR_CONSUMER_COORDINATOR_NOT_AVAILABLE_CODE .

$ERROR_NOT_COORDINATOR_FOR_GROUP

16 - This is not the correct coordinator for this group.

Synonym name $ERROR_NOT_COORDINATOR_FOR_GROUP_CODE, $ERROR_NOT_COORDINATOR_FOR_CONSUMER_CODE .

$ERROR_INVALID_TOPIC_EXCEPTION

17 - The request attempted to perform an operation on an invalid topic.

Synonym name $ERROR_INVALID_TOPIC_CODE .

$ERROR_RECORD_LIST_TOO_LARGE

18 - The request included message batch larger than the configured segment size on the server.

Synonym name $ERROR_RECORD_LIST_TOO_LARGE_CODE .

$ERROR_NOT_ENOUGH_REPLICAS

19 - Messages are rejected since there are fewer in-sync replicas than required.

Synonym name $ERROR_NOT_ENOUGH_REPLICAS_CODE .

$ERROR_NOT_ENOUGH_REPLICAS_AFTER_APPEND

20 - Messages are written to the log, but to fewer in-sync replicas than required.

Synonym name $ERROR_NOT_ENOUGH_REPLICAS_AFTER_APPEND_CODE .

$ERROR_INVALID_REQUIRED_ACKS

21 - Produce request specified an invalid value for required acks.

Synonym name $ERROR_INVALID_REQUIRED_ACKS_CODE .

$ERROR_ILLEGAL_GENERATION

22 - Specified group generation id is not valid.

Synonym name $ERROR_ILLEGAL_GENERATION_CODE .

$ERROR_INCONSISTENT_GROUP_PROTOCOL

23 - The group member's supported protocols are incompatible with those of existing members.

Synonym name $ERROR_INCONSISTENT_GROUP_PROTOCOL_CODE .

$ERROR_INVALID_GROUP_ID

24 - The configured groupId is invalid.

Synonym name $ERROR_INVALID_GROUP_ID_CODE .

$ERROR_UNKNOWN_MEMBER_ID

25 - The coordinator is not aware of this member.

Synonym name $ERROR_UNKNOWN_MEMBER_ID_CODE .

$ERROR_INVALID_SESSION_TIMEOUT

26 - The session timeout is not within the range allowed by the broker (as configured by group.min.session.timeout.ms and group.max.session.timeout.ms).

Synonym name $ERROR_INVALID_SESSION_TIMEOUT_CODE .

$ERROR_REBALANCE_IN_PROGRESS

27 - The group is rebalancing, so a rejoin is needed.

Synonym name $ERROR_REBALANCE_IN_PROGRESS_CODE .

$ERROR_INVALID_COMMIT_OFFSET_SIZE

28 - The committing offset data size is not valid.

Synonym name $ERROR_INVALID_COMMIT_OFFSET_SIZE_CODE .

$ERROR_TOPIC_AUTHORIZATION_FAILED

29 - Not authorized to access topics: [Topic authorization failed.].

Synonym name $ERROR_TOPIC_AUTHORIZATION_FAILED_CODE .

$ERROR_GROUP_AUTHORIZATION_FAILED

30 - Not authorized to access group: Group authorization failed.

Synonym name $ERROR_GROUP_AUTHORIZATION_FAILED_CODE .

$ERROR_CLUSTER_AUTHORIZATION_FAILED

31 - Cluster authorization failed.

Synonym name $ERROR_CLUSTER_AUTHORIZATION_FAILED_CODE .

$ERROR_INVALID_TIMESTAMP

32 - The timestamp of the message is out of acceptable range.

$ERROR_UNSUPPORTED_SASL_MECHANISM

33 - The broker does not support the requested SASL mechanism.

$ERROR_ILLEGAL_SASL_STATE

34 - Request is not valid given the current SASL state.

$ERROR_UNSUPPORTED_VERSION

35 - The version of API is not supported.

%ERROR

Contains the descriptions for possible error codes.
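
For example, %ERROR can map a numeric code to its human-readable description. A minimal sketch, assuming %ERROR is importable in the same way as the error-code constants:

use 5.010;
use Kafka qw(
    %ERROR
    $ERROR_CANNOT_SEND
);

# assumption: %ERROR is importable like the error-code constants
say "Error $ERROR_CANNOT_SEND means: $ERROR{ $ERROR_CANNOT_SEND }";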

BITS64

Indicates whether you are working on a 64-bit or a 32-bit system.

An Example

use 5.010;
use strict;
use warnings;

use Scalar::Util qw(
    blessed
);
use Try::Tiny;

use Kafka qw(
    $KAFKA_SERVER_PORT
    $REQUEST_TIMEOUT
    $RECEIVE_EARLIEST_OFFSET
    $DEFAULT_MAX_NUMBER_OF_OFFSETS
    $DEFAULT_MAX_BYTES
);
use Kafka::Connection;
use Kafka::Producer;
use Kafka::Consumer;

my ( $connection, $producer, $consumer );
try {

    #-- Connection
    $connection = Kafka::Connection->new( host => 'localhost' );

    #-- Producer
    $producer = Kafka::Producer->new( Connection => $connection );

    # Sending a single message
    $producer->send(
        'mytopic',                      # topic
        0,                              # partition
        'Single message'                # message
    );

    # Sending a series of messages
    $producer->send(
        'mytopic',                      # topic
        0,                              # partition
        [                               # messages
            'The first message',
            'The second message',
            'The third message',
        ]
    );

    #-- Consumer
    $consumer = Kafka::Consumer->new( Connection => $connection );

    # Get a list of valid offsets up to max_number before the given time
    my $offsets = $consumer->offsets(
        'mytopic',                      # topic
        0,                              # partition
        $RECEIVE_EARLIEST_OFFSET,      # time
        $DEFAULT_MAX_NUMBER_OF_OFFSETS  # max_number
    );

    if ( @$offsets ) {
        say "Received offset: $_" foreach @$offsets;
    } else {
        warn "Error: Offsets are not received\n";
    }

    # Consuming messages
    my $messages = $consumer->fetch(
        'mytopic',                      # topic
        0,                              # partition
        0,                              # offset
        $DEFAULT_MAX_BYTES              # Maximum size of MESSAGE(s) to receive
    );

    if ( $messages ) {
        foreach my $message ( @$messages ) {
            if ( $message->valid ) {
                say 'payload    : ', $message->payload;
                say 'key        : ', $message->key;
                say 'offset     : ', $message->offset;
                say 'next_offset: ', $message->next_offset;
            } else {
                say 'error      : ', $message->error;
            }
        }
    }

} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        warn 'Error: (', $error->code, ') ',  $error->message, "\n";
        exit;
    } else {
        die $error;
    }
};

# Closes and cleans up
undef $consumer;
undef $producer;
$connection->close;
undef $connection;

DEPENDENCIES

In order to install and use this package you will need Perl version 5.10 or later. Some modules within this package depend on other packages that are distributed separately from Perl. We recommend that you have the following packages installed before you install Kafka:

Compress::Snappy
Compress::LZ4Frame
Const::Fast
Data::Compare
Data::HexDump::Range
Data::Validate::Domain
Data::Validate::IP
Exception::Class
List::Utils
Params::Util
Scalar::Util::Numeric
String::CRC32
Sys::SigAction
Try::Tiny

Kafka package has the following optional dependencies:

Capture::Tiny
Clone
Config::IniFiles
File::HomeDir
Proc::Daemon
Proc::ProcessTable
Sub::Install
Test::Deep
Test::Exception
Test::NoWarnings
Test::TCP

If the optional modules are missing, some "prereq" tests are skipped.

DIAGNOSTICS

Debug output can be enabled by setting level via one of the following environment variables:

PERL_KAFKA_DEBUG=1 - debug is enabled for the whole Kafka package.

PERL_KAFKA_DEBUG=IO:1 - enable debug for Kafka::IO only.

PERL_KAFKA_DEBUG=Connection:1 - enable debug for Kafka::Connection only.

It's possible to set different debug levels, like in the following example:

PERL_KAFKA_DEBUG=Connection:1,IO:2

See documentation for a particular module for explanation of various debug levels.

BUGS AND LIMITATIONS

Connection constructor:

Make sure that you always connect to brokers using EXACTLY the same address or host name as specified in the broker configuration (host.name in server.properties). Avoid relying on the default value (when host.name is commented out) in server.properties; always use an explicit value instead.

Producer and Consumer methods only work with one topic and one partition at a time. Also, the module does not implement the Offset Commit/Fetch API.

String arguments to Producer, Consumer, and Connection must be binary strings. Using Unicode strings may cause an error or data corruption.

This module does not support Kafka protocol versions earlier than 0.8.

Kafka::IO->new uses Sys::SigAction and alarm() to limit some internal operations. This means that if an external alarm() was set, signal delivery may be delayed.

With a non-empty timeout, alarm() is used internally in Kafka::IO, and an existing alarm() is preserved if possible. However, if Time::HiRes::ualarm() is set before calling Kafka modules, its behaviour is unspecified (it could be reset or preserved, etc.).

For gethostbyname operations the non-empty timeout is rounded to the nearest greater positive integer; any timeouts less than 1 second are rounded to 1 second.

You can disable the use of alarm() by setting timeout => undef in the constructor.
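
A minimal sketch of disabling alarm() as described above, assuming the Connection constructor is meant:

use Kafka::Connection;

# per the note above: timeout => undef disables the internal alarm();
# blocking calls may then wait indefinitely
my $connection = Kafka::Connection->new(
    host    => 'localhost',
    timeout => undef,
);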

The Kafka package was written, tested, and found working on recent Linux distributions.

There are no known bugs in this package.

Please report problems to the "AUTHOR".

Patches are welcome.

MORE DOCUMENTATION

All modules contain detailed information on the interfaces they provide.

SEE ALSO

The basic operation of the Kafka package modules:

Kafka - constants and messages used by the Kafka package modules.

Kafka::Connection - interface to connect to a Kafka cluster.

Kafka::Producer - interface for producing client.

Kafka::Consumer - interface for consuming client.

Kafka::Message - interface to access Kafka message properties.

Kafka::Int64 - functions to work with 64 bit elements of the protocol on 32 bit systems.

Kafka::Protocol - functions to process messages in the Apache Kafka's Protocol.

Kafka::IO - low-level interface for communication with Kafka server.

Kafka::Exceptions - module designated to handle Kafka exceptions.

Kafka::Internals - internal constants and functions used by several package modules.

A wealth of detail about the Apache Kafka and the Kafka Protocol:

Main page at http://kafka.apache.org/

Kafka Protocol at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

SOURCE CODE

Kafka package is hosted on GitHub: https://github.com/TrackingSoft/Kafka

AUTHOR

Sergey Gladkov

Please use GitHub project link above to report problems or contact authors.

CONTRIBUTORS

Alexander Solovey

Jeremy Jordan

Sergiy Zuban

Nikolay Shulyakovskiy

Vlad Marchenko

Damien Krotkine

Greg Franklin

COPYRIGHT AND LICENSE

Copyright (C) 2012-2017 by TrackingSoft LLC.

This package is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic at http://dev.perl.org/licenses/artistic.html.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Download Details:

Author: TrackingSoft
Source Code: https://github.com/TrackingSoft/Kafka

License: View license

#perl #kafka 

What is GEEK

Buddha Community

Kafka: Perl Implementation Of Kafka API (official CPAN Module)

Top 10 API Security Threats Every API Team Should Know

As more and more data is exposed via APIs either as API-first companies or for the explosion of single page apps/JAMStack, API security can no longer be an afterthought. The hard part about APIs is that it provides direct access to large amounts of data while bypassing browser precautions. Instead of worrying about SQL injection and XSS issues, you should be concerned about the bad actor who was able to paginate through all your customer records and their data.

Typical prevention mechanisms like Captchas and browser fingerprinting won’t work since APIs by design need to handle a very large number of API accesses even by a single customer. So where do you start? The first thing is to put yourself in the shoes of a hacker and then instrument your APIs to detect and block common attacks along with unknown unknowns for zero-day exploits. Some of these are on the OWASP Security API list, but not all.

Insecure pagination and resource limits

Most APIs provide access to resources that are lists of entities such as /users or /widgets. A client such as a browser would typically filter and paginate through this list to limit the number items returned to a client like so:

First Call: GET /items?skip=0&take=10 
Second Call: GET /items?skip=10&take=10

However, if that entity has any PII or other information, then a hacker could scrape that endpoint to get a dump of all entities in your database. This could be most dangerous if those entities accidently exposed PII or other sensitive information, but could also be dangerous in providing competitors or others with adoption and usage stats for your business or provide scammers with a way to get large email lists. See how Venmo data was scraped

A naive protection mechanism would be to check the take count and throw an error if greater than 100 or 1000. The problem with this is two-fold:

  1. For data APIs, legitimate customers may need to fetch and sync a large number of records such as via cron jobs. Artificially small pagination limits can force your API to be very chatty decreasing overall throughput. Max limits are to ensure memory and scalability requirements are met (and prevent certain DDoS attacks), not to guarantee security.
  2. This offers zero protection to a hacker that writes a simple script that sleeps a random delay between repeated accesses.
skip = 0
while True:    response = requests.post('https://api.acmeinc.com/widgets?take=10&skip=' + skip),                      headers={'Authorization': 'Bearer' + ' ' + sys.argv[1]})    print("Fetched 10 items")    sleep(randint(100,1000))    skip += 10

How to secure against pagination attacks

To secure against pagination attacks, you should track how many items of a single resource are accessed within a certain time period for each user or API key rather than just at the request level. By tracking API resource access at the user level, you can block a user or API key once they hit a threshold such as “touched 1,000,000 items in a one hour period”. This is dependent on your API use case and can even be dependent on their subscription with you. Like a Captcha, this can slow down the speed that a hacker can exploit your API, like a Captcha if they have to create a new user account manually to create a new API key.

Insecure API key generation

Most APIs are protected by some sort of API key or JWT (JSON Web Token). This provides a natural way to track and protect your API as API security tools can detect abnormal API behavior and block access to an API key automatically. However, hackers will want to outsmart these mechanisms by generating and using a large pool of API keys from a large number of users just like a web hacker would use a large pool of IP addresses to circumvent DDoS protection.

How to secure against API key pools

The easiest way to secure against these types of attacks is by requiring a human to sign up for your service and generate API keys. Bot traffic can be prevented with things like Captcha and 2-Factor Authentication. Unless there is a legitimate business case, new users who sign up for your service should not have the ability to generate API keys programmatically. Instead, only trusted customers should have the ability to generate API keys programmatically. Go one step further and ensure any anomaly detection for abnormal behavior is done at the user and account level, not just for each API key.

Accidental key exposure

APIs are used in a way that increases the probability credentials are leaked:

  1. APIs are expected to be accessed over indefinite time periods, which increases the probability that a hacker obtains a valid API key that’s not expired. You save that API key in a server environment variable and forget about it. This is a drastic contrast to a user logging into an interactive website where the session expires after a short duration.
  2. The consumer of an API has direct access to the credentials such as when debugging via Postman or CURL. It only takes a single developer to accidently copy/pastes the CURL command containing the API key into a public forum like in GitHub Issues or Stack Overflow.
  3. API keys are usually bearer tokens without requiring any other identifying information. APIs cannot leverage things like one-time use tokens or 2-factor authentication.

If a key is exposed due to user error, one may think you as the API provider has any blame. However, security is all about reducing surface area and risk. Treat your customer data as if it’s your own and help them by adding guards that prevent accidental key exposure.

How to prevent accidental key exposure

The easiest way to prevent key exposure is by leveraging two tokens rather than one. A refresh token is stored as an environment variable and can only be used to generate short lived access tokens. Unlike the refresh token, these short lived tokens can access the resources, but are time limited such as in hours or days.

The customer will store the refresh token with other API keys. Then your SDK will generate access tokens on SDK init or when the last access token expires. If a CURL command gets pasted into a GitHub issue, then a hacker would need to use it within hours reducing the attack vector (unless it was the actual refresh token which is low probability)

Exposure to DDoS attacks

APIs open up entirely new business models where customers can access your API platform programmatically. However, this can make DDoS protection tricky. Most DDoS protection is designed to absorb and reject a large number of requests from bad actors during DDoS attacks but still need to let the good ones through. This requires fingerprinting the HTTP requests to check against what looks like bot traffic. This is much harder for API products as all traffic looks like bot traffic and is not coming from a browser where things like cookies are present.

Stopping DDoS attacks

The magical part about APIs is almost every access requires an API Key. If a request doesn’t have an API key, you can automatically reject it which is lightweight on your servers (Ensure authentication is short circuited very early before later middleware like request JSON parsing). So then how do you handle authenticated requests? The easiest is to leverage rate limit counters for each API key such as to handle X requests per minute and reject those above the threshold with a 429 HTTP response. There are a variety of algorithms to do this such as leaky bucket and fixed window counters.

Incorrect server security

APIs are no different than web servers when it comes to good server hygiene. Data can be leaked due to misconfigured SSL certificate or allowing non-HTTPS traffic. For modern applications, there is very little reason to accept non-HTTPS requests, but a customer could mistakenly issue a non HTTP request from their application or CURL exposing the API key. APIs do not have the protection of a browser so things like HSTS or redirect to HTTPS offer no protection.

How to ensure proper SSL

Test your SSL implementation over at Qualys SSL Test or similar tool. You should also block all non-HTTP requests which can be done within your load balancer. You should also remove any HTTP headers scrub any error messages that leak implementation details. If your API is used only by your own apps or can only be accessed server-side, then review Authoritative guide to Cross-Origin Resource Sharing for REST APIs

Incorrect caching headers

APIs provide access to dynamic data that’s scoped to each API key. Any caching implementation should have the ability to scope to an API key to prevent cross-pollution. Even if you don’t cache anything in your infrastructure, you could expose your customers to security holes. If a customer with a proxy server was using multiple API keys such as one for development and one for production, then they could see cross-pollinated data.

#api management #api security #api best practices #api providers #security analytics #api management policies #api access tokens #api access #api security risks #api access keys

Autumn  Blick

Autumn Blick

1601381326

Public ASX100 APIs: The Essential List

We’ve conducted some initial research into the public APIs of the ASX100 because we regularly have conversations about what others are doing with their APIs and what best practices look like. Being able to point to good local examples and explain what is happening in Australia is a key part of this conversation.

Method

The method used for this initial research was to obtain a list of the ASX100 (as of 18 September 2020). Then work through each company looking at the following:

  1. Whether the company had a public API: this was found by googling “[company name] API” and “[company name] API developer” and “[company name] developer portal”. Sometimes the company’s website was navigated or searched.
  2. Some data points about the API were noted, such as the URL of the portal/documentation and the method they used to publish the API (portal, documentation, web page).
  3. Observations were recorded that piqued the interest of the researchers (you will find these below).
  4. Other notes were made to support future research.
  5. You will find a summary of the data in the infographic below.

Data

With regards to how the APIs are shared:

#api #api-development #api-analytics #apis #api-integration #api-testing #api-security #api-gateway

An API-First Approach For Designing Restful APIs | Hacker Noon

I’ve been working with Restful APIs for some time now and one thing that I love to do is to talk about APIs.

So, today I will show you how to build an API using the API-First approach and Design First with OpenAPI Specification.

First thing first, if you don’t know what’s an API-First approach means, it would be nice you stop reading this and check the blog post that I wrote to the Farfetchs blog where I explain everything that you need to know to start an API using API-First.

Preparing the ground

Before you get your hands dirty, let’s prepare the ground and understand the use case that will be developed.

Tools

If you desire to reproduce the examples that will be shown here, you will need some of those items below.

  • NodeJS
  • OpenAPI Specification
  • Text Editor (I’ll use VSCode)
  • Command Line

Use Case

To keep easy to understand, let’s use the Todo List App, it is a very common concept beyond the software development community.

#api #rest-api #openai #api-first-development #api-design #apis #restful-apis #restful-api

Chloe  Butler

Chloe Butler

1667474400

Kafka: Perl Implementation Of Kafka API (official CPAN Module)

NAME

Kafka - Apache Kafka low-level synchronous API, which does not use Zookeeper.

VERSION

This documentation refers to Kafka package version 1.08 .

SYNOPSIS

use 5.010;
use strict;
use warnings;

use Scalar::Util qw(
    blessed
);
use Try::Tiny;

use Kafka qw(
    $BITS64
);
use Kafka::Connection;
use Kafka::Producer;
use Kafka::Consumer;

# A simple example of Kafka usage

# common information
say 'This is Kafka package ', $Kafka::VERSION;
say 'You have a ', $BITS64 ? '64' : '32', ' bit system';

my ( $connection, $producer, $consumer );
try {

    #-- Connect to local cluster
    $connection = Kafka::Connection->new( host => 'localhost' );
    #-- Producer
    $producer = Kafka::Producer->new( Connection => $connection );
    #-- Consumer
    $consumer = Kafka::Consumer->new( Connection  => $connection );

} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        warn 'Error: (', $error->code, ') ',  $error->message, "\n";
        exit;
    } else {
        die $error;
    }
};

# cleaning up
undef $consumer;
undef $producer;
$connection->close;
undef $connection;

# another brief code example of the Kafka package
# is provided in the "An Example" section.

ABSTRACT

The Kafka package is a set of Perl modules which provides a simple and consistent application programming interface (API) to Apache Kafka 0.9+, a high-throughput distributed messaging system.

DESCRIPTION

The user modules in this package provide an object oriented API. The IO agents, requests sent, and responses received from the Apache Kafka or mock servers are all represented by objects. This makes a simple and powerful interface to these services.

The main features of the package are:

  • Contains various reusable components (modules) that can be used separately or together.
  • Provides an object oriented model of communication.
  • Supports parsing the Apache Kafka protocol.
  • Supports the Apache Kafka Requests and Responses. Within this package the following implements of Kafka's protocol are implemented: PRODUCE, FETCH, OFFSETS, and METADATA.
  • Simple producer and consumer clients.
  • A simple interface to control the test Kafka server cluster (in the test directory).
  • Simple mock server instance (located in the test directory) for testing without Apache Kafka server.
  • Support for working with 64 bit elements of the Kafka protocol on 32 bit systems.
  • Taint mode support. The input data is not checked for tainted. Returns untainted data.

APACHE KAFKA'S STYLE COMMUNICATION

The Kafka package is based on Kafka's 0.9+ Protocol specification document at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

The Kafka's protocol is based on a request/response paradigm. A client establishes a connection with a server and sends a request to the server in the form of a request method, followed by a messages containing request modifiers. The server responds with a success or error code, followed by a messages containing entity meta-information and content.

Messages are the fundamental unit of communication. They are published to a topic by a producer, which means they are physically sent to a server acting as a broker. Some number of consumers subscribe to a topic, and each published message is delivered to all the consumers. The messages stream is partitioned on the brokers as a set of distinct partitions. The semantic meaning of these partitions is left up to the producer and the producer specifies which partition a message belongs to. Within a partition the messages are stored in the order in which they arrive at the broker, and will be given out to consumers in that same order. In Apache Kafka, the consumers are responsible for maintaining state information (offset) on what has been consumed. A consumer can deliberately rewind back to an old offset and re-consume data. Each message is uniquely identified by a 64-bit integer offset giving the position of the start of this message in the stream of all messages ever sent to that topic on that partition. Reads are done by giving the 64-bit logical offset of a message and a max chunk size.

The request is then passed through the client to a server and we get the response in return to a consumer request that we can examine. A request is always independent of any previous requests, i.e. the service is stateless. This API is completely stateless, with the topic and partition being passed in on every request.

The Connection Object

Clients use the Connection object to communicate with the Apache Kafka cluster. The Connection object is an interface layer between your application code and the Apache Kafka cluster.

Connection object is required to create instances of classes Kafka::Producer or Kafka::Consumer.

Kafka Connection API is implemented by Kafka::Connection class.

use Kafka::Connection;

# connect to local cluster with the defaults
my $connection = Kafka::Connection->new( host => 'localhost' );

The main attributes of the Connection object are:

  • host and port are the IO object attributes denoting any server from the Kafka cluster a client wants to connect.
  • timeout specifies how much time remote servers is given to respond before disconnection occurs and internal exception is thrown.

The IO Object

The Kafka::Connection object use internal class Kafka::IO to maintain communication with the particular server of Kafka cluster The IO object is an interface layer between Kafka::Connection object and the network.

Kafka IO API is implemented by Kafka::IO class. Note that end user normally should have no need to use Kafka::IO but work with Kafka::Connection instead.

use Kafka::IO;

# connect to local server with the defaults
my $io = Kafka::IO->new( host => 'localhost' );

The main attributes of the IO object are:

  • host and port are the IO object attributes denoting the server and the port of Apache Kafka server.
  • timeout specifies how much time is given remote servers to respond before the IO object disconnects and generates an internal exception.

The Producer Object

Kafka producer API is implemented by Kafka::Producer class.

use Kafka::Producer;

#-- Producer
my $producer = Kafka::Producer->new( Connection => $connection );

# Sending a single message
$producer->send(
    'mytopic',          # topic
    0,                  # partition
    'Single message'    # message
);

# Sending a series of messages
$producer->send(
    'mytopic',          # topic
    0,                  # partition
    [                   # messages
        'The first message',
        'The second message',
        'The third message',
    ]
);

The main methods and attributes of the producer request are:

  • The request method of the producer object is send().
  • topic and partition define respective parameters of the messages we want to send.
  • messages is an arbitrary amount of data (a simple data string or reference to an array of the data strings).

The Consumer Object

Kafka consumer API is implemented by Kafka::Consumer class.

use Kafka::Consumer;

$consumer = Kafka::Consumer->new( Connection => $connection );

The request methods of the consumer object are offsets() and fetch().

offsets method returns a reference to the list of offsets of received messages.

fetch method returns a reference to the list of received Kafka::Message objects.

use Kafka qw(
    $DEFAULT_MAX_BYTES
    $DEFAULT_MAX_NUMBER_OF_OFFSETS
    $RECEIVE_EARLIEST_OFFSET
);

# Get a list of valid offsets up to max_number before the given time
my $offsets = $consumer->offsets(
    'mytopic',                      # topic
    0,                              # partition
    $RECEIVE_EARLIEST_OFFSET,      # time
    $DEFAULT_MAX_NUMBER_OF_OFFSETS  # max_number
);
say "Received offset: $_" foreach @$offsets;

# Consuming messages
my $messages = $consumer->fetch(
    'mytopic',                      # topic
    0,                              # partition
    0,                              # offset
    $DEFAULT_MAX_BYTES              # Maximum size of MESSAGE(s) to receive
);
foreach my $message ( @$messages ) {
    if ( $message->valid ) {
        say 'payload    : ', $message->payload;
        say 'key        : ', $message->key;
        say 'offset     : ', $message->offset;
        say 'next_offset: ', $message->next_offset;
    } else {
        say 'error      : ', $message->error;
    }
}

See Kafka::Consumer for additional information and documentation about class methods and arguments.

The Message Object

Kafka message API is implemented by Kafka::Message class.

if ( $message->valid ) {
    say 'payload    : ', $message->payload;
    say 'key        : ', $message->key;
    say 'offset     : ', $message->offset;
    say 'next_offset: ', $message->next_offset;
} else {
    say 'error      : ', $message->error;
}

Methods available for Kafka::Message object :

  • payload A simple message received from the Apache Kafka server.
  • key An optional message key that was used for partition assignment.
  • valid A message entry is valid.
  • error A description of the message inconsistence.
  • offset The offset beginning of the message in the Apache Kafka server.
  • next_offset The offset beginning of the next message in the Apache Kafka server.

The Exception Object

A designated class Kafka::Exception is used to provide a more detailed and structured information when error is detected.

The following attributes are declared within Kafka::Exception: code, message.

Additional subclasses of Kafka::Exception designed to report errors in respective Kafka classes: Kafka::Exception::Connection, Kafka::Exception::Consumer, Kafka::Exception::IO, Kafka::Exception::Int64, Kafka::Exception::Producer.

Authors suggest using of Try::Tiny's try and catch to handle exceptions while working with Kafka module.

EXPORT

None by default.

Additional constants

Additional constants are available for import, which can be used to define some type of parameters, and to identify various error cases.

$KAFKA_SERVER_PORT

default Apache Kafka server port - 9092.

$REQUEST_TIMEOUT

1.5 sec - timeout in secs, for gethostbyname, connect, blocking receive and send calls (could be any integer or floating-point type).

$DEFAULT_MAX_BYTES

1MB - maximum size of message(s) to receive.

$SEND_MAX_ATTEMPTS

4 - The leader may be unavailable transiently, which can fail the sending of a message. This property specifies the number of attempts to send of a message.

Do not use $Kafka::SEND_MAX_ATTEMPTS in Kafka::Producer-<gtsend> request to prevent duplicates.

$RETRY_BACKOFF

200 - (ms)

According to Apache Kafka documentation:

Producer Configs - Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.

Consumer Configs - Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.

$RECEIVE_LATEST_OFFSET

DEPRECATED: please use $RECEIVE_LATEST_OFFSETS, as when using this constant to retrieve offsets, you can get more than one. It's kept for backward compatibility.

-1 : special value that denotes latest available offset.

$RECEIVE_LATEST_OFFSETS

-1 : special value that denotes latest available offsets.

$RECEIVE_EARLIEST_OFFSET

-2 : special value that denotes earliest available offset.

$RECEIVE_EARLIEST_OFFSETS

DEPRECATED: please use $RECEIVE_EARLIEST_OFFSET, as when using this constant to retrieve offset, you can get only one. It's kept for backward compatibility.

-2 : special value that denotes earliest available offset.

$DEFAULT_MAX_NUMBER_OF_OFFSETS

100 - maximum number of offsets to retrieve.

$MIN_BYTES_RESPOND_IMMEDIATELY

The minimum number of bytes of messages that must be available to give a response.

0 - the server will always respond immediately.

$MIN_BYTES_RESPOND_HAS_DATA

The minimum number of bytes of messages that must be available to give a response.

10 - the server will respond as soon as at least one partition has at least 10 bytes of data (Offset => int64 + MessageSize => int32) or the specified timeout occurs.

$NOT_SEND_ANY_RESPONSE

Indicates how many acknowledgements the servers should receive before responding to the request.

0 - the server does not send any response.

$WAIT_WRITTEN_TO_LOCAL_LOG

Indicates how long the servers should wait for the data to be written to the local long before responding to the request.

1 - the server will wait the data is written to the local log before sending a response.

$BLOCK_UNTIL_IS_COMMITTED

Wait for message to be committed by all sync replicas.

-1 - the server will block until the message is committed by all in sync replicas before sending a response.

$DEFAULT_MAX_WAIT_TIME

The maximum amount of time (seconds, may be fractional) to wait when no sufficient amount of data is available at the time the request is dispatched.

0.1 - allow the server to wait up to 0.1s to try to accumulate data before responding.

$MESSAGE_SIZE_OVERHEAD

34 - size of protocol overhead (data added by protocol) for each message.

IP version

Specify IP protocol version for resolving of IP address and host names.

$IP_V4

Interpret address as IPv4 and force resolving of host name in IPv4.

$IP_V6

Interpret address as IPv6 and force resolving of host name in IPv6.

Compression

According to Apache Kafka documentation:

Kafka currently supports three compression codecs with the following codec numbers:

$COMPRESSION_NONE

None = 0

$COMPRESSION_GZIP

GZIP = 1

$COMPRESSION_SNAPPY

Snappy = 2

$COMPRESSION_LZ4

LZ4 = 3 (That module supports only Kafka 0.10 or higher, as initial implementation of LZ4 in Kafka did not follow the standard LZ4 framing specification).

Error codes

Possible error codes (corresponds to descriptions in %ERROR):

$ERROR_MISMATCH_ARGUMENT

-1000 - Invalid argument

$ERROR_CANNOT_SEND

-1001 - Cannot send

$ERROR_SEND_NO_ACK

-1002 - No acknowledgement for sent request

ERROR_CANNOT_RECV

-1003 - Cannot receive

ERROR_CANNOT_BIND

-1004 - Cannot connect to broker

$ERROR_METADATA_ATTRIBUTES

-1005 - Unknown metadata attributes

$ERROR_UNKNOWN_APIKEY

-1006 - Unknown ApiKey

$ERROR_CANNOT_GET_METADATA

-1007 - Cannot get Metadata

$ERROR_LEADER_NOT_FOUND

-1008 - Leader not found

$ERROR_MISMATCH_CORRELATIONID

-1009 - Mismatch CorrelationId

$ERROR_NO_KNOWN_BROKERS

-1010 - There are no known brokers

$ERROR_REQUEST_OR_RESPONSE

-1011 - Bad request or response element

$ERROR_TOPIC_DOES_NOT_MATCH

-1012 - Topic does not match the requested

$ERROR_PARTITION_DOES_NOT_MATCH

-1013 - Partition does not match the requested

$ERROR_NOT_BINARY_STRING

-1014 - Unicode data is not allowed

$ERROR_COMPRESSION

-1015 - Compression error

$ERROR_RESPONSEMESSAGE_NOT_RECEIVED

-1016 - 'ResponseMessage' not received

$ERROR_INCOMPATIBLE_HOST_IP_VERSION

-1017 - Incompatible host name and IP version

$ERROR_NO_CONNECTION

-1018 - No IO connection

$ERROR_GROUP_COORDINATOR_NOT_FOUND

-1019 - Group Coordinator not found

Contains the descriptions of possible error codes obtained via ERROR_CODE box of Apache Kafka Wire Format protocol response.

$ERROR_NO_ERROR

0 - q{}

No error - it worked!

$ERROR_UNKNOWN

-1 - An unexpected server error.

$ERROR_OFFSET_OUT_OF_RANGE

1 - The requested offset is not within the range of offsets maintained by the server.

$ERROR_INVALID_MESSAGE

2 - This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.

Synonym name $ERROR_CORRUPT_MESSAGE .

$ERROR_UNKNOWN_TOPIC_OR_PARTITION

3 - This server does not host this topic-partition.

$ERROR_INVALID_FETCH_SIZE

4 - The requested fetch size is invalid.

Synonym name $ERROR_INVALID_MESSAGE_SIZE .

$ERROR_LEADER_NOT_AVAILABLE

5 - Unable to write due to ongoing Kafka leader selection.

This error is thrown if we are in the middle of a leadership election and there is no current leader for this partition, hence it is unavailable for writes.

$ERROR_NOT_LEADER_FOR_PARTITION

6 - Server is not a leader for partition.

This error is thrown if the client attempts to send messages to a replica that is not the leader for some partition. It indicates that the clients metadata is out of date.

$ERROR_REQUEST_TIMED_OUT

7 - Request time-out.

This error is thrown if the request exceeds the user-specified time limit in the request.

$ERROR_BROKER_NOT_AVAILABLE

8 - Broker is not available.

This is not a client facing error and is used mostly by tools when a broker is not alive.

$ERROR_REPLICA_NOT_AVAILABLE

9 - The replica is not available for the requested topic-partition.

Thrown if a replica is expected on a broker but is not present (this can be safely ignored).

$ERROR_MESSAGE_TOO_LARGE

10 - The request included a message larger than the max message size the server will accept.

The server has a configurable maximum message size to avoid unbounded memory allocation. This error is thrown if the client attempts to produce a message larger than this maximum.

Synonym name $ERROR_MESSAGE_SIZE_TOO_LARGE .

$ERROR_STALE_CONTROLLER_EPOCH

11 - The controller moved to another broker.

According to Apache Kafka documentation: Internal error code for broker-to-broker communication.

Synonym name $ERROR_STALE_CONTROLLER_EPOCH_CODE .

$ERROR_OFFSET_METADATA_TOO_LARGE

12 - The specified metadata offset is too big.

Thrown if you specify a value larger than the configured maximum for offset metadata.

Synonym name $ERROR_OFFSET_METADATA_TOO_LARGE_CODE .

$ERROR_NETWORK_EXCEPTION

13 - The server disconnected before a response was received.

$ERROR_GROUP_LOAD_IN_PROGRESS

14 - The coordinator is loading and hence can't process requests for this group.

Synonym name $ERROR_GROUP_LOAD_IN_PROGRESS_CODE, $ERROR_LOAD_IN_PROGRESS_CODE .

$ERROR_GROUP_COORDINATOR_NOT_AVAILABLE

15 - The group coordinator is not available.

Synonym name $ERROR_GROUP_COORDINATOR_NOT_AVAILABLE_CODE, $ERROR_CONSUMER_COORDINATOR_NOT_AVAILABLE_CODE .

$ERROR_NOT_COORDINATOR_FOR_GROUP

16 - This is not the correct coordinator for this group.

Synonym name $ERROR_NOT_COORDINATOR_FOR_GROUP_CODE, $ERROR_NOT_COORDINATOR_FOR_CONSUMER_CODE .

$ERROR_INVALID_TOPIC_EXCEPTION

17 - The request attempted to perform an operation on an invalid topic.

Synonym name $ERROR_INVALID_TOPIC_CODE .

$ERROR_RECORD_LIST_TOO_LARGE

18 - The request included a message batch larger than the configured segment size on the server.

Synonym name $ERROR_RECORD_LIST_TOO_LARGE_CODE .

$ERROR_NOT_ENOUGH_REPLICAS

19 - Messages are rejected since there are fewer in-sync replicas than required.

Synonym name $ERROR_NOT_ENOUGH_REPLICAS_CODE .

$ERROR_NOT_ENOUGH_REPLICAS_AFTER_APPEND

20 - Messages are written to the log, but to fewer in-sync replicas than required.

Synonym name $ERROR_NOT_ENOUGH_REPLICAS_AFTER_APPEND_CODE .

$ERROR_INVALID_REQUIRED_ACKS

21 - Produce request specified an invalid value for required acks.

Synonym name $ERROR_INVALID_REQUIRED_ACKS_CODE .

$ERROR_ILLEGAL_GENERATION

22 - Specified group generation id is not valid.

Synonym name $ERROR_ILLEGAL_GENERATION_CODE .

$ERROR_INCONSISTENT_GROUP_PROTOCOL

23 - The group member's supported protocols are incompatible with those of existing members.

Synonym name $ERROR_INCONSISTENT_GROUP_PROTOCOL_CODE .

$ERROR_INVALID_GROUP_ID

24 - The configured groupId is invalid.

Synonym name $ERROR_INVALID_GROUP_ID_CODE .

$ERROR_UNKNOWN_MEMBER_ID

25 - The coordinator is not aware of this member.

Synonym name $ERROR_UNKNOWN_MEMBER_ID_CODE .

$ERROR_INVALID_SESSION_TIMEOUT

26 - The session timeout is not within the range allowed by the broker (as configured by group.min.session.timeout.ms and group.max.session.timeout.ms).

Synonym name $ERROR_INVALID_SESSION_TIMEOUT_CODE .

$ERROR_REBALANCE_IN_PROGRESS

27 - The group is rebalancing, so a rejoin is needed.

Synonym name $ERROR_REBALANCE_IN_PROGRESS_CODE .

$ERROR_INVALID_COMMIT_OFFSET_SIZE

28 - The committing offset data size is not valid.

Synonym name $ERROR_INVALID_COMMIT_OFFSET_SIZE_CODE .

$ERROR_TOPIC_AUTHORIZATION_FAILED

29 - Not authorized to access topics: [Topic authorization failed.].

Synonym name $ERROR_TOPIC_AUTHORIZATION_FAILED_CODE .

$ERROR_GROUP_AUTHORIZATION_FAILED

30 - Not authorized to access group: Group authorization failed.

Synonym name $ERROR_GROUP_AUTHORIZATION_FAILED_CODE .

$ERROR_CLUSTER_AUTHORIZATION_FAILED

31 - Cluster authorization failed.

Synonym name $ERROR_CLUSTER_AUTHORIZATION_FAILED_CODE .

$ERROR_INVALID_TIMESTAMP

32 - The timestamp of the message is out of acceptable range.

$ERROR_UNSUPPORTED_SASL_MECHANISM

33 - The broker does not support the requested SASL mechanism.

$ERROR_ILLEGAL_SASL_STATE

34 - Request is not valid given the current SASL state.

$ERROR_UNSUPPORTED_VERSION

35 - The version of the API is not supported.

%ERROR

Contains the descriptions for possible error codes.
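A short sketch of matching a caught exception against one of these constants and looking up its description in %ERROR; it reuses the try/catch pattern shown elsewhere in this document, and the retry remark is only an illustrative assumption:

use Scalar::Util qw( blessed );
use Try::Tiny;

use Kafka qw(
    %ERROR
    $ERROR_REQUEST_TIMED_OUT
);
use Kafka::Connection;
use Kafka::Producer;

my $connection = Kafka::Connection->new( host => 'localhost' );
my $producer   = Kafka::Producer->new( Connection => $connection );

try {
    $producer->send( 'mytopic', 0, 'Single message' );
} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        if ( $error->code == $ERROR_REQUEST_TIMED_OUT ) {
            # A timed-out request may be worth retrying later.
            warn 'Retryable error: ', $ERROR{ $error->code }, "\n";
        } else {
            die $error;
        }
    } else {
        die $error;
    }
};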

$BITS64

Indicates whether you are working on a 64-bit or a 32-bit system.

An Example

use 5.010;
use strict;
use warnings;

use Scalar::Util qw(
    blessed
);
use Try::Tiny;

use Kafka qw(
    $KAFKA_SERVER_PORT
    $REQUEST_TIMEOUT
    $RECEIVE_EARLIEST_OFFSET
    $DEFAULT_MAX_NUMBER_OF_OFFSETS
    $DEFAULT_MAX_BYTES
);
use Kafka::Connection;
use Kafka::Producer;
use Kafka::Consumer;

my ( $connection, $producer, $consumer );
try {

    #-- Connection
    $connection = Kafka::Connection->new( host => 'localhost' );

    #-- Producer
    $producer = Kafka::Producer->new( Connection => $connection );

    # Sending a single message
    $producer->send(
        'mytopic',                      # topic
        0,                              # partition
        'Single message'                # message
    );

    # Sending a series of messages
    $producer->send(
        'mytopic',                      # topic
        0,                              # partition
        [                               # messages
            'The first message',
            'The second message',
            'The third message',
        ]
    );

    #-- Consumer
    $consumer = Kafka::Consumer->new( Connection => $connection );

    # Get a list of up to max_number valid offsets before the given time
    my $offsets = $consumer->offsets(
        'mytopic',                      # topic
        0,                              # partition
        $RECEIVE_EARLIEST_OFFSET,       # time
        $DEFAULT_MAX_NUMBER_OF_OFFSETS  # max_number
    );

    if ( @$offsets ) {
        say "Received offset: $_" foreach @$offsets;
    } else {
        warn "Error: Offsets are not received\n";
    }

    # Consuming messages
    my $messages = $consumer->fetch(
        'mytopic',                      # topic
        0,                              # partition
        0,                              # offset
        $DEFAULT_MAX_BYTES              # Maximum size of MESSAGE(s) to receive
    );

    if ( $messages ) {
        foreach my $message ( @$messages ) {
            if ( $message->valid ) {
                say 'payload    : ', $message->payload;
                say 'key        : ', $message->key;
                say 'offset     : ', $message->offset;
                say 'next_offset: ', $message->next_offset;
            } else {
                say 'error      : ', $message->error;
            }
        }
    }

} catch {
    my $error = $_;
    if ( blessed( $error ) && $error->isa( 'Kafka::Exception' ) ) {
        warn 'Error: (', $error->code, ') ',  $error->message, "\n";
        exit;
    } else {
        die $error;
    }
};

# Closes and cleans up
undef $consumer;
undef $producer;
$connection->close;
undef $connection;

DEPENDENCIES

In order to install and use this package you will need Perl version 5.10 or later. Some modules within this package depend on other packages that are distributed separately from Perl. We recommend that you have the following packages installed before you install Kafka:

Compress::Snappy
Compress::LZ4Frame
Const::Fast
Data::Compare
Data::HexDump::Range
Data::Validate::Domain
Data::Validate::IP
Exception::Class
List::Util
Params::Util
Scalar::Util::Numeric
String::CRC32
Sys::SigAction
Try::Tiny

The Kafka package has the following optional dependencies:

Capture::Tiny
Clone
Config::IniFiles
File::HomeDir
Proc::Daemon
Proc::ProcessTable
Sub::Install
Test::Deep
Test::Exception
Test::NoWarnings
Test::TCP

If the optional modules are missing, some "prereq" tests are skipped.

DIAGNOSTICS

Debug output can be enabled by setting the debug level via the PERL_KAFKA_DEBUG environment variable:

PERL_KAFKA_DEBUG=1 - enables debug for the whole Kafka package.

PERL_KAFKA_DEBUG=IO:1 - enables debug only for Kafka::IO.

PERL_KAFKA_DEBUG=Connection:1 - enables debug only for Kafka::Connection.

Different debug levels can be set for different modules, as in the following example:

PERL_KAFKA_DEBUG=Connection:1,IO:2

See documentation for a particular module for explanation of various debug levels.
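The variable can also be set from inside a script rather than the shell. A sketch, assuming the modules consult PERL_KAFKA_DEBUG when they are loaded (exporting the variable in the shell is the documented route):

# Must run before the Kafka modules are compiled and loaded.
BEGIN { $ENV{PERL_KAFKA_DEBUG} = 'Connection:1,IO:2' }

use Kafka::Connection;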

BUGS AND LIMITATIONS

Connection constructor:

Make sure that you always connect to brokers using EXACTLY the same address or host name as specified in the broker configuration (host.name in server.properties). Avoid relying on the default value (host.name commented out) in server.properties; always set an explicit value instead.

Producer and Consumer methods only work with one topic and one partition at a time. The module also does not implement the Offset Commit/Fetch API.

String arguments to Producer, Consumer, and Connection must be binary strings. Using Unicode strings may cause an error or data corruption.

This module does not support Kafka protocol versions earlier than 0.8.

Kafka::IO->new uses Sys::SigAction and alarm() to limit some internal operations. This means that if an external alarm() was set, signal delivery may be delayed.

With a non-empty timeout, alarm() is used internally in Kafka::IO, and an existing alarm() is preserved where possible. However, if Time::HiRes::ualarm() is set before calling Kafka modules, its behaviour is unspecified (it could be reset, preserved, etc.).

For gethostbyname operations, a non-empty timeout is rounded up to the next positive integer; in particular, any timeout of less than 1 second becomes 1 second.

You can disable the use of alarm() by setting timeout => undef in the constructor.
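For example, a minimal sketch of opting out of the internal alarm()-based timeouts (useful when the application manages SIGALRM itself):

use Kafka::Connection;

# timeout => undef disables the use of alarm() inside Kafka::IO.
my $connection = Kafka::Connection->new(
    host    => 'localhost',
    timeout => undef,
);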

The Kafka package was written, tested, and found working on recent Linux distributions.

There are no known bugs in this package.

Please report problems to the "AUTHOR".

Patches are welcome.

MORE DOCUMENTATION

All modules contain detailed information on the interfaces they provide.

SEE ALSO

The basic operation of the Kafka package modules:

Kafka - constants and messages used by the Kafka package modules.

Kafka::Connection - interface to connect to a Kafka cluster.

Kafka::Producer - interface for the producer client.

Kafka::Consumer - interface for the consumer client.

Kafka::Message - interface to access Kafka message properties.

Kafka::Int64 - functions to work with 64 bit elements of the protocol on 32 bit systems.

Kafka::Protocol - functions to process messages in the Apache Kafka's Protocol.

Kafka::IO - low-level interface for communication with Kafka server.

Kafka::Exceptions - module dedicated to handling Kafka exceptions.

Kafka::Internals - internal constants and functions used by several package modules.

A wealth of detail about Apache Kafka and the Kafka Protocol:

Main page at http://kafka.apache.org/

Kafka Protocol at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol

SOURCE CODE

Kafka package is hosted on GitHub: https://github.com/TrackingSoft/Kafka

AUTHOR

Sergey Gladkov

Please use GitHub project link above to report problems or contact authors.

CONTRIBUTORS

Alexander Solovey

Jeremy Jordan

Sergiy Zuban

Nikolay Shulyakovskiy

Vlad Marchenko

Damien Krotkine

Greg Franklin

COPYRIGHT AND LICENSE

Copyright (C) 2012-2017 by TrackingSoft LLC.

This package is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic at http://dev.perl.org/licenses/artistic.html.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Download Details:

Author: TrackingSoft
Source Code: https://github.com/TrackingSoft/Kafka

License: View license

#perl #kafka 
