In this Keras article, we will learn about the top three ways to build machine learning models in Keras. If you’ve looked at Keras models on GitHub, you’ve probably noticed that there are several different ways to create models in Keras. There’s the Sequential model, which allows you to define an entire model in a single line, usually with some line breaks for readability. Then there’s the functional interface, which allows for more complicated model architectures, and there’s also the Model subclass, which helps with reusability. This article explores the different ways to create models in Keras, along with their advantages and drawbacks, equipping you with the knowledge you need to create your own machine learning models in Keras.
After you complete this tutorial, you will learn:

- The different ways to build machine learning models in Keras
- How to use the Sequential class, the functional interface, and subclassing keras.Model to build your models
- When to use each approach, along with its advantages and drawbacks

Let’s get started!

This tutorial is split into three parts, covering the different ways to build machine learning models in Keras:

- Using the Sequential class
- Using Keras’s functional interface
- Subclassing keras.Model
The Sequential model is just as the name implies. It consists of a sequence of layers, one after the other. From the Keras documentation:
“A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.”
It is a simple, easy-to-use way to start building your Keras model. To start, import Tensorflow and then the Sequential model:
import tensorflow as tf
from tensorflow.keras import Sequential
Then, you can start building your machine learning model by stacking various layers together. For this example, let’s build a LeNet5 model with the classic CIFAR-10 image dataset as the input:
from tensorflow.keras.layers import Dense, Input, Flatten, Conv2D, MaxPool2D
model = Sequential([
    Input(shape=(32,32,3,)),
    Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu"),
    MaxPool2D(pool_size=(2,2)),
    Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu"),
    MaxPool2D(pool_size=(2, 2)),
    Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu"),
    Flatten(),
    Dense(units=84, activation="relu"),
    Dense(units=10, activation="softmax"),
])
print(model.summary())
Notice that you just pass a list of the layers you want in your model to the Sequential constructor. Looking at model.summary(), you can see the model’s architecture.
_________________________________________________________________
 Layer (type)                    Output Shape             Param #
=================================================================
 conv2d_3 (Conv2D)               (None, 32, 32, 6)        456
 max_pooling2d_2 (MaxPooling2D)  (None, 16, 16, 6)        0
 conv2d_4 (Conv2D)               (None, 16, 16, 16)       2416
 max_pooling2d_3 (MaxPooling2D)  (None, 8, 8, 16)         0
 conv2d_5 (Conv2D)               (None, 8, 8, 120)        48120
 flatten_1 (Flatten)             (None, 7680)             0
 dense_2 (Dense)                 (None, 84)               645204
 dense_3 (Dense)                 (None, 10)               850
=================================================================
Total params: 697,046
Trainable params: 697,046
Non-trainable params: 0
_________________________________________________________________
And just to test out the model, let’s go ahead and load the CIFAR-10 dataset and run model.compile and model.fit:
from tensorflow import keras
(trainX, trainY), (testX, testY) = keras.datasets.cifar10.load_data()
model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")
history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))
This gives the following output:
Epoch 1/10
196/196 [==============================] - 13s 10ms/step - loss: 2.7669 - acc: 0.3648 - val_loss: 1.4869 - val_acc: 0.4713
Epoch 2/10
196/196 [==============================] - 2s 8ms/step - loss: 1.3883 - acc: 0.5097 - val_loss: 1.3654 - val_acc: 0.5205
Epoch 3/10
196/196 [==============================] - 2s 8ms/step - loss: 1.2239 - acc: 0.5694 - val_loss: 1.2908 - val_acc: 0.5472
Epoch 4/10
196/196 [==============================] - 2s 8ms/step - loss: 1.1020 - acc: 0.6120 - val_loss: 1.2640 - val_acc: 0.5622
Epoch 5/10
196/196 [==============================] - 2s 8ms/step - loss: 0.9931 - acc: 0.6498 - val_loss: 1.2850 - val_acc: 0.5555
Epoch 6/10
196/196 [==============================] - 2s 9ms/step - loss: 0.8888 - acc: 0.6903 - val_loss: 1.3150 - val_acc: 0.5646
Epoch 7/10
196/196 [==============================] - 2s 8ms/step - loss: 0.7882 - acc: 0.7229 - val_loss: 1.4273 - val_acc: 0.5426
Epoch 8/10
196/196 [==============================] - 2s 8ms/step - loss: 0.6915 - acc: 0.7582 - val_loss: 1.4574 - val_acc: 0.5604
Epoch 9/10
196/196 [==============================] - 2s 8ms/step - loss: 0.5934 - acc: 0.7931 - val_loss: 1.5304 - val_acc: 0.5631
Epoch 10/10
196/196 [==============================] - 2s 8ms/step - loss: 0.5113 - acc: 0.8214 - val_loss: 1.6355 - val_acc: 0.5512
That’s pretty good for a first pass at a model. Putting together the code for LeNet5 using a Sequential model, you have:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input, Flatten, Conv2D, MaxPool2D

(trainX, trainY), (testX, testY) = keras.datasets.cifar10.load_data()
model = Sequential([
    Input(shape=(32,32,3,)),
    Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu"),
    MaxPool2D(pool_size=(2,2)),
    Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu"),
    MaxPool2D(pool_size=(2, 2)),
    Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu"),
    Flatten(),
    Dense(units=84, activation="relu"),
    Dense(units=10, activation="softmax"),
])
print(model.summary())
model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")
history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))
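If you want a single score for the trained network, you can also evaluate it on the held-out test set. This step is not in the original listing, but it uses only the standard model.evaluate() call:

# evaluate the trained model on the test set (assumes the code above has run)
loss, acc = model.evaluate(testX, testY, verbose=0)
print("test loss: %.4f, test accuracy: %.4f" % (loss, acc))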
Now, let’s explore what the other ways of constructing Keras models can do, starting with the functional interface!
The next method of constructing Keras models you will explore uses Keras’s functional interface. The functional interface treats layers as functions, each taking a tensor as input and returning a tensor as output. The functional interface is a more flexible way of representing a Keras model, as you are not restricted to sequential models with layers stacked on top of one another. Instead, you can build models that branch into multiple paths, have multiple inputs, etc.
Consider an Add layer that takes inputs from two or more paths and adds the tensors together.
Add layer with two inputs
Since this cannot be represented as a linear stack of layers due to the multiple inputs, you are unable to define it using a Sequential object. Here’s where Keras’s functional interface comes in. You can define an Add layer with two input tensors as follows:
from tensorflow.keras.layers import Add
add_layer = Add()([layer1, layer2])
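Since layer1 and layer2 above are just placeholders, here is a minimal self-contained sketch of a two-input model merged with Add (the layer sizes are arbitrary, chosen only for illustration):

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Add
from tensorflow.keras.models import Model

# two separate input branches
input_a = Input(shape=(16,))
input_b = Input(shape=(16,))
layer1 = Dense(8, activation="relu")(input_a)
layer2 = Dense(8, activation="relu")(input_b)

# merge the two branches by element-wise addition
add_layer = Add()([layer1, layer2])
output = Dense(1)(add_layer)

model = Model(inputs=[input_a, input_b], outputs=output)
model.summary()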
Now that you’ve seen a quick example of the functional interface, let’s take a look at what the LeNet5 model that you defined by instantiating a Sequential class would look like using a functional interface.
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Flatten, Conv2D, MaxPool2D
from tensorflow.keras.models import Model
input_layer = Input(shape=(32,32,3,))
x = Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu")(input_layer)
x = MaxPool2D(pool_size=(2,2))(x)
x = Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2, 2))(x)
x = Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu")(x)
x = Flatten()(x)
x = Dense(units=84, activation="relu")(x)
x = Dense(units=10, activation="softmax")(x)
model = Model(inputs=input_layer, outputs=x)
print(model.summary())
And looking at the model summary:
_________________________________________________________________
 Layer (type)                    Output Shape             Param #
=================================================================
 input_2 (InputLayer)            [(None, 32, 32, 3)]      0
 conv2d_6 (Conv2D)               (None, 32, 32, 6)        456
 max_pooling2d_2 (MaxPooling2D)  (None, 16, 16, 6)        0
 conv2d_7 (Conv2D)               (None, 16, 16, 16)       2416
 max_pooling2d_3 (MaxPooling2D)  (None, 8, 8, 16)         0
 conv2d_8 (Conv2D)               (None, 8, 8, 120)        48120
 flatten_2 (Flatten)             (None, 7680)             0
 dense_4 (Dense)                 (None, 84)               645204
 dense_5 (Dense)                 (None, 10)               850
=================================================================
Total params: 697,046
Trainable params: 697,046
Non-trainable params: 0
_________________________________________________________________
As you can see, the model architecture is the same for both LeNet5 models, whether you implement them with the functional interface or the Sequential class.
Now that you’ve seen how to use Keras’s functional interface, let’s look at a model architecture that you can implement using the functional interface but not with the Sequential class. For this example, look at the residual block introduced in ResNet. Visually, the residual block looks like this:
Residual block, source: https://arxiv.org/pdf/1512.03385.pdf
You can see that a model defined using the Sequential class would be unable to construct such a block due to the skip connection, which prevents this block from being represented as a simple stack of layers. Using the functional interface is one way you can define a ResNet block:
def residual_block(x, filters):
    # store the input tensor to be added back later as the identity
    identity = x
    x = Conv2D(filters=filters, kernel_size=(3, 3), strides=(1, 1), padding="same")(x)
    x = BatchNormalization()(x)
    x = relu(x)
    x = Conv2D(filters=filters, kernel_size=(3, 3), padding="same")(x)
    x = BatchNormalization()(x)
    # skip connection: add the identity back to the transformed tensor
    x = Add()([identity, x])
    x = relu(x)
    return x
Then, you can build a simple network from these residual blocks using the functional interface:
input_layer = Input(shape=(32,32,3,))
x = Conv2D(filters=32, kernel_size=(3, 3), padding="same", activation="relu")(input_layer)
x = residual_block(x, 32)
x = Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), padding="same", activation="relu")(x)
x = residual_block(x, 64)
x = Conv2D(filters=128, kernel_size=(3, 3), strides=(2, 2), padding="same", activation="relu")(x)
x = residual_block(x, 128)
x = Flatten()(x)
x = Dense(units=84, activation="relu")(x)
x = Dense(units=10, activation="softmax")(x)
model = Model(inputs=input_layer, outputs=x)
print(model.summary())
model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")
history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))
Running this code and looking at the model summary and training results:
__________________________________________________________________________________________________
 Layer (type)                                Output Shape         Param #   Connected to
==================================================================================================
 input_1 (InputLayer)                        [(None, 32, 32, 3)]  0         []
 conv2d (Conv2D)                             (None, 32, 32, 32)   896       ['input_1[0][0]']
 conv2d_1 (Conv2D)                           (None, 32, 32, 32)   9248      ['conv2d[0][0]']
 batch_normalization (BatchNormalization)    (None, 32, 32, 32)   128       ['conv2d_1[0][0]']
 tf.nn.relu (TFOpLambda)                     (None, 32, 32, 32)   0         ['batch_normalization[0][0]']
 conv2d_2 (Conv2D)                           (None, 32, 32, 32)   9248      ['tf.nn.relu[0][0]']
 batch_normalization_1 (BatchNormalization)  (None, 32, 32, 32)   128       ['conv2d_2[0][0]']
 add (Add)                                   (None, 32, 32, 32)   0         ['conv2d[0][0]', 'batch_normalization_1[0][0]']
 tf.nn.relu_1 (TFOpLambda)                   (None, 32, 32, 32)   0         ['add[0][0]']
 conv2d_3 (Conv2D)                           (None, 16, 16, 64)   18496     ['tf.nn.relu_1[0][0]']
 conv2d_4 (Conv2D)                           (None, 16, 16, 64)   36928     ['conv2d_3[0][0]']
 batch_normalization_2 (BatchNormalization)  (None, 16, 16, 64)   256       ['conv2d_4[0][0]']
 tf.nn.relu_2 (TFOpLambda)                   (None, 16, 16, 64)   0         ['batch_normalization_2[0][0]']
 conv2d_5 (Conv2D)                           (None, 16, 16, 64)   36928     ['tf.nn.relu_2[0][0]']
 batch_normalization_3 (BatchNormalization)  (None, 16, 16, 64)   256       ['conv2d_5[0][0]']
 add_1 (Add)                                 (None, 16, 16, 64)   0         ['conv2d_3[0][0]', 'batch_normalization_3[0][0]']
 tf.nn.relu_3 (TFOpLambda)                   (None, 16, 16, 64)   0         ['add_1[0][0]']
 conv2d_6 (Conv2D)                           (None, 8, 8, 128)    73856     ['tf.nn.relu_3[0][0]']
 conv2d_7 (Conv2D)                           (None, 8, 8, 128)    147584    ['conv2d_6[0][0]']
 batch_normalization_4 (BatchNormalization)  (None, 8, 8, 128)    512       ['conv2d_7[0][0]']
 tf.nn.relu_4 (TFOpLambda)                   (None, 8, 8, 128)    0         ['batch_normalization_4[0][0]']
 conv2d_8 (Conv2D)                           (None, 8, 8, 128)    147584    ['tf.nn.relu_4[0][0]']
 batch_normalization_5 (BatchNormalization)  (None, 8, 8, 128)    512       ['conv2d_8[0][0]']
 add_2 (Add)                                 (None, 8, 8, 128)    0         ['conv2d_6[0][0]', 'batch_normalization_5[0][0]']
 tf.nn.relu_5 (TFOpLambda)                   (None, 8, 8, 128)    0         ['add_2[0][0]']
 flatten (Flatten)                           (None, 8192)         0         ['tf.nn.relu_5[0][0]']
 dense (Dense)                               (None, 84)           688212    ['flatten[0][0]']
 dense_1 (Dense)                             (None, 10)           850       ['dense[0][0]']
==================================================================================================
Total params: 1,171,622
Trainable params: 1,170,726
Non-trainable params: 896
__________________________________________________________________________________________________
None
Epoch 1/10
196/196 [==============================] - 21s 46ms/step - loss: 3.4463 - acc: 0.3635 - val_loss: 1.8015 - val_acc: 0.3459
Epoch 2/10
196/196 [==============================] - 8s 43ms/step - loss: 1.3267 - acc: 0.5200 - val_loss: 1.3895 - val_acc: 0.5069
Epoch 3/10
196/196 [==============================] - 8s 43ms/step - loss: 1.1095 - acc: 0.6062 - val_loss: 1.2008 - val_acc: 0.5651
Epoch 4/10
196/196 [==============================] - 9s 44ms/step - loss: 0.9618 - acc: 0.6585 - val_loss: 1.5411 - val_acc: 0.5226
Epoch 5/10
196/196 [==============================] - 9s 44ms/step - loss: 0.8656 - acc: 0.6968 - val_loss: 1.1012 - val_acc: 0.6234
Epoch 6/10
196/196 [==============================] - 8s 43ms/step - loss: 0.7622 - acc: 0.7361 - val_loss: 1.1355 - val_acc: 0.6168
Epoch 7/10
196/196 [==============================] - 9s 44ms/step - loss: 0.6801 - acc: 0.7602 - val_loss: 1.1561 - val_acc: 0.6187
Epoch 8/10
196/196 [==============================] - 8s 43ms/step - loss: 0.6106 - acc: 0.7905 - val_loss: 1.1100 - val_acc: 0.6401
Epoch 9/10
196/196 [==============================] - 9s 43ms/step - loss: 0.5367 - acc: 0.8146 - val_loss: 1.2989 - val_acc: 0.6058
Epoch 10/10
196/196 [==============================] - 9s 47ms/step - loss: 0.4776 - acc: 0.8348 - val_loss: 1.0098 - val_acc: 0.6757
And combining the code for our simple network using residual blocks:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Add, MaxPool2D, Flatten, Dense
from tensorflow.keras.activations import relu
from tensorflow.keras.models import Model

def residual_block(x, filters):
    # store the input tensor to be added back later as the identity
    identity = x
    x = Conv2D(filters=filters, kernel_size=(3, 3), strides=(1, 1), padding="same")(x)
    x = BatchNormalization()(x)
    x = relu(x)
    x = Conv2D(filters=filters, kernel_size=(3, 3), padding="same")(x)
    x = BatchNormalization()(x)
    # skip connection: add the identity back to the transformed tensor
    x = Add()([identity, x])
    x = relu(x)
    return x
(trainX, trainY), (testX, testY) = keras.datasets.cifar10.load_data()
input_layer = Input(shape=(32,32,3,))
x = Conv2D(filters=32, kernel_size=(3, 3), padding="same", activation="relu")(input_layer)
x = residual_block(x, 32)
x = Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), padding="same", activation="relu")(x)
x = residual_block(x, 64)
x = Conv2D(filters=128, kernel_size=(3, 3), strides=(2, 2), padding="same", activation="relu")(x)
x = residual_block(x, 128)
x = Flatten()(x)
x = Dense(units=84, activation="relu")(x)
x = Dense(units=10, activation="softmax")(x)
model = Model(inputs=input_layer, outputs=x)
print(model.summary())
model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")
history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))
Keras also provides an object-oriented approach to creating models, which helps with reusability and allows you to represent the models you want to create as classes. This representation might be more intuitive since you can think about models as a set of layers strung together to form your network.
To begin subclassing keras.Model, you first need to import it:
from tensorflow.keras.models import Model
Then, you can start subclassing keras.Model. First, you need to build the layers that you want to use in your method calls since you only want to instantiate these layers once instead of each time you call your model. To keep in line with previous examples, let’s build a LeNet5 model here as well.
class LeNet5(tf.keras.Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        # create the layers in the initializer
        self.conv1 = Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu")
        self.max_pool2x2 = MaxPool2D(pool_size=(2,2))
        self.conv2 = Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu")
        self.conv3 = Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu")
        self.flatten = Flatten()
        self.fc2 = Dense(units=84, activation="relu")
        self.fc3 = Dense(units=10, activation="softmax")
Then, override the call method to define what happens when the model is called. You override it with your model’s forward pass, which uses the layers you built in the initializer.
    def call(self, input_tensor):
        # don't create layers here; layers must be created in the initializer,
        # otherwise you will get a "tf.Variable can only be created once" error
        conv1 = self.conv1(input_tensor)
        maxpool1 = self.max_pool2x2(conv1)
        conv2 = self.conv2(maxpool1)
        maxpool2 = self.max_pool2x2(conv2)
        conv3 = self.conv3(maxpool2)
        flatten = self.flatten(conv3)
        fc2 = self.fc2(flatten)
        fc3 = self.fc3(fc2)
        return fc3
It is important to create all the layers in the class constructor, not inside the call() method, because call() will be invoked multiple times with different input tensors, and each call must reuse the same layer objects so that their weights can be trained consistently.
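As a quick illustration of the pitfall, consider this hypothetical counter-example (not part of the original tutorial): a layer created inside call() is re-instantiated with fresh weights on every invocation, so nothing is learned across calls, and inside a tf.function it triggers the "created once" error mentioned in the comment above.

class BadLeNet5(tf.keras.Model):
    def call(self, input_tensor):
        # WRONG: a brand-new Dense layer (with fresh weights) is created
        # on every call, so no consistent set of weights is ever trained
        return Dense(units=10, activation="softmax")(input_tensor)

You can then instantiate your new LeNet5 class and use it as part of a model: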
input_layer = Input(shape=(32,32,3,))
x = LeNet5()(input_layer)
model = Model(inputs=input_layer, outputs=x)
print(model.summary(expand_nested=True))
And you can see that the model has the same number of parameters and the same internal structure as the previous two versions of LeNet5.
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 32, 32, 3)]       0
 le_net5 (LeNet5)            (None, 10)                697046
|¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯|
| conv2d (Conv2D)               multiple               456      |
| max_pooling2d (MaxPooling2D)  multiple               0        |
| conv2d_1 (Conv2D)             multiple               2416     |
| conv2d_2 (Conv2D)             multiple               48120    |
| flatten (Flatten)             multiple               0        |
| dense (Dense)                 multiple               645204   |
| dense_1 (Dense)               multiple               850      |
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
=================================================================
Total params: 697,046
Trainable params: 697,046
Non-trainable params: 0
_________________________________________________________________
Combining all the code to create your LeNet5 subclass of keras.Model:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Flatten, Conv2D, MaxPool2D
from tensorflow.keras.models import Model

class LeNet5(tf.keras.Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        # create the layers in the initializer
        self.conv1 = Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu")
        self.max_pool2x2 = MaxPool2D(pool_size=(2,2))
        self.conv2 = Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu")
        self.conv3 = Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu")
        self.flatten = Flatten()
        self.fc2 = Dense(units=84, activation="relu")
        self.fc3 = Dense(units=10, activation="softmax")

    def call(self, input_tensor):
        # don't create layers here; layers must be created in the initializer,
        # otherwise you will get a "tf.Variable can only be created once" error
        x = self.conv1(input_tensor)
        x = self.max_pool2x2(x)
        x = self.conv2(x)
        x = self.max_pool2x2(x)
        x = self.conv3(x)
        x = self.flatten(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x

input_layer = Input(shape=(32,32,3,))
x = LeNet5()(input_layer)
model = Model(inputs=input_layer, outputs=x)
print(model.summary(expand_nested=True))
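The article stops at the summary, but the subclassed model can be compiled and trained exactly like the earlier versions. For instance, loading CIFAR-10 as before:

# load the data and train, same as for the Sequential and functional versions
(trainX, trainY), (testX, testY) = tf.keras.datasets.cifar10.load_data()
model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")
history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))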
In this post, you have seen three different ways to create models in Keras: using the Sequential class, the functional interface, and subclassing keras.Model. You have also seen examples of the same LeNet5 model built with each method, as well as a use case that is possible with the functional interface but not with the Sequential class.
Original article sourced at: https://machinelearningmastery.com
msgpack.php
A pure PHP implementation of the MessagePack serialization format.
The recommended way to install the library is through Composer:
composer require rybakit/msgpack
To pack values you can either use an instance of a Packer:
$packer = new Packer();
$packed = $packer->pack($value);
or call a static method on the MessagePack class:
$packed = MessagePack::pack($value);
In the examples above, the method pack automatically packs a value depending on its type. However, not all PHP types can be uniquely translated to MessagePack types. For example, the MessagePack format defines map and array types, which are represented by a single array type in PHP. By default, the packer will pack a PHP array as a MessagePack array if it has sequential numeric keys starting from 0, and as a MessagePack map otherwise:
$mpArr1 = $packer->pack([1, 2]); // MP array [1, 2]
$mpArr2 = $packer->pack([0 => 1, 1 => 2]); // MP array [1, 2]
$mpMap1 = $packer->pack([0 => 1, 2 => 3]); // MP map {0: 1, 2: 3}
$mpMap2 = $packer->pack([1 => 2, 2 => 3]); // MP map {1: 2, 2: 3}
$mpMap3 = $packer->pack(['a' => 1, 'b' => 2]); // MP map {a: 1, b: 2}
However, sometimes you need to pack a sequential array as a MessagePack map. To do this, use the packMap method:
$mpMap = $packer->packMap([1, 2]); // {0: 1, 1: 2}
Here is a list of type-specific packing methods:
$packer->packNil(); // MP nil
$packer->packBool(true); // MP bool
$packer->packInt(42); // MP int
$packer->packFloat(M_PI); // MP float (32 or 64)
$packer->packFloat32(M_PI); // MP float 32
$packer->packFloat64(M_PI); // MP float 64
$packer->packStr('foo'); // MP str
$packer->packBin("\x80"); // MP bin
$packer->packArray([1, 2]); // MP array
$packer->packMap(['a' => 1]); // MP map
$packer->packExt(1, "\xaa"); // MP ext
Check the "Custom types" section below on how to pack custom types.
The Packer object supports a number of bitmask-based options for fine-tuning the packing process (defaults are in bold):
Name | Description |
---|---|
FORCE_STR | Forces PHP strings to be packed as MessagePack UTF-8 strings |
FORCE_BIN | Forces PHP strings to be packed as MessagePack binary data |
**DETECT_STR_BIN** | Detects MessagePack str/bin type automatically |
FORCE_ARR | Forces PHP arrays to be packed as MessagePack arrays |
FORCE_MAP | Forces PHP arrays to be packed as MessagePack maps |
**DETECT_ARR_MAP** | Detects MessagePack array/map type automatically |
FORCE_FLOAT32 | Forces PHP floats to be packed as 32-bit MessagePack floats |
**FORCE_FLOAT64** | Forces PHP floats to be packed as 64-bit MessagePack floats |
The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead which can be noticed when you pack large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only work with UTF-8 strings or/and associative arrays), you can eliminate this overhead by forcing the packer to use the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly specify the value type. The library provides 2 auxiliary classes for this, Map and Bin. Check the "Custom types" section below for details.
Examples:
// detect str/bin type and pack PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::DETECT_STR_BIN | PackOptions::FORCE_FLOAT32);
// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);
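Similarly, the auxiliary Map and Bin wrappers mentioned above let you pin the type per value rather than per packer. A small sketch (class names as in the library's MessagePack\Type namespace; the byte values are arbitrary examples):

use MessagePack\Packer;
use MessagePack\Type\Bin;
use MessagePack\Type\Map;

$packer = new Packer();

// force the MessagePack bin type for a PHP string
$packedBin = $packer->pack(new Bin("\x80\x81"));

// force the MessagePack map type for a sequential PHP array
$packedMap = $packer->pack(new Map([1, 2, 3]));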
To unpack data you can either use an instance of a BufferUnpacker:
$unpacker = new BufferUnpacker();
$unpacker->reset($packed);
$value = $unpacker->unpack();
or call a static method on the MessagePack class:
$value = MessagePack::unpack($packed);
If the packed data is received in chunks (e.g. when reading from a stream), use the tryUnpack method, which attempts to unpack data and returns an array of unpacked messages (if any) instead of throwing an InsufficientDataException:
while ($chunk = ...) {
    $unpacker->append($chunk);
    if ($messages = $unpacker->tryUnpack()) {
        return $messages;
    }
}
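To make the chunked loop above concrete, here is a minimal sketch assuming $stream is a readable stream resource (the fread() call, the 8192-byte chunk size, and the handleMessage() callback are illustrative, not part of the library):

use MessagePack\BufferUnpacker;

$unpacker = new BufferUnpacker();

while (!feof($stream)) {
    $chunk = fread($stream, 8192);
    if ($chunk === false || $chunk === '') {
        break;
    }
    // buffer the raw bytes and unpack any complete messages
    $unpacker->append($chunk);
    foreach ($unpacker->tryUnpack() as $message) {
        handleMessage($message); // hypothetical application callback
    }
}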
If you want to unpack from a specific position in a buffer, use seek:
$unpacker->seek(42); // set position equal to 42 bytes
$unpacker->seek(-8); // set position to 8 bytes before the end of the buffer
To skip bytes from the current position, use skip:
$unpacker->skip(10); // set position to 10 bytes ahead of the current position
To get the number of remaining (unread) bytes in the buffer:
$unreadBytesCount = $unpacker->getRemainingCount();
To check whether the buffer has unread data:
$hasUnreadBytes = $unpacker->hasRemaining();
If needed, you can remove already read data from the buffer by calling:
$releasedBytesCount = $unpacker->release();
With the read method you can read raw (packed) data:
$packedData = $unpacker->read(2); // read 2 bytes
Besides the above methods, BufferUnpacker provides type-specific unpacking methods, namely:
$unpacker->unpackNil(); // PHP null
$unpacker->unpackBool(); // PHP bool
$unpacker->unpackInt(); // PHP int
$unpacker->unpackFloat(); // PHP float
$unpacker->unpackStr(); // PHP UTF-8 string
$unpacker->unpackBin(); // PHP binary string
$unpacker->unpackArray(); // PHP sequential array
$unpacker->unpackMap(); // PHP associative array
$unpacker->unpackExt(); // PHP MessagePack\Type\Ext object
The BufferUnpacker object supports a number of bitmask-based options for fine-tuning the unpacking process (defaults are in bold):
Name | Description |
---|---|
**BIGINT_AS_STR** | Converts overflowed integers to strings [1] |
BIGINT_AS_GMP | Converts overflowed integers to GMP objects [2] |
BIGINT_AS_DEC | Converts overflowed integers to Decimal\Decimal objects [3] |
1. The binary MessagePack format has unsigned 64-bit as its largest integer data type, but PHP does not support such integers, which means that an overflow can occur during unpacking.
2. Make sure the GMP extension is enabled.
3. Make sure the Decimal extension is enabled.
Examples:
$packedUint64 = "\xcf"."\xff\xff\xff\xff"."\xff\xff\xff\xff";
$unpacker = new BufferUnpacker($packedUint64);
var_dump($unpacker->unpack()); // string(20) "18446744073709551615"
$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_GMP);
var_dump($unpacker->unpack()); // object(GMP) {...}
$unpacker = new BufferUnpacker($packedUint64, UnpackOptions::BIGINT_AS_DEC);
var_dump($unpacker->unpack()); // object(Decimal\Decimal) {...}
In addition to the basic types, the library provides functionality to serialize and deserialize arbitrary types. This can be done in several ways, depending on your use case. Let's take a look at them.
If you need to serialize an instance of one of your classes into one of the basic MessagePack types, the best way to do this is to implement the CanBePacked interface in the class. A good example of such a class is the Map type class that comes with the library. This type is useful when you want to explicitly specify that a given PHP array should be packed as a MessagePack map without triggering an automatic type detection routine:
$packer = new Packer();
$packedMap = $packer->pack(new Map([1, 2, 3]));
$packedArray = $packer->pack([1, 2, 3]);
More type examples can be found in the src/Type directory.
As with type objects, type transformers are only responsible for serializing values. They should be used when you need to serialize a value that does not implement the CanBePacked interface. Examples of such values could be instances of built-in or third-party classes that you don't own, or non-objects such as resources.
A transformer class must implement the CanPack interface. To use a transformer, it must first be registered in the packer. Here is an example of how to serialize PHP streams into the MessagePack bin format type using one of the supplied transformers, StreamTransformer:
$packer = new Packer(null, [new StreamTransformer()]);
$packedBin = $packer->pack(fopen('/path/to/file', 'r+'));
More type transformer examples can be found in the src/TypeTransformer directory.
In contrast to the cases described above, extensions are intended to handle extension types and are responsible for both serialization and deserialization of values (types).
An extension class must implement the Extension interface. To use an extension, it must first be registered in the packer and the unpacker.
The MessagePack specification divides extension types into two groups: predefined and application-specific. Currently, there is only one predefined type in the specification, Timestamp.
Timestamp
The Timestamp extension type is a predefined type. Support for this type in the library is done through the TimestampExtension class. This class is responsible for handling Timestamp objects, which represent the number of seconds and optional adjustment in nanoseconds:
$timestampExtension = new TimestampExtension();
$packer = new Packer();
$packer = $packer->extendWith($timestampExtension);
$unpacker = new BufferUnpacker();
$unpacker = $unpacker->extendWith($timestampExtension);
$packedTimestamp = $packer->pack(Timestamp::now());
$timestamp = $unpacker->reset($packedTimestamp)->unpack();
$seconds = $timestamp->getSeconds();
$nanoseconds = $timestamp->getNanoseconds();
When using the MessagePack class, the Timestamp extension is already registered:
$packedTimestamp = MessagePack::pack(Timestamp::now());
$timestamp = MessagePack::unpack($packedTimestamp);
Application-specific extensions
In addition, the format can be extended with your own types. For example, to make the built-in PHP DateTime objects first-class citizens in your code, you can create a corresponding extension, as shown in the example. Please note that custom extensions have to be registered with a unique extension ID (an integer from 0 to 127).
More extension examples can be found in the examples/MessagePack directory.
To learn more about how extension types can be useful, check out this article.
If an error occurs during packing/unpacking, a PackingFailedException or an UnpackingFailedException will be thrown, respectively. In addition, an InsufficientDataException can be thrown during unpacking. An InvalidOptionException will be thrown in case an invalid option (or a combination of mutually exclusive options) is used.
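A typical guard around unpacking might look like this (a sketch; the exception classes live in the MessagePack\Exception namespace, as seen in the InvalidOptionException example above):

use MessagePack\MessagePack;
use MessagePack\Exception\UnpackingFailedException;

try {
    $value = MessagePack::unpack($packed);
} catch (UnpackingFailedException $e) {
    // the payload was malformed or could not be fully decoded
    error_log($e->getMessage());
}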
Run tests as follows:
vendor/bin/phpunit
Also, if you already have Docker installed, you can run the tests in a docker container. First, create a container:
./dockerfile.sh | docker build -t msgpack -
The command above will create a container named msgpack with the PHP 8.1 runtime. You may change the default runtime by defining the PHP_IMAGE environment variable:
PHP_IMAGE='php:8.0-cli' ./dockerfile.sh | docker build -t msgpack -
See a list of various images here.
Then run the unit tests:
docker run --rm -v $PWD:/msgpack -w /msgpack msgpack
To ensure that the unpacking works correctly with malformed/semi-malformed data, you can use a testing technique called Fuzzing. The library ships with a help file (target) for PHP-Fuzzer and can be used as follows:
php-fuzzer fuzz tests/fuzz_buffer_unpacker.php
To check performance, run:
php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
=============================================
Test/Target Packer BufferUnpacker
---------------------------------------------
nil .................. 0.0030 ........ 0.0139
false ................ 0.0037 ........ 0.0144
true ................. 0.0040 ........ 0.0137
7-bit uint #1 ........ 0.0052 ........ 0.0120
7-bit uint #2 ........ 0.0059 ........ 0.0114
7-bit uint #3 ........ 0.0061 ........ 0.0119
5-bit sint #1 ........ 0.0067 ........ 0.0126
5-bit sint #2 ........ 0.0064 ........ 0.0132
5-bit sint #3 ........ 0.0066 ........ 0.0135
8-bit uint #1 ........ 0.0078 ........ 0.0200
8-bit uint #2 ........ 0.0077 ........ 0.0212
8-bit uint #3 ........ 0.0086 ........ 0.0203
16-bit uint #1 ....... 0.0111 ........ 0.0271
16-bit uint #2 ....... 0.0115 ........ 0.0260
16-bit uint #3 ....... 0.0103 ........ 0.0273
32-bit uint #1 ....... 0.0116 ........ 0.0326
32-bit uint #2 ....... 0.0118 ........ 0.0332
32-bit uint #3 ....... 0.0127 ........ 0.0325
64-bit uint #1 ....... 0.0140 ........ 0.0277
64-bit uint #2 ....... 0.0134 ........ 0.0294
64-bit uint #3 ....... 0.0134 ........ 0.0281
8-bit int #1 ......... 0.0086 ........ 0.0241
8-bit int #2 ......... 0.0089 ........ 0.0225
8-bit int #3 ......... 0.0085 ........ 0.0229
16-bit int #1 ........ 0.0118 ........ 0.0280
16-bit int #2 ........ 0.0121 ........ 0.0270
16-bit int #3 ........ 0.0109 ........ 0.0274
32-bit int #1 ........ 0.0128 ........ 0.0346
32-bit int #2 ........ 0.0118 ........ 0.0339
32-bit int #3 ........ 0.0135 ........ 0.0368
64-bit int #1 ........ 0.0138 ........ 0.0276
64-bit int #2 ........ 0.0132 ........ 0.0286
64-bit int #3 ........ 0.0137 ........ 0.0274
64-bit int #4 ........ 0.0180 ........ 0.0285
64-bit float #1 ...... 0.0134 ........ 0.0284
64-bit float #2 ...... 0.0125 ........ 0.0275
64-bit float #3 ...... 0.0126 ........ 0.0283
fix string #1 ........ 0.0035 ........ 0.0133
fix string #2 ........ 0.0094 ........ 0.0216
fix string #3 ........ 0.0094 ........ 0.0222
fix string #4 ........ 0.0091 ........ 0.0241
8-bit string #1 ...... 0.0122 ........ 0.0301
8-bit string #2 ...... 0.0118 ........ 0.0304
8-bit string #3 ...... 0.0119 ........ 0.0315
16-bit string #1 ..... 0.0150 ........ 0.0388
16-bit string #2 ..... 0.1545 ........ 0.1665
32-bit string ........ 0.1570 ........ 0.1756
wide char string #1 .. 0.0091 ........ 0.0236
wide char string #2 .. 0.0122 ........ 0.0313
8-bit binary #1 ...... 0.0100 ........ 0.0302
8-bit binary #2 ...... 0.0123 ........ 0.0324
8-bit binary #3 ...... 0.0126 ........ 0.0327
16-bit binary ........ 0.0168 ........ 0.0372
32-bit binary ........ 0.1588 ........ 0.1754
fix array #1 ......... 0.0042 ........ 0.0131
fix array #2 ......... 0.0294 ........ 0.0367
fix array #3 ......... 0.0412 ........ 0.0472
16-bit array #1 ...... 0.1378 ........ 0.1596
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.1865 ........ 0.2283
fix map #1 ........... 0.0725 ........ 0.1048
fix map #2 ........... 0.0319 ........ 0.0405
fix map #3 ........... 0.0356 ........ 0.0665
fix map #4 ........... 0.0465 ........ 0.0497
16-bit map #1 ........ 0.2540 ........ 0.3028
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.2372 ........ 0.2710
fixext 1 ............. 0.0283 ........ 0.0358
fixext 2 ............. 0.0291 ........ 0.0371
fixext 4 ............. 0.0302 ........ 0.0355
fixext 8 ............. 0.0288 ........ 0.0384
fixext 16 ............ 0.0293 ........ 0.0359
8-bit ext ............ 0.0302 ........ 0.0439
16-bit ext ........... 0.0334 ........ 0.0499
32-bit ext ........... 0.1845 ........ 0.1888
32-bit timestamp #1 .. 0.0337 ........ 0.0547
32-bit timestamp #2 .. 0.0335 ........ 0.0560
64-bit timestamp #1 .. 0.0371 ........ 0.0575
64-bit timestamp #2 .. 0.0374 ........ 0.0542
64-bit timestamp #3 .. 0.0356 ........ 0.0533
96-bit timestamp #1 .. 0.0362 ........ 0.0699
96-bit timestamp #2 .. 0.0381 ........ 0.0701
96-bit timestamp #3 .. 0.0367 ........ 0.0687
=============================================
Total 2.7618 4.0820
Skipped 4 4
Failed 0 0
Ignored 0 0
With JIT:
php -n -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
=============================================
Test/Target Packer BufferUnpacker
---------------------------------------------
nil .................. 0.0005 ........ 0.0054
false ................ 0.0004 ........ 0.0059
true ................. 0.0004 ........ 0.0059
7-bit uint #1 ........ 0.0010 ........ 0.0047
7-bit uint #2 ........ 0.0010 ........ 0.0046
7-bit uint #3 ........ 0.0010 ........ 0.0046
5-bit sint #1 ........ 0.0025 ........ 0.0046
5-bit sint #2 ........ 0.0023 ........ 0.0046
5-bit sint #3 ........ 0.0024 ........ 0.0045
8-bit uint #1 ........ 0.0043 ........ 0.0081
8-bit uint #2 ........ 0.0043 ........ 0.0079
8-bit uint #3 ........ 0.0041 ........ 0.0080
16-bit uint #1 ....... 0.0064 ........ 0.0095
16-bit uint #2 ....... 0.0064 ........ 0.0091
16-bit uint #3 ....... 0.0064 ........ 0.0094
32-bit uint #1 ....... 0.0085 ........ 0.0114
32-bit uint #2 ....... 0.0077 ........ 0.0122
32-bit uint #3 ....... 0.0077 ........ 0.0120
64-bit uint #1 ....... 0.0085 ........ 0.0159
64-bit uint #2 ....... 0.0086 ........ 0.0157
64-bit uint #3 ....... 0.0086 ........ 0.0158
8-bit int #1 ......... 0.0042 ........ 0.0080
8-bit int #2 ......... 0.0042 ........ 0.0080
8-bit int #3 ......... 0.0042 ........ 0.0081
16-bit int #1 ........ 0.0065 ........ 0.0095
16-bit int #2 ........ 0.0065 ........ 0.0090
16-bit int #3 ........ 0.0056 ........ 0.0085
32-bit int #1 ........ 0.0067 ........ 0.0107
32-bit int #2 ........ 0.0066 ........ 0.0106
32-bit int #3 ........ 0.0063 ........ 0.0104
64-bit int #1 ........ 0.0072 ........ 0.0162
64-bit int #2 ........ 0.0073 ........ 0.0174
64-bit int #3 ........ 0.0072 ........ 0.0164
64-bit int #4 ........ 0.0077 ........ 0.0161
64-bit float #1 ...... 0.0053 ........ 0.0135
64-bit float #2 ...... 0.0053 ........ 0.0135
64-bit float #3 ...... 0.0052 ........ 0.0135
fix string #1 ....... -0.0002 ........ 0.0044
fix string #2 ........ 0.0035 ........ 0.0067
fix string #3 ........ 0.0035 ........ 0.0077
fix string #4 ........ 0.0033 ........ 0.0078
8-bit string #1 ...... 0.0059 ........ 0.0110
8-bit string #2 ...... 0.0063 ........ 0.0121
8-bit string #3 ...... 0.0064 ........ 0.0124
16-bit string #1 ..... 0.0099 ........ 0.0146
16-bit string #2 ..... 0.1522 ........ 0.1474
32-bit string ........ 0.1511 ........ 0.1483
wide char string #1 .. 0.0039 ........ 0.0084
wide char string #2 .. 0.0073 ........ 0.0123
8-bit binary #1 ...... 0.0040 ........ 0.0112
8-bit binary #2 ...... 0.0075 ........ 0.0123
8-bit binary #3 ...... 0.0077 ........ 0.0129
16-bit binary ........ 0.0096 ........ 0.0145
32-bit binary ........ 0.1535 ........ 0.1479
fix array #1 ......... 0.0008 ........ 0.0061
fix array #2 ......... 0.0121 ........ 0.0165
fix array #3 ......... 0.0193 ........ 0.0222
16-bit array #1 ...... 0.0607 ........ 0.0479
16-bit array #2 ........... S ............. S
32-bit array .............. S ............. S
complex array ........ 0.0749 ........ 0.0824
fix map #1 ........... 0.0329 ........ 0.0431
fix map #2 ........... 0.0161 ........ 0.0189
fix map #3 ........... 0.0205 ........ 0.0262
fix map #4 ........... 0.0252 ........ 0.0205
16-bit map #1 ........ 0.1016 ........ 0.0927
16-bit map #2 ............. S ............. S
32-bit map ................ S ............. S
complex map .......... 0.1096 ........ 0.1030
fixext 1 ............. 0.0157 ........ 0.0161
fixext 2 ............. 0.0175 ........ 0.0183
fixext 4 ............. 0.0156 ........ 0.0185
fixext 8 ............. 0.0163 ........ 0.0184
fixext 16 ............ 0.0164 ........ 0.0182
8-bit ext ............ 0.0158 ........ 0.0207
16-bit ext ........... 0.0203 ........ 0.0219
32-bit ext ........... 0.1614 ........ 0.1539
32-bit timestamp #1 .. 0.0195 ........ 0.0249
32-bit timestamp #2 .. 0.0188 ........ 0.0260
64-bit timestamp #1 .. 0.0207 ........ 0.0281
64-bit timestamp #2 .. 0.0212 ........ 0.0291
64-bit timestamp #3 .. 0.0207 ........ 0.0295
96-bit timestamp #1 .. 0.0222 ........ 0.0358
96-bit timestamp #2 .. 0.0228 ........ 0.0353
96-bit timestamp #3 .. 0.0210 ........ 0.0319
=============================================
Total 1.6432 1.9674
Skipped 4 4
Failed 0 0
Ignored 0 0
You may change default benchmark settings by defining the following environment variables:
Name | Default |
---|---|
MP_BENCH_TARGETS | pure_p,pure_u, see a list of available targets |
MP_BENCH_ITERATIONS | 100_000 |
MP_BENCH_DURATION | not set |
MP_BENCH_ROUNDS | 3 |
MP_BENCH_TESTS | -@slow, see a list of available tests |
For example:
export MP_BENCH_TARGETS=pure_p
export MP_BENCH_ITERATIONS=1000000
export MP_BENCH_ROUNDS=5
# a comma separated list of test names
export MP_BENCH_TESTS='complex array, complex map'
# or a group name
# export MP_BENCH_TESTS='-@slow' // @pecl_comp
# or a regexp
# export MP_BENCH_TESTS='/complex (array|map)/'
Another example, benchmarking both the library and the PECL extension:
MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
===========================================================================
Test/Target Packer BufferUnpacker msgpack_pack msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0031 ........ 0.0141 ...... 0.0055 ........ 0.0064
false ................ 0.0039 ........ 0.0154 ...... 0.0056 ........ 0.0053
true ................. 0.0038 ........ 0.0139 ...... 0.0056 ........ 0.0044
7-bit uint #1 ........ 0.0061 ........ 0.0110 ...... 0.0059 ........ 0.0046
7-bit uint #2 ........ 0.0065 ........ 0.0119 ...... 0.0042 ........ 0.0029
7-bit uint #3 ........ 0.0054 ........ 0.0117 ...... 0.0045 ........ 0.0025
5-bit sint #1 ........ 0.0047 ........ 0.0103 ...... 0.0038 ........ 0.0022
5-bit sint #2 ........ 0.0048 ........ 0.0117 ...... 0.0038 ........ 0.0022
5-bit sint #3 ........ 0.0046 ........ 0.0102 ...... 0.0038 ........ 0.0023
8-bit uint #1 ........ 0.0063 ........ 0.0174 ...... 0.0039 ........ 0.0031
8-bit uint #2 ........ 0.0063 ........ 0.0167 ...... 0.0040 ........ 0.0029
8-bit uint #3 ........ 0.0063 ........ 0.0168 ...... 0.0039 ........ 0.0030
16-bit uint #1 ....... 0.0092 ........ 0.0222 ...... 0.0049 ........ 0.0030
16-bit uint #2 ....... 0.0096 ........ 0.0227 ...... 0.0042 ........ 0.0046
16-bit uint #3 ....... 0.0123 ........ 0.0274 ...... 0.0059 ........ 0.0051
32-bit uint #1 ....... 0.0136 ........ 0.0331 ...... 0.0060 ........ 0.0048
32-bit uint #2 ....... 0.0130 ........ 0.0336 ...... 0.0070 ........ 0.0048
32-bit uint #3 ....... 0.0127 ........ 0.0329 ...... 0.0051 ........ 0.0048
64-bit uint #1 ....... 0.0126 ........ 0.0268 ...... 0.0055 ........ 0.0049
64-bit uint #2 ....... 0.0135 ........ 0.0281 ...... 0.0052 ........ 0.0046
64-bit uint #3 ....... 0.0131 ........ 0.0274 ...... 0.0069 ........ 0.0044
8-bit int #1 ......... 0.0077 ........ 0.0236 ...... 0.0058 ........ 0.0044
8-bit int #2 ......... 0.0087 ........ 0.0244 ...... 0.0058 ........ 0.0048
8-bit int #3 ......... 0.0084 ........ 0.0241 ...... 0.0055 ........ 0.0049
16-bit int #1 ........ 0.0112 ........ 0.0271 ...... 0.0048 ........ 0.0045
16-bit int #2 ........ 0.0124 ........ 0.0292 ...... 0.0057 ........ 0.0049
16-bit int #3 ........ 0.0118 ........ 0.0270 ...... 0.0058 ........ 0.0050
32-bit int #1 ........ 0.0137 ........ 0.0366 ...... 0.0058 ........ 0.0051
32-bit int #2 ........ 0.0133 ........ 0.0366 ...... 0.0056 ........ 0.0049
32-bit int #3 ........ 0.0129 ........ 0.0350 ...... 0.0052 ........ 0.0048
64-bit int #1 ........ 0.0145 ........ 0.0254 ...... 0.0034 ........ 0.0025
64-bit int #2 ........ 0.0097 ........ 0.0214 ...... 0.0034 ........ 0.0025
64-bit int #3 ........ 0.0096 ........ 0.0287 ...... 0.0059 ........ 0.0050
64-bit int #4 ........ 0.0143 ........ 0.0277 ...... 0.0059 ........ 0.0046
64-bit float #1 ...... 0.0134 ........ 0.0281 ...... 0.0057 ........ 0.0052
64-bit float #2 ...... 0.0141 ........ 0.0281 ...... 0.0057 ........ 0.0050
64-bit float #3 ...... 0.0144 ........ 0.0282 ...... 0.0057 ........ 0.0050
fix string #1 ........ 0.0036 ........ 0.0143 ...... 0.0066 ........ 0.0053
fix string #2 ........ 0.0107 ........ 0.0222 ...... 0.0065 ........ 0.0068
fix string #3 ........ 0.0116 ........ 0.0245 ...... 0.0063 ........ 0.0069
fix string #4 ........ 0.0105 ........ 0.0253 ...... 0.0083 ........ 0.0077
8-bit string #1 ...... 0.0126 ........ 0.0318 ...... 0.0075 ........ 0.0088
8-bit string #2 ...... 0.0121 ........ 0.0295 ...... 0.0076 ........ 0.0086
8-bit string #3 ...... 0.0125 ........ 0.0293 ...... 0.0130 ........ 0.0093
16-bit string #1 ..... 0.0159 ........ 0.0368 ...... 0.0117 ........ 0.0086
16-bit string #2 ..... 0.1547 ........ 0.1686 ...... 0.1516 ........ 0.1373
32-bit string ........ 0.1558 ........ 0.1729 ...... 0.1511 ........ 0.1396
wide char string #1 .. 0.0098 ........ 0.0237 ...... 0.0066 ........ 0.0065
wide char string #2 .. 0.0128 ........ 0.0291 ...... 0.0061 ........ 0.0082
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0040 ........ 0.0129 ...... 0.0120 ........ 0.0058
fix array #2 ......... 0.0279 ........ 0.0390 ...... 0.0143 ........ 0.0165
fix array #3 ......... 0.0415 ........ 0.0463 ...... 0.0162 ........ 0.0187
16-bit array #1 ...... 0.1349 ........ 0.1628 ...... 0.0334 ........ 0.0341
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0345 ........ 0.0391 ...... 0.0143 ........ 0.0168
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0459 ........ 0.0473 ...... 0.0151 ........ 0.0163
16-bit map #1 ........ 0.2518 ........ 0.2962 ...... 0.0400 ........ 0.0490
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.2380 ........ 0.2682 ...... 0.0545 ........ 0.0579
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total 1.5625 2.3866 0.7735 0.7243
Skipped 4 4 4 4
Failed 0 0 24 17
Ignored 24 24 0 7
With JIT:
MP_BENCH_TARGETS=pure_p,pure_u,pecl_p,pecl_u \
php -n -dextension=msgpack.so -dzend_extension=opcache.so \
-dpcre.jit=1 -dopcache.jit_buffer_size=64M -dopcache.jit=tracing -dopcache.enable=1 -dopcache.enable_cli=1 \
tests/bench.php
Example output
Filter: MessagePack\Tests\Perf\Filter\ListFilter
Rounds: 3
Iterations: 100000
===========================================================================
Test/Target Packer BufferUnpacker msgpack_pack msgpack_unpack
---------------------------------------------------------------------------
nil .................. 0.0001 ........ 0.0052 ...... 0.0053 ........ 0.0042
false ................ 0.0007 ........ 0.0060 ...... 0.0057 ........ 0.0043
true ................. 0.0008 ........ 0.0060 ...... 0.0056 ........ 0.0041
7-bit uint #1 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0041
7-bit uint #2 ........ 0.0021 ........ 0.0043 ...... 0.0062 ........ 0.0041
7-bit uint #3 ........ 0.0022 ........ 0.0044 ...... 0.0061 ........ 0.0040
5-bit sint #1 ........ 0.0030 ........ 0.0048 ...... 0.0062 ........ 0.0040
5-bit sint #2 ........ 0.0032 ........ 0.0046 ...... 0.0062 ........ 0.0040
5-bit sint #3 ........ 0.0031 ........ 0.0046 ...... 0.0062 ........ 0.0040
8-bit uint #1 ........ 0.0054 ........ 0.0079 ...... 0.0062 ........ 0.0050
8-bit uint #2 ........ 0.0051 ........ 0.0079 ...... 0.0064 ........ 0.0044
8-bit uint #3 ........ 0.0051 ........ 0.0082 ...... 0.0062 ........ 0.0044
16-bit uint #1 ....... 0.0077 ........ 0.0094 ...... 0.0065 ........ 0.0045
16-bit uint #2 ....... 0.0077 ........ 0.0094 ...... 0.0063 ........ 0.0045
16-bit uint #3 ....... 0.0077 ........ 0.0095 ...... 0.0064 ........ 0.0047
32-bit uint #1 ....... 0.0088 ........ 0.0119 ...... 0.0063 ........ 0.0043
32-bit uint #2 ....... 0.0089 ........ 0.0117 ...... 0.0062 ........ 0.0039
32-bit uint #3 ....... 0.0089 ........ 0.0118 ...... 0.0063 ........ 0.0044
64-bit uint #1 ....... 0.0097 ........ 0.0155 ...... 0.0063 ........ 0.0045
64-bit uint #2 ....... 0.0095 ........ 0.0153 ...... 0.0061 ........ 0.0045
64-bit uint #3 ....... 0.0096 ........ 0.0156 ...... 0.0063 ........ 0.0047
8-bit int #1 ......... 0.0053 ........ 0.0083 ...... 0.0062 ........ 0.0044
8-bit int #2 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0044
8-bit int #3 ......... 0.0052 ........ 0.0080 ...... 0.0062 ........ 0.0043
16-bit int #1 ........ 0.0089 ........ 0.0097 ...... 0.0069 ........ 0.0046
16-bit int #2 ........ 0.0075 ........ 0.0093 ...... 0.0063 ........ 0.0043
16-bit int #3 ........ 0.0075 ........ 0.0094 ...... 0.0062 ........ 0.0046
32-bit int #1 ........ 0.0086 ........ 0.0122 ...... 0.0063 ........ 0.0044
32-bit int #2 ........ 0.0087 ........ 0.0120 ...... 0.0066 ........ 0.0046
32-bit int #3 ........ 0.0086 ........ 0.0121 ...... 0.0060 ........ 0.0044
64-bit int #1 ........ 0.0096 ........ 0.0149 ...... 0.0060 ........ 0.0045
64-bit int #2 ........ 0.0096 ........ 0.0157 ...... 0.0062 ........ 0.0044
64-bit int #3 ........ 0.0096 ........ 0.0160 ...... 0.0063 ........ 0.0046
64-bit int #4 ........ 0.0097 ........ 0.0157 ...... 0.0061 ........ 0.0044
64-bit float #1 ...... 0.0079 ........ 0.0153 ...... 0.0056 ........ 0.0044
64-bit float #2 ...... 0.0079 ........ 0.0152 ...... 0.0057 ........ 0.0045
64-bit float #3 ...... 0.0079 ........ 0.0155 ...... 0.0057 ........ 0.0044
fix string #1 ........ 0.0010 ........ 0.0045 ...... 0.0071 ........ 0.0044
fix string #2 ........ 0.0048 ........ 0.0075 ...... 0.0070 ........ 0.0060
fix string #3 ........ 0.0048 ........ 0.0086 ...... 0.0068 ........ 0.0060
fix string #4 ........ 0.0050 ........ 0.0088 ...... 0.0070 ........ 0.0059
8-bit string #1 ...... 0.0081 ........ 0.0129 ...... 0.0069 ........ 0.0062
8-bit string #2 ...... 0.0086 ........ 0.0128 ...... 0.0069 ........ 0.0065
8-bit string #3 ...... 0.0086 ........ 0.0126 ...... 0.0115 ........ 0.0065
16-bit string #1 ..... 0.0105 ........ 0.0137 ...... 0.0128 ........ 0.0068
16-bit string #2 ..... 0.1510 ........ 0.1486 ...... 0.1526 ........ 0.1391
32-bit string ........ 0.1517 ........ 0.1475 ...... 0.1504 ........ 0.1370
wide char string #1 .. 0.0044 ........ 0.0085 ...... 0.0067 ........ 0.0057
wide char string #2 .. 0.0081 ........ 0.0125 ...... 0.0069 ........ 0.0063
8-bit binary #1 ........... I ............. I ........... F ............. I
8-bit binary #2 ........... I ............. I ........... F ............. I
8-bit binary #3 ........... I ............. I ........... F ............. I
16-bit binary ............. I ............. I ........... F ............. I
32-bit binary ............. I ............. I ........... F ............. I
fix array #1 ......... 0.0014 ........ 0.0059 ...... 0.0132 ........ 0.0055
fix array #2 ......... 0.0146 ........ 0.0156 ...... 0.0155 ........ 0.0148
fix array #3 ......... 0.0211 ........ 0.0229 ...... 0.0179 ........ 0.0180
16-bit array #1 ...... 0.0673 ........ 0.0498 ...... 0.0343 ........ 0.0388
16-bit array #2 ........... S ............. S ........... S ............. S
32-bit array .............. S ............. S ........... S ............. S
complex array ............. I ............. I ........... F ............. F
fix map #1 ................ I ............. I ........... F ............. I
fix map #2 ........... 0.0148 ........ 0.0180 ...... 0.0156 ........ 0.0179
fix map #3 ................ I ............. I ........... F ............. I
fix map #4 ........... 0.0252 ........ 0.0201 ...... 0.0214 ........ 0.0167
16-bit map #1 ........ 0.1027 ........ 0.0836 ...... 0.0388 ........ 0.0510
16-bit map #2 ............. S ............. S ........... S ............. S
32-bit map ................ S ............. S ........... S ............. S
complex map .......... 0.1104 ........ 0.1010 ...... 0.0556 ........ 0.0602
fixext 1 .................. I ............. I ........... F ............. F
fixext 2 .................. I ............. I ........... F ............. F
fixext 4 .................. I ............. I ........... F ............. F
fixext 8 .................. I ............. I ........... F ............. F
fixext 16 ................. I ............. I ........... F ............. F
8-bit ext ................. I ............. I ........... F ............. F
16-bit ext ................ I ............. I ........... F ............. F
32-bit ext ................ I ............. I ........... F ............. F
32-bit timestamp #1 ....... I ............. I ........... F ............. F
32-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #1 ....... I ............. I ........... F ............. F
64-bit timestamp #2 ....... I ............. I ........... F ............. F
64-bit timestamp #3 ....... I ............. I ........... F ............. F
96-bit timestamp #1 ....... I ............. I ........... F ............. F
96-bit timestamp #2 ....... I ............. I ........... F ............. F
96-bit timestamp #3 ....... I ............. I ........... F ............. F
===========================================================================
Total 0.9642 1.0909 0.8224 0.7213
Skipped 4 4 4 4
Failed 0 0 24 17
Ignored 24 24 0 7
Note that the msgpack extension (v2.1.2) doesn't support ext, bin and UTF-8 str types.
The library is released under the MIT License. See the bundled LICENSE file for details.
Author: rybakit
Source Code: https://github.com/rybakit/msgpack.php
License: MIT License