PrettyTensor
A PrettyTensor is a wrapper on a Tensor that simplifies graph building.
A PrettyTensor behaves like a Tensor, but also supports a chainable object syntax to quickly define neural networks and other layered architectures in TensorFlow.
result = (pretty_tensor.wrap(input_data)
.flatten()
.fully_connected(200, activation_fn=tf.nn.relu)
.fully_connected(10, activation_fn=None)
.softmax(labels, name=softmax_name))
PrettyTensor has 3 modes of operation that share the ability to chain methods.
In the normal mode, every time a method is called a new PrettyTensor is created. This allows for easy chaining, and yet you can still use any particular object multiple times. This makes it easy to branch your network.
In sequential mode, an internal variable - the head - keeps track of the most recent output tensor, thus allowing for breaking call chains into multiple statements:
seq = pretty_tensor.wrap(input_data).sequential()
seq.flatten()
seq.fully_connected(200, activation_fn=tf.nn.relu)
seq.fully_connected(10, activation_fn=None)
result = seq.softmax(labels, name=softmax_name)
To return to the normal mode, just use as_layer().
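For example, a minimal sketch (using the same pretty_tensor and tf imports as above): as_layer() snapshots the current head, so the snapshot keeps pointing at that layer even after the sequential moves on.
seq = pretty_tensor.wrap(input_data).sequential()
seq.flatten()
seq.fully_connected(200, activation_fn=tf.nn.relu)
hidden = seq.as_layer()  # immutable snapshot of the 200-unit layer
seq.fully_connected(10, activation_fn=None)
# hidden still refers to the 200-unit layer, not the 10-unit output.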
It is important to note that in sequential mode, self is always returned! This means that the following 2 definitions are equivalent:
def network1(input_data):
seq = pretty_tensor.wrap(input_data).sequential()
seq.flatten()
seq.fully_connected(200, activation_fn=(tf.nn.relu,))
seq.fully_connected(10, activation_fn=None)
def network2(input_data):
seq = pretty_tensor.wrap(input_data).sequential()
x = seq.flatten()
y = x.fully_connected(200, activation_fn=(tf.nn.relu,))
# x refers to the sequential here, whose head points at y!
z = x.fully_connected(10, activation_fn=None)
More complex networks can be built using the first class methods of branch and join. branch creates a separate PrettyTensor object that points to the current head when it is called, and this allows the user to define a separate tower that either ends in a regression target, output, or rejoins the network. Rejoining allows the user to define composite layers like inception. join, on the other hand, can be used to join multiple inputs or to rejoin a composite layer. The default join operation is to concat on the last dimension (depth-concat), but custom joins such as Add are also supported.
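As a hedged sketch only (the join argument form is assumed from the description above and may differ by version), branching in normal mode and rejoining with the default depth-concat could look like:
stem = pretty_tensor.wrap(input_data)
tower_a = stem.conv2d([1, 1], 64)
tower_b = stem.conv2d([1, 1], 64).conv2d([3, 3], 64)
rejoined = tower_a.join([tower_b])  # default join depth-concats the towers
result = rejoined.flatten().fully_connected(10, activation_fn=None)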
In addition to the atoms of branch and join, PrettyTensor provides a clean syntax called subdivide for when the user needs to branch and rejoin for a composite layer. subdivide breaks the input into the requested number of towers and then automatically rejoins the towers after the block completes. This makes it so that the indentation matches the logical structure of the network.
seq = pretty_tensor.wrap(input_data).sequential()
with seq.subdivide(2) as [a, b]:
a.conv2d([1, 1], 64)
b.conv2d([1, 1], 64).conv2d([3, 3], 64)
seq.flatten()
seq.fully_connected(200, activation_fn=(tf.nn.relu,))
seq.fully_connected(10, activation_fn=None)
result = seq.softmax(labels, name=softmax_name)
[TOC]
Computes the absolute value of a tensor.
Given a tensor of real numbers x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).
A Tensor or SparseTensor the same size and type as x with absolute values.
Adds a loss and returns a wrapper for that loss.
Applies the given operation to this layer without adding any summaries.
A new layer with operation applied.
Applies the given operation to input_layer and creates a summary.
A new layer with operation applied.
Returns a PrettyTensor snapshotted to the current tensor or sequence.
The primary use case of this is to break out of a sequential.
An immutable PrettyTensor.
Attaches the template to this such that _key=this layer.
Note: names were chosen to avoid conflicts with any likely unbound_var keys.
A new layer with operation applied.
Performs average pooling.
kernel is the patch that will be pooled and it describes the pooling along each of the 4 dimensions. stride is how big to take each step.
Because more often than not, pooling is only done on the width and height of the image, the following shorthands are supported:
An int (e.g. 3) pools a square patch ([b, c, r, d] = [1, 3, 3, 1]).
A singleton list (e.g. [3]) likewise pools a square patch ([b, c, r, d] = [1, 3, 3, 1]).
A list of length 2 (e.g. [3, 2]) pools rows and columns separately ([b, c, r, d] = [1, 3, 2, 1]).
Use pt.PAD_SAME or pt.PAD_VALID to control the padding.
Handle to this layer.
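A hedged usage sketch (pt is assumed to be the prettytensor module; the method name average_pool and the edges keyword are assumptions based on this entry):
pooled = (pt.wrap(images)
          .conv2d(5, 20, activation_fn=tf.nn.relu)
          .average_pool(2, 2, edges=pt.PAD_SAME)  # 2x2 patch, stride 2
          .flatten()
          .fully_connected(10, activation_fn=None))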
Batch normalize this layer.
This only supports global batch normalization and it can be enabled for all convolutional layers by setting the default 'batch_normalize' to True. learned_moments_update_rate, variance_epsilon and scale_after_normalization need to either be set here or be set in defaults as well.
Handle to the generated layer.
Performs bilinear sampling. This must be a rank 4 Tensor.
Implements the differentiable sampling mechanism with bilinear kernel in https://arxiv.org/abs/1506.02025.
Given (x, y) coordinates for each output pixel, use bilinear sampling on the input_layer to fill the output.
Handle to this layer
Calculates the binary cross entropy of the input_ vs inputs.
Expects unscaled logits. Do not pass in results of sigmoid operation.
A Tensor with a weight per example.
A Tensor that is the same shape as the input_ that can be used to scale individual prediction losses. See tf.tile to turn a per-column weight vector into a per_output_weights Tensor.
Binary cross entropy loss after sigmoid operation.
Cleaves a tensor into a sequence; this is the inverse of squash.
Recurrent methods unroll across an array of Tensors with each one being a timestep. This cleaves the first dim so that the result is an array of Tensors, one per timestep. It is the inverse of squash_sequence.
A PrettyTensor containing an array of tensors.
Concatenates input PrettyTensor with other_tensors along the specified dim.
This adds the Pretty Tensor passed via input_layer to the front of the list of tensors to concat.
A new PrettyTensor.
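A minimal sketch, assuming the signature concat(concat_dim, other_tensors) implied by the description above (pt is the prettytensor module; feature names are placeholders):
merged = pt.wrap(features_a).concat(1, [features_b, features_c])
# features_a is placed at the front of the list of tensors being concatenated.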
Adds a convolution to the stack of operations.
kernel is the patch that will be convolved and it describes the convolution along each of the 4 dimensions. The stride is how big to take each step.
The same shorthands as for pooling are supported:
An int (e.g. 3) gives a square kernel ([b, c, r, d] = [1, 3, 3, 1]).
A singleton list (e.g. [3]) likewise gives a square kernel ([b, c, r, d] = [1, 3, 3, 1]).
A list of length 2 (e.g. [3, 2]) gives separate row and column sizes ([b, c, r, d] = [1, 3, 2, 1]).
phase: The phase of graph construction; see pt.Phase.
Handle to the generated layer.
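A short, hedged example chaining convolutions with the kernel shorthands described above (pt is the prettytensor module; images is a rank-4 [batch, height, width, channels] Tensor):
logits = (pt.wrap(images)
          .conv2d(5, 32, activation_fn=tf.nn.relu)  # 5x5 kernel, 32 output channels
          .max_pool(2, 2)
          .conv2d([3, 3], 64, activation_fn=tf.nn.relu)
          .flatten()
          .fully_connected(10, activation_fn=None))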
Calculates the Cross Entropy of input_ vs labels.
A loss.
Adds a depth-wise convolution to the stack of operations.
A depthwise convolution performs the convolutions one channel at a time and produces an output with depth channel_multiplier * input_depth.
kernel is the patch that will be convolved and it describes the convolution along each of the 4 dimensions. The stride is how big to take each step.
The same kernel shorthands as for pooling are supported:
An int (e.g. 3) gives a square kernel ([b, c, r, d] = [1, 3, 3, 1]).
A singleton list (e.g. [3]) likewise gives a square kernel ([b, c, r, d] = [1, 3, 3, 1]).
A list of length 2 (e.g. [3, 2]) gives separate row and column sizes ([b, c, r, d] = [1, 3, 2, 1]).
edges: Either pt.DIM_SAME to use 0s for the out of bounds area or pt.DIM_VALID to shrink the output size and only use valid input pixels.
phase: The phase of graph construction; see pt.Phase.
Handle to the generated layer.
Performs a diagonal matrix multiplication with a learned vector.
This creates the parameter vector.
phase: The phase of graph construction; see pt.Phase.
A Pretty Tensor handle to the layer.
Applies dropout if this is in the train phase.
Looks up values in a learned embedding lookup.
embedding_count embedding tensors are created, each with shape embedding_shape. The values are by default initialized with a standard deviation of 1, but in some cases zero is a more appropriate initial value. The embeddings themselves are learned through normal backpropagation.
You can initialize these to a fixed embedding and follow with stop_gradients() to use a previously learned embedding.
N.B. This uses tf.nn.embedding_lookup under the hood, so by default the lookup is id % embedding_count.
phase: The phase of graph construction; see pt.Phase.
The ids are taken from the input_layer.
Evaluates this tensor in a Session
.
Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.
N.B. Before invoking Tensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.
feed_dict: A dictionary that maps Tensor objects to feed values. See Session.run() for a description of the valid feed values.
session: The Session to be used to evaluate this tensor. If none, the default session will be used.
A numpy array corresponding to the value of this tensor.
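For example, a sketch (result comes from the chained example at the top of this reference; input_data and batch are placeholder names):
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # With a default session active, session= may be omitted.
    value = result.eval(feed_dict={input_data: batch}, session=sess)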
Calculates the total ratio of correct predictions across all examples seen.
In test and infer mode, this creates variables in the graph collection pt.GraphKeys.TEST_VARIABLES and does not add them to tf.GraphKeys.ALL_VARIABLES. This means that you must initialize them separately from tf.global_variables_initializer().
In the case of topk == 1, this breaks ties left-to-right; in all other cases it follows tf.nn.in_top_k. Note: the tie behavior will change in the future.
A Tensor containing the target for this layer.
A Pretty Tensor with the ratio of correct to total examples seen.
Calculates the total of correct predictions and example count.
In test and infer mode, this creates variables in the graph collection pt.GraphKeys.TEST_VARIABLES and does not add them to tf.GraphKeys.ALL_VARIABLES. This means that you must initialize them separately from tf.global_variables_initializer().
In the case of topk == 1, this breaks ties left-to-right; in all other cases it follows tf.nn.in_top_k. Note: the tie behavior will change in the future.
A Tensor containing the target for this layer, or an integer Tensor with the sparse one-hot indices.
A Pretty Tensor that contains correct_predictions, num_examples.
Calculates the total of correct predictions and example count.
In test and infer mode, this creates variables in the graph collection pt.GraphKeys.TEST_VARIABLES and does not add them to tf.GraphKeys.ALL_VARIABLES. This means that you must initialize them separately from tf.global_variables_initializer().
This breaks ties left-to-right.
A Tensor containing the target for this layer, or an integer Tensor with the sparse one-hot indices.
A Pretty Tensor that contains correct_predictions, num_examples.
Calculates the total ratio of correct predictions across all examples seen.
In test and infer mode, this creates variables in the graph collection pt.GraphKeys.TEST_VARIABLES and does not add them to tf.GraphKeys.ALL_VARIABLES. This means that you must initialize them separately from tf.global_variables_initializer().
This breaks ties left-to-right.
An integer Tensor with the sparse one-hot indices as [batch, num_true].
A Pretty Tensor with the ratio of correct to total examples seen.
Computes the precision and recall of the prediction vs the labels.
Precision and Recall.
Flattens this.
If preserve_batch is True, the result is rank 2 and the first dim (batch) is unchanged. Otherwise the result is rank 1.
A LayerWrapper with the flattened tensor.
Adds the parameters for a fully connected layer and returns a tensor.
The current PrettyTensor must have rank 2.
phase: The phase of graph construction; see pt.Phase.
A Pretty Tensor handle to the layer.
Gated recurrent unit memory cell (GRU).
phase: The phase of graph construction; see pt.Phase.
A RecurrentResult.
Returns True if this holds a sequence and False if it holds a Tensor.
Returns true if this is a sequential builder.
NB: A sequential builder is a mode of construction and is different from whether or not this holds a sequence of tensors.
Whether this is a sequential builder.
Joins the provided PrettyTensors with this using the join function.
Returns self.
l1 normalizes x.
x normalized along dim.
Applies an L1 Regression (Sum of Absolute Error) to the target.
Normalizes along dimension dim using an L2 norm.
For a 1-D tensor with dim = 0, computes
output = x / sqrt(max(sum(x**2), epsilon))
For x with more dimensions, independently normalizes each 1-D slice along dimension dim.
epsilon: A lower bound value for the norm. Will use sqrt(epsilon) as the divisor if norm < sqrt(epsilon).
A Tensor with the same shape as x.
Applies an L2 Regression (Sum of Squared Error) to the target.
Creates a leaky_relu.
This is an alternate non-linearity to relu. The leaky part of the relu may prevent dead neurons in a model since the gradient doesn't go completely to 0.
x if x > 0 otherwise 0.01 * x.
Computes natural logarithm of x element-wise.
I.e., \(y = \log_e x\).
A Tensor. Has the same type as x.
Long short-term memory cell (LSTM).
phase: The phase of graph construction; see pt.Phase.
A RecurrentResult.
Maps the given function across this sequence.
To map an entire template across the sequence, use the as_fn method on the template.
A new sequence Pretty Tensor.
Performs max pooling.
kernel is the patch that will be pooled and it describes the pooling along each of the 4 dimensions. stride is how big to take each step.
Because more often than not, pooling is only done on the width and height of the image, the following shorthands are supported:
An int (e.g. 3) pools a square patch ([b, c, r, d] = [1, 3, 3, 1]).
A singleton list (e.g. [3]) likewise pools a square patch ([b, c, r, d] = [1, 3, 3, 1]).
A list of length 2 (e.g. [3, 2]) pools rows and columns separately ([b, c, r, d] = [1, 3, 2, 1]).
Use pt.PAD_SAME or pt.PAD_VALID to control the padding.
Handle to this layer.
Computes the "logical and" of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[True, True]
# [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.all (NumPy compatibility).
Computes the "logical or" of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[True, True]
# [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.any (NumPy compatibility).
Joins a string Tensor across the given dimensions.
Computes the string join across dimensions in the given string Tensor of shape [d_0, d_1, ..., d_n-1]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1.
For example:
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
axis: A Tensor of type int32. The dimensions to reduce over. Dimensions are reduced in the order specified. Omitting axis is equivalent to passing [n-1, n-2, ..., 0]. Negative indices from -n to -1 are supported.
keep_dims: A bool. Defaults to False. If True, retain reduced dimensions with length 1.
separator: A string. Defaults to "". The separator to use when joining.
A Tensor of type string. Has shape equal to that of the input with reduced dimensions removed or set to 1 depending on keep_dims.
Computes the maximum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.max (NumPy compatibility).
Computes the mean of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[1., 1.]
# [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.mean (NumPy compatibility).
Computes the minimum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.min (NumPy compatibility).
Computes the product of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.prod (NumPy compatibility).
Computes the sum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
axis: The dimensions to reduce. If None (the default), reduces all dimensions.
The reduced tensor.
Equivalent to np.sum (NumPy compatibility).
Computes rectified linear: max(features, 0).
A Tensor. Has the same type as features.
Computes Rectified Linear 6: min(max(features, 0), 6).
A Tensor with the same type as features.
Reshapes this tensor to the given spec.
This provides additional functionality over the basic tf.reshape. In particular, it provides the ability to specify some dimensions as unchanged (pt.DIM_SAME), which can greatly aid in inferring the extra dimensions (pt.DIM_REST) and help maintain more shape information going forward.
A shape_spec can be a list or tuple of numbers specifying the new shape, but also may include the following shorthands for using values from the shape of the input:
pt.DIM_SAME ('_') will use the corresponding value from the current shape.
pt.DIM_REST ('*') can be used to specify the remainder of the values.
A compact syntax is also supported for setting shapes. If the new shape is only composed of DIM_SAME, DIM_REST/-1 and single digit integers, then a string can be passed in. Integers larger than 9 must be passed in as part of a sequence.
The main difference between this and tf.reshape is that DIM_SAME allows more shape inference possibilities. For example: given a shape of [None, 3, 7], if flattening were desired then the caller would have to compute the shape and request a reshape of [-1, 21] to flatten. Instead of brittle or repeated code, this can be inferred if we know that the first dim is being copied.
Another example that is impossible to express as a list of integers is if the starting shape were [None, 3, None] and we wanted to do the same flattening. While the shape cannot be inferred, this can still be expressed as '_*' (a.k.a. [DIM_SAME, DIM_REST]).
A Pretty Tensor with the reshaped tensor.
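For instance, the flattening described above can be written with the compact string syntax (a sketch; pt is the prettytensor module and activations is a placeholder name):
# Starting shape [None, 3, None]: keep the batch dim, flatten the rest.
flat = pt.wrap(activations).reshape('_*')
# Equivalent long form:
flat = pt.wrap(activations).reshape([pt.DIM_SAME, pt.DIM_REST])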
Unrolls gru_cell over the input.
This takes an input that is a list of length timesteps where each element is a Tensor of [batch, *Dims] and unrolls the recurrent cell. The input and state to the cell are managed by this method, but the rest of the arguments are passed through.
Gated recurrent unit memory cell (GRU).
phase: The phase of graph construction; see pt.Phase.
A RecurrentResult.
Unrolls lstm_cell over the input.
This takes an input that is a list of length timesteps where each element is a Tensor of [batch, *Dims] and unrolls the recurrent cell. The input and state to the cell are managed by this method, but the rest of the arguments are passed through.
Long short-term memory cell (LSTM).
phase: The phase of graph construction; see pt.Phase.
A RecurrentResult.
Computes sigmoid of x element-wise.
Specifically, y = 1 / (1 + exp(-x)).
A Tensor with the same type as x if x.dtype != qint32, otherwise the return type is quint8.
Equivalent to scipy.special.expit (SciPy compatibility).
Extracts a slice from a tensor.
This operation extracts a slice of size size from a tensor input starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the 'i'th dimension of 'input' that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input. In other words, begin[i] is the offset into the 'i'th dimension of 'input' that you want to slice from.
begin is zero-based; 'size' is one-based. If size[i] is -1, all remaining elements in dimension i are included in the slice. In other words, this is equivalent to setting:
size[i] = input.dim_size(i) - begin[i]
This operation requires that:
0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]
Examples:
# 'input' is [[[1, 1, 1], [2, 2, 2]],
# [[3, 3, 3], [4, 4, 4]],
# [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
[4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
[[5, 5, 5]]]
A tensor with the selected slice.
Applies softmax and if labels is not None, then it also adds a loss.
A tuple of a handle to the softmax and a handle to the loss tensor.
Computes the softmax.
Creates a fully-connected linear layer followed by a softmax.
This returns (softmax, loss) where loss is the cross entropy loss.
The weights and bias arguments behave as in fully_connected.
A named tuple holding:
softmax: The result of this layer with softmax normalization.
loss: The cross entropy loss.
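Putting it together, a hedged end-to-end sketch (pt is the prettytensor module; the apply_optimizer helper is taken from the PrettyTensor tutorials and is not documented in this section):
softmax, loss = (pt.wrap(input_data)
                 .flatten()
                 .fully_connected(200, activation_fn=tf.nn.relu)
                 .softmax_classifier(10, labels=labels))
train_op = pt.apply_optimizer(tf.train.GradientDescentOptimizer(0.01),
                              losses=[loss])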
Applies softmax and if labels is not None, then it adds a sampled loss.
This is a faster way to train a softmax classifier over a huge number of classes. It is generally an underestimate of the full softmax loss.
At inference time, you can compute full softmax probabilities with the expression tf.nn.softmax(tf.matmul(inputs, weights) + biases).
See tf.nn.sampled_softmax_loss for more details.
Also see Section 3 of Jean et al., 2014 (pdf) for the math.
Note: If you depend on the softmax part of the loss, then you will lose most of the speed benefits of sampling the loss. It should be used for evaluation only and not executed on every update op.
Note: This is not checkpoint compatible with softmax_classifier since it optimizes a transpose by pushing it down to the fully_connected layer.
num_classes: An int. The number of possible classes.
labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. Note that this format differs from the labels argument of nn.softmax_cross_entropy_with_logits.
num_sampled: An int. The number of classes to randomly sample per batch.
num_true: An int. The number of target classes per training example; defaults to the second dim of labels if known, or 1.
sampled_values: A tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function (if None, we default to log_uniform_candidate_sampler).
remove_accidental_hits: A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes. Default is True.
weights: As in fully_connected. Note: this is the transpose of a normal fully_connected input layer!
bias: As in fully_connected.
A tuple of handles to the logits (fully connected layer) and loss.
Computes softplus: log(exp(features) + 1).
A Tensor. Has the same type as features.
Computes softsign: features / (abs(features) + 1).
A Tensor. Has the same type as features.
Calculates the Cross Entropy of input_ vs labels.
A Tensor with class ordinals.
A loss.
Splits this Tensor along the split_dim into num_splits equal chunks.
Examples:
[1, 2, 3, 4] -> [1, 2], [3, 4]
[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [2, 2]], [[3, 3], [4, 4]]
A list of PrettyTensors.
Computes square root of x element-wise.
I.e., \(y = \sqrt{x} = x^{1/2}\).
A Tensor or SparseTensor, respectively. Has the same type as x.
Computes square of x element-wise.
I.e., \(y = x * x = x^2\).
A Tensor or SparseTensor. Has the same type as x.
Squashes a sequence into a single Tensor with dim 1 being time*batch.
A sequence is an array of Tensors, which is not appropriate for most operations; this squashes them together into a single Tensor.
Defaults are assigned such that cleave_sequence requires no args.
Removes dimensions of size 1 from the shape of a tensor.
This operation returns a tensor of the same type with all singleton dimensions removed. If you don't want to remove all singleton dimensions, you can remove specific size 1 dimensions by specifying a list of squeeze_dims.
The squeezed tensor.
Stacks a list of rank-R tensors into one rank-(R+1) tensor.
Packs the list of tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of length N of tensors of shape (A, B, C):
if axis == 0 then the output tensor will have the shape (N, A, B, C).
if axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.
For example:
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
stack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
This is the opposite of unstack. The numpy equivalent is
tf.stack([x, y, z]) = np.asarray([x, y, z])
axis: An int. The axis to stack along. Defaults to the first dimension. Supports negative indexes.
output: A stacked Tensor with the same type as values.
Raises a ValueError if axis is out of the range [-(R+1), R+1).
Cuts off the gradient at this point.
This works on both sequence and regular Pretty Tensors.
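A minimal sketch, assuming the method is exposed as stop_gradient (the embedding entry earlier refers to it as stop_gradients; pt is the prettytensor module and pretrained_features is a placeholder name):
# Use a pretrained tensor as a fixed feature extractor.
frozen = pt.wrap(pretrained_features).stop_gradient()
# Gradients will not flow back into pretrained_features during training.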
Computes hyperbolic tangent of x element-wise.
A Tensor or SparseTensor, respectively, with the same type as x if x.dtype != qint32, otherwise the return type is quint8.
Returns the shape of a tensor.
This operation returns a 1-D integer tensor representing the shape of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]
out_type: The specified output type of the operation (int32 or int64). Defaults to tf.int32.
A Tensor of type out_type.
Converts a vector that specifies one-hot per batch into a dense version.
One dense vector for each item in the batch.
Casts a tensor to type float64.
A Tensor or SparseTensor with same shape as x with type float64.
Raises a TypeError if x cannot be cast to the float64.
Casts a tensor to type float32.
A Tensor or SparseTensor with same shape as x with type float32.
Raises a TypeError if x cannot be cast to the float32.
Casts a tensor to type int32.
A Tensor or SparseTensor with same shape as x with type int32.
Raises a TypeError if x cannot be cast to the int32.
Casts a tensor to type int64.
A Tensor or SparseTensor with same shape as x with type int64.
Raises a TypeError if x cannot be cast to the int64.
Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.
Unpacks num tensors from value by chipping it along the axis dimension. If num is not specified (the default), it is inferred from value's shape. If value.shape[axis] is not known, ValueError is raised.
For example, given a tensor of shape (A, B, C, D):
If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split).
If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.
This is the opposite of pack. The numpy equivalent is
tf.unstack(x, n) = list(x)
num: An int. The length of the dimension axis. Automatically inferred if None (the default).
axis: An int. The axis to unstack along. Defaults to the first dimension. Supports negative indexes.
The list of Tensor objects unstacked from value.
Raises a ValueError if num is unspecified and cannot be inferred, or if axis is out of the range [-R, R).
Unzips this Tensor along the split_dim into num_splits equal chunks.
Examples:
[1, 2, 3, 4] -> [1, 3], [2, 4]
[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [3, 3]], [[2, 2], [4, 4]]
A list of PrettyTensors.
defaults_scope(activation_fn=None, batch_normalize=None, l2loss=None, learned_moments_update_rate=None, parameter_modifier=None, phase=None, scale_after_normalization=None, summary_collections=None, trainable_variables=None, unroll=None, variable_collections=None, variance_epsilon=None)
Creates a scope for the defaults that are used in a with block.
Note: defaults_scope supports nesting where later defaults can be overridden. Also, an explicitly given keyword argument on a method always takes precedence.
In addition to setting defaults for some methods, this also can control:
summary_collections: Choose which collection to place summaries in or disable with None.
trainable_variables: Boolean indicating if variables are trainable.
variable_collections: Default collections in which to place variables; tf.GraphKeys.GLOBAL_VARIABLES is always included.
The supported defaults are: activation_fn, batch_normalize, l2loss, learned_moments_update_rate, parameter_modifier, phase, scale_after_normalization, unroll, and variance_epsilon.
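A hedged sketch of defaults_scope in use (pt is the prettytensor module); the explicitly passed activation_fn on the last layer overrides the scoped default, as noted above:
with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
    result = (pt.wrap(input_data)
              .flatten()
              .fully_connected(200)                      # picks up relu and l2loss
              .fully_connected(10, activation_fn=None))  # explicit argument wins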
Sets the name scope for future operations.
Returns a PrettyTensor that points to sequence.
Returns a PrettyTensor that points to tensor.
Author: google
Official Website: https://github.com/google/prettytensor/blob/master/docs/PrettyTensor.md
Simple scrolling events for d3 graphs. Based on stack
graph-scroll takes a selection of explanatory text sections and dispatches active events as different sections are scrolled into view. These active events can be used to update a chart's state.
d3.graphScroll()
.sections(d3.selectAll('#sections > div'))
.on('active', function(i){ console.log(i + 'th section active') })
The top most element scrolled fully into view is classed graph-scroll-active. This makes it easy to highlight the active section with css:
#sections > div{
opacity: .3
}
#sections div.graph-scroll-active{
opacity: 1;
}
To support headers and intro images/text, we use a container element containing the explanatory text and graph.
<h1>Page Title</h1>
<div id='container'>
<div id='graph'></div>
<div id='sections'>
<div>Section 0</div>
<div>Section 1</div>
<div>Section 2</div>
</div>
</div>
<h1>Footer</h1>
If these elements are passed to graphScroll as selections with container and graph, every element in the graph selection will be classed graph-scroll-graph if the top of the container is out of view.
d3.graphScroll()
.graph(d3.selectAll('#graph'))
.container(d3.select('#container'))
.sections(d3.selectAll('#sections > div'))
.on('active', function(i){ console.log(i + 'th section active') })
When the graph starts to scroll out of view, position: sticky keeps the graph element stuck to the top of the page while the text scrolls by.
#container{
position: relative;
}
#sections{
width: 340px;
}
#graph{
margin-left: 40px;
width: 500px;
position: sticky;
top: 0px;
float: right;
}
On mobile, centering the graph and sections while adding some padding for the first slide is a good option:
@media (max-width: 925px) {
#graph{
width: 100%;
margin-left: 0px;
float: none;
}
#sections{
position: relative;
margin: 0px auto;
padding-top: 400px;
}
}
Adjusting the number of pixels scrolled before a new section is triggered is also helpful on mobile (defaults to 200 pixels):
graphScroll.offset(300)
To update or replace a graphScroll instance, pass a string to eventId to remove the old event listeners:
graphScroll.eventId('uniqueId1')
Author: 1wheel
Source Code: https://github.com/1wheel/graph-scroll
License: MIT license
Parts of the world are still in lockdown, while others are returning to some semblance of normalcy. Either way, while the last few months have given some things pause, they have boosted others. It seems like developments in the world of Graphs are among those that have been boosted.
An abundance of educational material on all things graph has been prepared and delivered online, and is now freely accessible, with more on the way.
Graph databases have been making progress and announcements, repositioning themselves by a combination of releasing new features, securing additional funds, and entering strategic partnerships.
A key graph database technology, RDF*, which enables compatibility between RDF and property graph databases, is gaining momentum and tool support.
And more cutting edge research combining graph AI and knowledge graphs is seeing the light, too. Buckle up and enjoy some graph therapy.
Stanford’s series of online seminars featured some of the world’s leading experts on all things graph. If you missed it, or if you’d like to have an overview of what was said, you can find summaries for each lecture in this series of posts by Bob Kasenchak and Ahren Lehnert. Videos from the lectures are available here.
Stanford University’s computer science department is offering a free class on Knowledge Graphs available to the public. Stanford is also making recordings of the class available via the class website.
Another opportunity to get up to speed with educational material: The entire program of the course “Information Service Engineering” at KIT - Karlsruhe Institute of Technology, is delivered online and made freely available on YouTube. It includes topics such as ontology design, knowledge graph programming, basic graph theory, and more.
Knowledge representation as a prerequisite for knowledge graphs. Learn about knowledge representation, ontologies, RDF(S), OWL, SPARQL, etc.
Ontology may sound like a formal term, while knowledge graph is a more approachable one. But the two are related, and so are ontology and AI. Without a consistent, thoughtful approach to developing, applying, and evolving an ontology, AI systems lack the underpinning that would allow them to be smart enough to make an impact.
The ontology is an investment that will continue to pay off, argue Seth Earley and Josh Bernoff in Harvard Business Review, making the case for how businesses may benefit from a knowledge-centric approach.
Even after multiple generations of investments and billions of dollars of digital transformations, organizations struggle to use data to improve customer service, reduce costs, and speed the core processes that provide competitive advantage. AI was supposed to help with that.
Besides AI, knowledge graphs have a part to play in the Cloud, too. State is good, and lack of support for Stateful Cloud-native applications is a roadblock for many enterprise use-cases, writes Dave Duggal.
Graph knowledge bases are an old idea now being revisited to model complex, distributed domains. Combining high-level abstraction with Cloud-native design principles offers efficient “Context-as-a-Service” for hydrating stateless services. Graph knowledge-based systems can enable composition of Cloud-native services into event-driven dataflow processes.
Kubernetes also touches upon Organizational Knowledge, and that may be modeled as a Knowledge Graph.
Extending graph knowledge bases to model distributed systems creates a new kind of information system, one intentionally designed for today’s IT challenges.
The Enterprise Knowledge Graph Foundation was recently established to define best practices and mature the marketplace for EKG adoption, with a launch webinar on June the 23rd.
The Foundation defines its mission as including adopting semantic standards, developing best practices for accelerated EKG deployment, curating a repository of reusable models and resources, building a mechanism for engagement and shared knowledge, and advancing the business cases for EKG adoption.
The Enterprise Knowledge Graph Maturity Model (EKG/MM) is the industry-standard definition of the capabilities required for an enterprise knowledge graph. It establishes standard criteria for measuring progress and sets out the practical questions that all involved stakeholders ask to ensure trust, confidence and usage flexibility of data. Each capability area provides a business summary denoting its importance, a definition of the added value from semantic standards and scoring criteria based on five levels of defined maturity.
Enterprise Knowledge Graphs is what the Semantic Web Company (SWC) and Ontotext have been about for a long time, too. Two of the vendors that have been in this space the longest just announced a strategic partnership: Ontotext, a graph database and platform provider, meets SWC, a management and added-value layer that sits on top.
SWC and Ontotext CEOs emphasize how their portfolios are complementary, while the press release states that the companies have implemented a seamless integration of the PoolParty Semantic Suite™ v.8 with the GraphDB™ and Ontotext Platform, which offers benefits for many use cases.
#database #artificial intelligence #graph databases #rdf #graph analytics #knowledge graph #graph technology
As 2020 is coming to an end, let’s see it off in style. Our journey in the world of Graph Analytics, Graph Databases, Knowledge Graphs and Graph AI culminates.
The representation of the relationships among data, information, knowledge and --ultimately-- wisdom, known as the data pyramid, has long been part of the language of information science. Digital transformation has made this relevant beyond the confines of information science. COVID-19 has brought years’ worth of digital transformation in just a few short months.
In this new knowledge-based digital world, encoding and making use of business and operational knowledge is the key to making progress and staying competitive. So how do we go from data to information, and from information to knowledge? This is the key question Knowledge Connexions aims to address.
Graphs in all shapes and forms are a key part of this.
Knowledge Connexions is a visionary event featuring a rich array of technological building blocks to support the transition to a knowledge-based economy: Connecting data, people and ideas, building a global knowledge ecosystem.
The Year of the Graph will be there, in the workshop “From databases to platforms: the evolution of Graph databases”. George Anadiotis, Alan Morrison, Steve Sarsfield, Juan Sequeda and Steven Xi bring many years of expertise in the domain, and will analyze Graph Databases from all possible angles.
This is the first step in the relaunch of the Year of the Graph Database Report. Year of the Graph Newsletter subscribers just got a 25% discount code. To be always in the know, subscribe to the newsletter, and follow the newly launched Year of the Graph account on Twitter! In addition to getting the famous YotG news stream every day, you will also get a 25% discount code.
#database #machine learning #artificial intelligence #data science #graph databases #graph algorithms #graph analytics #emerging technologies #knowledge graphs #semantic technologies
Graph databases have moved to the forefront of trendy technologies. There are a lot of mature companies with graph database technologies and a lot of new players seem to be arriving on the scene almost daily, and for good reason; graphs are a more natural way to represent many kinds of data. They excel at modeling and managing the connections between data elements and this opens up new possibilities for what we can accomplish with our data.
In this blog, we will introduce the concept of a weighted graph and weighted graph queries in the InfiniteGraph database, and show how InfiniteGraph offers several unique advantages for performing weighted graph queries.
#tutorial #big data #graph database #graph database analytics #infinite graph #weighted graphs