Use tf.nn.sparse_softmax_cross_entropy_with_logits, but beware that it can't accept the output of tf.nn.softmax: it applies the softmax internally, so it expects unscaled activations. Calculate the logits first, and then the cost:
logits = tf.matmul(state_below, U) + b  # unscaled activations, shape [batch_size, num_classes]
cost = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
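Note that cost here is a tensor of per-example losses (one value per label), not a scalar; for training you would typically average it, for example:

loss = tf.reduce_mean(cost)  # scalar loss averaged over the batch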
In this case, state_below and U should be 2D matrices, b should be a vector with size equal to the number of classes, and labels should be a 1D vector of int32 or int64 class IDs, one per example. This function also supports activation tensors with more than two dimensions, in which case labels has one dimension fewer than logits (the class dimension is dropped).
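For concreteness, here is a minimal runnable sketch of both the 2D and the higher-rank case; it assumes TensorFlow 2.x, and all of the sizes (batch_size, input_dim, num_classes, the sequence length) are made up for illustration:

import tensorflow as tf

batch_size, input_dim, num_classes = 4, 8, 3
state_below = tf.random.normal([batch_size, input_dim])   # 2D activations
U = tf.random.normal([input_dim, num_classes])            # 2D weight matrix
b = tf.zeros([num_classes])                               # one bias per class
labels = tf.constant([0, 2, 1, 2], dtype=tf.int64)        # 1D class IDs

logits = tf.matmul(state_below, U) + b                    # [batch_size, num_classes]
cost = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(cost.shape)  # (4,) -- one loss per example

# Higher-rank logits also work, e.g. [batch, time, classes] logits
# with [batch, time] labels:
seq_logits = tf.random.normal([batch_size, 5, num_classes])
seq_labels = tf.zeros([batch_size, 5], dtype=tf.int64)
seq_cost = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=seq_labels, logits=seq_logits)
print(seq_cost.shape)  # (4, 5) -- one loss per timestep per example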