CosineEmbeddingLoss
- class torch.nn.modules.loss.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given input tensors $x_1$, $x_2$ and a Tensor label $y$ with values 1 or -1. Use $y = 1$ to maximize the cosine similarity of two inputs, and $y = -1$ otherwise. This is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is

$$
\text{loss}(x, y) =
\begin{cases}
1 - \cos(x_1, x_2), & \text{if } y = 1 \\
\max(0,\ \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1
\end{cases}
$$
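As a sanity check of this formula, the per-sample loss can be reproduced with torch.nn.functional.cosine_similarity. The snippet below is a minimal sketch rather than part of the documented API; the tensor shapes, label values, and margin are arbitrary:

>>> import torch
>>> from torch import nn
>>> import torch.nn.functional as F
>>> x1 = torch.randn(4, 8)
>>> x2 = torch.randn(4, 8)
>>> y = torch.tensor([1., -1., 1., -1.])
>>> cos = F.cosine_similarity(x1, x2)  # cosine similarity along dim=1
>>> # apply the two branches of the formula, selected per sample by the label
>>> manual = torch.where(y == 1, 1 - cos, torch.clamp(cos - 0.0, min=0))
>>> builtin = nn.CosineEmbeddingLoss(margin=0.0, reduction='none')(x1, x2, y)
>>> torch.allclose(manual, builtin)
True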
- Parameters
margin (float, optional) – Should be a number from -1 to 1, 0 to 0.5 is suggested. If margin is missing, the default value is 0.

size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
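To make the reduction modes concrete, the sketch below (with arbitrary input tensors) shows that 'mean' simply averages the per-sample losses produced by 'none':

>>> import torch
>>> from torch import nn
>>> x1 = torch.randn(3, 5)
>>> x2 = torch.randn(3, 5)
>>> y = torch.tensor([1., -1., 1.])
>>> per_sample = nn.CosineEmbeddingLoss(reduction='none')(x1, x2, y)
>>> per_sample.shape  # one loss value per batch element
torch.Size([3])
>>> mean_loss = nn.CosineEmbeddingLoss(reduction='mean')(x1, x2, y)
>>> torch.allclose(mean_loss, per_sample.mean())
True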
- Shape:
Input1: (N, D) or (D), where N is the batch size and D is the embedding dimension.

Input2: (N, D) or (D), same shape as Input1.

Target: (N) or ().

Output: If reduction is 'none', then (N), otherwise scalar.
Examples:
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.ones(3)
>>> output = loss(input1, input2, target)
>>> output.backward()
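The Shape section above also allows unbatched inputs of shape (D) with a scalar target. A small sketch, assuming a PyTorch version that supports the unbatched form; the margin value 0.5 is illustrative only:

>>> a = torch.randn(5, requires_grad=True)
>>> b = torch.randn(5, requires_grad=True)
>>> t = torch.tensor(-1.)  # scalar target: treat a and b as dissimilar
>>> out = nn.CosineEmbeddingLoss(margin=0.5)(a, b, t)
>>> out.backward()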