
How to minimize the number of values that are not 0?

If my output is a tensor of values:

torch.tensor([0.0, 1.2, 0.1, 0.01, 2.3, 99.2, -21.2])

I'm trying to create a differentiable loss function that minimizes the number of values that are not 0. That is, the actual values don't matter; I just need fewer nonzero values.

How can I get the needed loss value?

So far I tried an L1 loss (taking the mean absolute value of this tensor), but this only shrinks the values and doesn't necessarily push more of them to exactly 0. For example, training with L1 can end up with torch.tensor([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]). For my purposes this should be worse than torch.tensor([120.1, 0.0, 0.0, 0.0, 0.0, 0.0]), but under L1 it is not.
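For concreteness, here is a minimal check of the mean-absolute-value loss on the two tensors above, showing that L1 ranks them in the opposite of the desired order:

```python
import torch

out_spread = torch.tensor([0.1, 0.1, 0.1, 0.1, 0.1, 0.1])    # 6 nonzeros
out_sparse = torch.tensor([120.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # 1 nonzero

# L1 (mean absolute value) scores the spread-out tensor as far better,
# even though it has more nonzero entries.
print(out_spread.abs().mean())  # tensor(0.1000)
print(out_sparse.abs().mean())  # tensor(20.0167)
```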

Update: I came up with a loss function that captures some of what I need, but it's not perfect. In particular, the gradient is 0 (and hence won't propagate) when the values are far from 0.

The loss is torch.tanh(torch.abs(torch.tensor([0.1, 0.1, 0.1, 0.1, 0.1, 0.1])) * 1e3).mean(). The idea is to make each value positive with abs, scale it up by the constant 1e3, and then let tanh map each nonzero entry to approximately 1. This ignores the magnitudes and just counts how many entries are nonzero. But as I said, this is really not optimal, because tanh saturates and the gradient vanishes when the values are far from 0. So I'll keep this question open, since this is only a temporary, not-great solution (but maybe it helps generate ideas for a different one).
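For reference, here is a runnable sketch of this surrogate loss together with a gradient check, illustrating the saturation problem described above (the 1e3 scale is the constant from the update):

```python
import torch

x = torch.tensor([0.1, 0.1, 0.1, 0.1, 0.1, 0.1], requires_grad=True)

# Saturating surrogate for counting nonzeros: abs -> scale -> tanh.
# Each nonzero entry contributes ~1, so the mean approximates the
# fraction of entries that are nonzero.
loss = torch.tanh(x.abs() * 1e3).mean()
loss.backward()

# tanh saturates once |x| * 1e3 is large, so the gradient is
# numerically zero for any value away from 0 and nothing propagates.
print(x.grad)  # tensor([0., 0., 0., 0., 0., 0.])
```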

Tags: python, machine-learning, optimization, pytorch, loss-function

