2 years ago
#57608
Sumny
Gradient of a neural network
I have a trained neural network that predicts the noise of a given image. Now I want to use it to compute a subgradient of the norm of the network's output with respect to the network's input.
I want to use this in a larger algorithm, but since I cannot get it to work as expected, I created this minimal example.
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model(os.path.join(nn_path, 'NN' + name + '.h5'))  # trained NN

# Typical input of the NN; convert the numpy array to a tensor
# so that the tape can watch it.
y0_tensor = tf.convert_to_tensor(
    np.load(os.path.join(data_path, 'fbp_val' + name + '.npy'))[0])

max_iter = 15
alpha = 0.45
s = 0.2
for iters in range(max_iter):
    with tf.GradientTape() as tape:
        tape.watch(y0_tensor)  # y0_tensor is not a Variable, so watch it explicitly
        prediction = model(y0_tensor)
        norm = tf.norm(prediction)
    grad = tape.gradient(norm, y0_tensor)  # d norm / d y0_tensor
    y0_tensor = y0_tensor - s * alpha * grad  # descent step on the input
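Since the trained model file and the data are not available here, a self-contained sketch of the same loop with a hypothetical stand-in model (all shapes, seeds, and layer choices below are assumptions, not from the question) would look like this:

```python
import numpy as np
import tensorflow as tf

np.random.seed(0)
tf.random.set_seed(0)

# Hypothetical stand-in for the trained denoiser loaded via load_model(...).
inp = tf.keras.Input(shape=(8, 8, 1))
out = tf.keras.layers.Conv2D(1, 3, padding="same")(inp)
model = tf.keras.Model(inp, out)

# Stand-in input image with batch and channel axes, converted to a tensor
# so that GradientTape can watch it.
y0 = tf.convert_to_tensor(np.random.rand(1, 8, 8, 1).astype("float32"))

s, alpha = 0.2, 0.45
norms = []
for _ in range(15):
    with tf.GradientTape() as tape:
        tape.watch(y0)                 # y0 is a plain tensor, not a tf.Variable
        norm = tf.norm(model(y0))
    grad = tape.gradient(norm, y0)     # d norm / d y0
    y0 = y0 - s * alpha * grad         # gradient step on the input
    norms.append(float(norm))
```

With a setup like this, `tf.norm(model(y0))` should shrink from one iteration to the next. If `y0` stays a raw numpy array instead of a tensor, `tape.watch` may fail or the tape may record nothing, in which case `tape.gradient` can return `None` and the update corrupts the image.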
I expected the input image to become more and more similar to the output of the NN as the number of iterations increases, but the loop just seems to add noise to it.
Note that I am not fixed on using GradientTape. I also tried keras.backend.gradients,
with no success.
For more background information, here is what I am trying to do:
Note that the subgradient of the regularization term can be evaluated by standard software for network training with the backpropagation algorithm. (Source: https://iopscience.iop.org/article/10.1088/1361-6420/ab6d57, Section 4.2)
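That backpropagation claim is straightforward to sanity-check: the tape gradient of the output norm can be compared against central finite differences on a tiny hypothetical network (the network, shapes, and step size below are illustrative assumptions, not taken from the question):

```python
import numpy as np
import tensorflow as tf

np.random.seed(0)
tf.random.set_seed(0)

# Tiny hypothetical network standing in for the trained one.
inp = tf.keras.Input(shape=(4,))
net = tf.keras.Model(inp, tf.keras.layers.Dense(3)(inp))

y = tf.convert_to_tensor(np.random.rand(1, 4).astype("float32"))

# Backprop gradient of ||net(y)|| with respect to y.
with tf.GradientTape() as tape:
    tape.watch(y)
    n = tf.norm(net(y))
g = tape.gradient(n, y).numpy().ravel()

# Central finite differences of the same quantity, one coordinate at a time.
eps = 1e-3
yv = y.numpy().ravel()
fd = np.zeros_like(g)
for i in range(yv.size):
    e = np.zeros_like(yv)
    e[i] = eps
    fp = float(tf.norm(net((yv + e).reshape(1, -1))))
    fm = float(tf.norm(net((yv - e).reshape(1, -1))))
    fd[i] = (fp - fm) / (2 * eps)
```

If `g` and `fd` agree to a few decimal places, backprop is computing the subgradient correctly and the problem lies elsewhere (e.g. in how the input tensor is watched or updated).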
tensorflow
neural-network
gradienttape
0 Answers