
Custom loss function using intermediate layer output and output layers #182

Open
dfdf opened this issue Dec 1, 2015 · 2 comments

dfdf commented Dec 1, 2015

I need to create a loss function that uses values from an intermediate layer, following DeepID2.

Basically I have this network:

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', Conv2DLayer),
        ('pool1', MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),
        ('conv2', Conv2DLayer),
        ('pool2', MaxPool2DLayer),
        ('dropout2', layers.DropoutLayer),
        ('conv3', Conv2DLayer),
        ('pool3', MaxPool2DLayer),
        ('dropout3', layers.DropoutLayer),
        ('conv4', Conv2DLayer),
        ('dropout4', layers.DropoutLayer),
        ('flatten1', layers.FlattenLayer),
        ('flatten2', layers.FlattenLayer),
        ('concat', layers.ConcatLayer),
        ('hidden4', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    # ... remaining NeuralNet arguments ...
)

And I need to change the loss function to something like:

1 - Input two image files for each mini-batch

2 - If the images are from the same person (same ID) the loss function will be:

Loss_function:

Error from Image 1 (categorical_crossentropy[Image 1, ID1])
plus
Error from Image 2 ( categorical_crossentropy[Image 2, ID1])
plus
Error between the values of hidden4_layer for the Image 1 and Image 2
(Ex: get_output from hidden4 for image 1 and image 2 and apply squared_error )

3 - If the images are from different people:

Loss_function:

Error from Image 1 (categorical_crossentropy[Image 1, ID1])
plus
Error from Image 2 ( categorical_crossentropy[Image 2, ID2])

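In plain NumPy, the per-pair arithmetic described above would look roughly like this (`pair_loss` and `categorical_crossentropy` here are illustrative stand-ins, not nolearn API; `feats1`/`feats2` stand for the `hidden4` outputs of the two images):

```python
import numpy as np

def categorical_crossentropy(probs, label):
    # Negative log-probability assigned to the true class.
    return -np.log(probs[label])

def pair_loss(probs1, id1, probs2, id2, feats1, feats2):
    """Combined loss for one image pair; feats* are hidden4 outputs."""
    loss = (categorical_crossentropy(probs1, id1)
            + categorical_crossentropy(probs2, id2))
    if id1 == id2:
        # Same person: add the verification term that pulls the
        # hidden4 representations of the two images together.
        loss += np.sum((feats1 - feats2) ** 2)
    return loss
```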
Is it possible to do this in nolearn?

BenjaminBossan (Collaborator) commented

It is definitely possible, though not out of the box. Have a look at the objective function in nolearn.lasagne.base. It is responsible for determining the loss. There you may specify a loss that depends on whatever layer(s) you wish. Then pass the new objective to nolearn's NeuralNet at initialization.
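A rough sketch of such a custom objective, modeled on `objective` in `nolearn.lasagne.base` (the exact signature varies between nolearn versions, and the names `custom_objective`/`split_pairs` are made up here). It assumes the batch is arranged so that rows 0, 2, 4, … are the first images of each pair and rows 1, 3, 5, … the second:

```python
def split_pairs(batch):
    """Split an interleaved batch into (first images, second images).

    Works on anything sliceable, e.g. numpy arrays or Theano tensors.
    """
    return batch[0::2], batch[1::2]

def custom_objective(layers, loss_function, target, deterministic=False,
                     **kwargs):
    # Imports are kept local so the sketch is readable without
    # Theano/Lasagne installed; a real implementation would import
    # them at module level.
    import theano.tensor as T
    from lasagne.layers import get_output

    network_output = get_output(layers['output'], deterministic=deterministic)
    hidden_output = get_output(layers['hidden4'], deterministic=deterministic)

    # Identification term: cross-entropy over all images in the batch.
    ident_loss = T.mean(loss_function(network_output, target))

    # Verification term: squared distance between hidden4 features,
    # counted only for pairs that share an identity.
    feats1, feats2 = split_pairs(hidden_output)
    ids1, ids2 = split_pairs(target)
    same = T.eq(ids1, ids2)  # 1 where the pair shares an ID, else 0
    verif_loss = T.mean(same * T.sum((feats1 - feats2) ** 2, axis=1))

    return ident_loss + verif_loss
```

You would then pass it as `NeuralNet(..., objective=custom_objective)`. Note the default objective also handles `l1`/`l2` regularization and aggregation, which this sketch omits.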

charbelc commented

dfdf, have you solved the problem?
If so, I would appreciate it if you shared what you learned. I have the same problem...
Thanks in advance
