Loading snapshot with GPU backend #219

Closed
greenflash1357 opened this issue Oct 27, 2016 · 1 comment

@greenflash1357 (Contributor):

I have trained a network and want to load its parameters from a snapshot.
This works fine with the CPU backend, but with the GPU backend I get this error message:

TypeError: read_refs: in typeassert, expected Array{Array{T,N},1}, got Array{Array{T,N},1}
 in read_refs(::JLD.JldDataset, ::Type{Array{Array{T,N},1}}, ::Int32, ::Int32, ::Tuple{Int64}) at JLD\src\JLD.jl:491
 in read_array(::JLD.JldDataset, ::HDF5.HDF5Datatype, ::Int32, ::Int32, ::Tuple{Int64}) at JLD\src\JLD.jl:427
 in read(::JLD.JldDataset) at JLD\src\JLD.jl:392
 in read_ref(::JLD.JldFile, ::HDF5.HDF5ReferenceObj) at JLD\src\JLD.jl:518
 in macro expansion at JLD\src\jld_types.jl:451 [inlined]
 in jlconvert(::Type{JLD.AssociativeWrapper{AbstractString,Array{Array{T,N},1},Dict{AbstractString,Array{Array{T,N},1}}}}, ::JLD.JldFile, ::Ptr{UInt8}) at JLD\src\jld_types.jl:581
 in read_scalar(::JLD.JldDataset, ::HDF5.HDF5Datatype, ::Type{T}) at JLD\src\JLD.jl:418
 in read(::JLD.JldDataset)

The network was also trained with the GPU backend.
The network architecture is:

data_layer    = MemoryDataLayer(name="data", tops=[:data], batch_size=64, data=Array[zeros(Float32,31,31,3,64)])
conv1_layer   = ConvolutionLayer(name="conv1", n_filter=96, kernel=(7,7), bottoms=[:data], tops=[:conv1], neuron=Neurons.ReLU())
pool1_layer   = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2), bottoms=[:conv1], tops=[:pool1])
conv2_layer   = ConvolutionLayer(name="conv2", n_filter=256, kernel=(5,5), bottoms=[:pool1], tops=[:conv2], neuron=Neurons.ReLU())
pool2_layer   = PoolingLayer(name="pool2", kernel=(2,2), stride=(2,2), bottoms=[:conv2], tops=[:pool2])
fc1_layer     = InnerProductLayer(name="ip1", output_dim=1024, neuron=Neurons.ReLU(), bottoms=[:pool2], tops=[:ip1])
fc2_layer     = InnerProductLayer(name="ip2", output_dim=2, bottoms=[:ip1], tops=[:ip2])
softmax_layer = SoftmaxLayer(name="class", bottoms=[:ip2], tops=[:prob])
mem_out       = MemoryOutputLayer(name="output", bottoms=[:prob])
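
For reference, here is a minimal sketch of how the snapshot is presumably being loaded (the net name and snapshot path are placeholders; load_snapshot is Mocha's standard API for restoring saved parameters, and the same code works when GPUBackend is swapped for CPUBackend):

ENV["MOCHA_USE_CUDA"] = "true"  # must be set before `using Mocha` to enable the GPU backend
using Mocha

backend = GPUBackend()
init(backend)

# ... layer definitions as above ...

net = Net("test-net", backend, [data_layer, conv1_layer, pool1_layer,
                                conv2_layer, pool2_layer, fc1_layer,
                                fc2_layer, softmax_layer, mem_out])

load_snapshot(net, "snapshots/snapshot-010000.jld")  # fails with the TypeError above on the GPU backend

shutdown(backend)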
@greenflash1357 (Contributor, Author):

JuliaIO/JLD.jl#170 seems to fix this.
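
If that fix is not yet in a tagged JLD release, checking out the package's master branch (with the Julia 0.5-era package manager) should pick it up:

Pkg.checkout("JLD")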
