update Caffe with the latest master version from BVLC #4
base: master
Conversation
…ted caffe target This is the first step towards a "modern", IMPORTED-targets-only CMake setup. The find_package modules still need to be rewritten and upstreamed in the form of config exports where possible.
Although Caffe itself does not use OpenMP, explicit linking to OpenMP should be done when one statically links to a BLAS library that uses OpenMP internally and does not provide proper CMake imported targets with proper dependencies (nobody does this so far).
Rationale: these are duplicated in the CMakeLists code, and they cannot be removed from there because many definitions need to be exported to the library's clients. See issue #4625.
Benchmarking should not impact perf until timer is read
A bias/scaling can be applied wherever desired by defining the respective layers, and `ScaleLayer` can handle both as a memory optimization.
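For illustration, here is a minimal pycaffe `NetSpec` sketch of such a scale-plus-bias stage; the input shape and the blob/layer names (`data`, `scaled`) are placeholders, not part of the original change:

```python
import caffe
from caffe import layers as L

# Sketch: a scale + bias stage defined with NetSpec. Setting
# bias_term=True makes ScaleLayer apply both the per-channel scaling
# and the bias in a single layer, avoiding a separate BiasLayer and
# its intermediate blob -- the memory optimization noted above.
n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 3, 224, 224]))
n.scaled = L.Scale(n.data, bias_term=True)
print(n.to_proto())  # emits the equivalent prototxt definition
```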
Document that Ubuntu 16.04 Requires CUDA 8
Batch norm statistics are not learnable parameters subject to solver updates, so they must be shielded from the solver. The `BatchNorm` layer now masks its statistics itself by zeroing the parameter learning rates instead of relying on the layer definition. N.B.: declaring `param`s for batch norm layers is no longer allowed.
Automatically strip old batch norm layer definitions, including `param` messages. The batch norm layer used to require manually masking its state from the solver by setting `param { lr_mult: 0 }` messages for each of its statistics; this is now handled automatically by the layer (see the sketch below).
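A minimal pycaffe sketch contrasting the two definitions (input shape and layer names are placeholders); the old form with explicit `param` messages is what the upgrade tool now strips:

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 32, 56, 56]))

# Old style (now rejected): manually zero the learning rate of each of
# the three statistic blobs (mean, variance, moving-average factor) so
# the solver never updates them.
# n.bn = L.BatchNorm(n.data, param=[dict(lr_mult=0)] * 3)

# New style: no param messages at all -- BatchNorm zeroes the learning
# rates of its statistics internally.
n.bn = L.BatchNorm(n.data)

# The learnable scale and shift still live in a separate ScaleLayer.
n.bn_scaled = L.Scale(n.bn, bias_term=True)
print(n.to_proto())
```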
[examples] Fixed typos in examples/cpp_classification/readme
Batch Norm: Further Documentation and Simplified Definition
fix LayerSetUp of scale_layer to not add a bias blob when one is already present
[TravisCI] google/protobuf renamed the 3.0 branch
Ignore Visual Studio Metadata
slightly relax batch norm check
NV changed path to cuDNN
Deprecate WindowData layer type
Test for python forward and backward with start and end layer
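For context, pycaffe's `Net.forward` and `Net.backward` accept `start` and `end` layer names to run only a slice of the net. A hedged sketch, where the prototxt path and the layer names `'conv1'`/`'fc8'` are placeholders for a real model:

```python
import caffe

# Sketch: run only part of the net between two named layers.
net = caffe.Net('deploy.prototxt', caffe.TEST)

# Forward from 'conv1' up to and including 'fc8'.
net.forward(start='conv1', end='fc8')

# Backward from 'fc8' back down to 'conv1' (note the reversed order:
# backpropagation starts at the later layer).
net.backward(start='fc8', end='conv1')
```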
[docs] groom Caffe site
Docker update to cuDNN 6
Explicit std::string to bp::object conversion
Handling destruction of empty Net objects
Rewrite crop layer GPU implementation
Downgrade boost requirement from 1.55 to 1.54
docs/debian guide: update compiler combination table
…brary cmake: rename libproto.a -> libcaffeproto.a
List branches in readme
….cpp to add two headers and BlockingQueue with DataReader and Datum
EDITED: sorry, I confused the repo. I'll check it and merge it if everything works properly. Thanks!
I have a main concern... Caffe removed the `DataReader`. However, from what I could understand after taking a look at your code, you have simply copied the old `DataReader` back in. Since then we have been using two different methods to parallelize the GPUs (the one the CPM files use vs. the one everything else in Caffe uses). Is this completely thread safe, or even single-thread safe? I am asking because I am not an expert in NCCL nor in Caffe's data_reader, but keeping both in use sounds like a bug-prone idea to me...
Hi all:
I have updated Caffe with the latest master (2017/7/26) version from BVLC.
Would you please merge it?
Thanks.
Feng