Use native dawn dev package for development #338

Merged · 31 commits · Jul 21, 2020
Changes from 1 commit
modules/dnn/src/webgpu/include/buffer.hpp (5 additions, 5 deletions)

@@ -13,11 +13,11 @@ class Buffer
 public:
     Buffer(const std::shared_ptr<wgpu::Device> device);
     Buffer(const std::shared_ptr<wgpu::Device> device,
-           const void* data, size_t size,
-           wgpu::BufferUsage usage = wgpu::BufferUsage::Storage |
-           wgpu::BufferUsage::CopyDst | wgpu::BufferUsage::CopySrc );
-    Buffer( const void* data, size_t size,
-            wgpu::BufferUsage usage = wgpu::BufferUsage::Uniform | wgpu::BufferUsage::CopyDst);
+           const void* data, size_t size,
+           wgpu::BufferUsage usage = wgpu::BufferUsage::Storage |
+           wgpu::BufferUsage::CopyDst | wgpu::BufferUsage::CopySrc);
+    Buffer(const void* data, size_t size,
+           wgpu::BufferUsage usage = wgpu::BufferUsage::Uniform | wgpu::BufferUsage::CopyDst);
     ~Buffer()
     {
         buffer_.Release();
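For context, a hedged sketch of how these constructors are meant to be used; the device pointer and buffer contents here are illustrative assumptions, and SoftmaxParam anticipates the op_softmax.cpp diff further down:

// Illustrative only: a storage buffer initialized from host data, relying on
// the default usage (Storage | CopyDst | CopySrc) declared above.
std::vector<float> host(64, 0.0f);
Buffer storage(device, host.data(), host.size() * sizeof(float));

// A uniform buffer for shader parameters, relying on the Uniform | CopyDst
// default; op_softmax.cpp constructs its uniformBuffer_ the same way.
SoftmaxParam param = {};
Buffer uniforms(&param, sizeof(SoftmaxParam));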
modules/dnn/src/webgpu/include/tensor.hpp (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ class Buffer;
 class Tensor{
 public:
     Tensor(Format fmt = wFormatFp32);
-    Tensor( const void* data, std::vector<int>& shape,
+    Tensor(const void* data, std::vector<int>& shape,
            Format fmt = wFormatFp32);
wzw-intel (Collaborator):
Every Tensor should support both read and write, so the "usage" parameter should be eliminated. Use "wgpu::BufferUsage::Storage | wgpu::BufferUsage::CopyDst | wgpu::BufferUsage::CopySrc" for the internal buffer's usage.
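A minimal sketch of the suggested change, assuming hypothetical Tensor internals (the buffer_ member and a byteSize helper); only the usage combination itself comes from this comment:

// Sketch: the constructor exposes no "usage" parameter; every tensor's
// backing buffer is created read/write-capable.
Tensor::Tensor(const void* data, std::vector<int>& shape, Format fmt)
{
    wgpu::BufferUsage usage = wgpu::BufferUsage::Storage |   // shader reads/writes
                              wgpu::BufferUsage::CopyDst |   // host uploads
                              wgpu::BufferUsage::CopySrc;    // host readbacks
    buffer_ = std::make_shared<Buffer>(data, byteSize(shape, fmt), usage);
}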

wzw-intel (Collaborator):
Please correct the wrong indentation of parameters, not only for this function.
NALLEIN (Collaborator, Author) · Jul 16, 2020:

> Please correct the wrong indentation of parameters, not only for this function.

Thanks for your patience, I've modified it. By the way, I've added the layer test for softmax in this repo; please help check whether it works on your device.

git clone https://github.com/NALLEIN/opencv.git
cd opencv/
git checkout -b layerTest origin/layerTest
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -DWITH_WEBGPU=ON -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j8
$(PATH_TO_OPENCV)/build/bin/opencv_test_dnn --gtest_filter=Layer_Test_Softmax.Accuracy

The test case is here. If there is nothing wrong with the test, I will start to complete the other ops for evaluation 2.
@huningxin @wzw-intel PTAL.

wzw-intel (Collaborator):
Test passed on my machine with an NVIDIA GPU.

     const void* mapRead();
     void unMap();
@@ -23,7 +23,7 @@ class Tensor{
     // Change shape and format to as passed in.
     // Copy data if data != NULL
     // Allocate new internal buffer if new size > old size or alloc flag is true
-    Tensor reshape( const void* data, const std::vector<int>& shape,
+    Tensor reshape(const void* data, const std::vector<int>& shape,
                    bool alloc = false,
                    Format fmt = wFormatInvalid);
     Tensor fillData(const void * data);
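A hedged sketch of the contract those three comments describe; count, elementSize, and the member names (format_, shape_, size_, device_) are assumptions for illustration:

// Sketch of reshape(): adopt the new shape/format, grow the backing buffer
// when required (or when forced via alloc), and copy host data if provided.
Tensor Tensor::reshape(const void* data, const std::vector<int>& shape,
                       bool alloc, Format fmt)
{
    if (fmt != wFormatInvalid) format_ = fmt;     // keep format unless overridden
    shape_ = shape;
    size_t newSize = count(shape) * elementSize(format_);
    if (alloc || newSize > size_)                 // "new size > old size or alloc flag"
        buffer_ = std::make_shared<Buffer>(device_, nullptr, newSize);
    size_ = newSize;
    if (data) fillData(data);                     // "Copy data if data != NULL"
    return *this;
}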
modules/dnn/src/webgpu/src/buffer.cpp (2 additions, 2 deletions)

@@ -11,7 +11,7 @@ Buffer::Buffer(std::shared_ptr<wgpu::Device> device)
     usage_ = wgpu::BufferUsage::Storage;
 }

-Buffer::Buffer( std::shared_ptr<wgpu::Device> device,
+Buffer::Buffer(std::shared_ptr<wgpu::Device> device,
                const void* data, size_t size,
                wgpu::BufferUsage usage)
 {
@@ -25,7 +25,7 @@ Buffer::Buffer(std::shared_ptr<wgpu::Device> device,
     if(data) buffer_.SetSubData(0, size_, data);
 }

-Buffer::Buffer( const void* data, size_t size,
+Buffer::Buffer(const void* data, size_t size,
                wgpu::BufferUsage usage)
 {
     createContext();
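The collapsed region hides the descriptor setup; a plausible, hedged reconstruction of the elided constructor body (the wgpu::BufferDescriptor handling is an assumption, only the SetSubData line is visible above):

// Assumed shape of the elided body: fill a descriptor, create the buffer,
// then upload the initial contents if host data was supplied.
Buffer::Buffer(std::shared_ptr<wgpu::Device> device,
               const void* data, size_t size,
               wgpu::BufferUsage usage)
{
    device_ = device;
    size_   = size;
    usage_  = usage;
    wgpu::BufferDescriptor desc = {};
    desc.size  = size_;
    desc.usage = usage_;
    buffer_ = device_->CreateBuffer(&desc);
    if (data) buffer_.SetSubData(0, size_, data);
}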
modules/dnn/src/webgpu/src/op_softmax.cpp (5 additions, 5 deletions)

@@ -89,11 +89,11 @@ bool OpSoftmax::forward(Tensor& in, Tensor& out)
         uniformBuffer_ = new Buffer(&param, sizeof(SoftmaxParam));
     }

-    bindTensor( in, 0, bgEntries);
-    bindTensor( *max_tensor_, 1, bgEntries);
-    bindTensor( *sum_tensor_, 2, bgEntries);
-    bindTensor( out, 3, bgEntries);
-    bindUniform( *uniformBuffer_, 4, bgEntries);
+    bindTensor(in, 0, bgEntries);
+    bindTensor(*max_tensor_, 1, bgEntries);
+    bindTensor(*sum_tensor_, 2, bgEntries);
+    bindTensor(out, 3, bgEntries);
+    bindUniform(*uniformBuffer_, 4, bgEntries);

     createBindGroup();
     createCommandBuffer();
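forward() binds five resources to slots 0-4: the input, the intermediate max and sum tensors used by the softmax reduction, the output, and the uniform parameters. A hedged sketch of what the bindTensor helper presumably does (the accessor names are assumptions):

// Sketch: append a wgpu::BindGroupEntry so the compute shader can address
// the tensor's buffer at the given binding slot.
void bindTensor(Tensor& tensor, uint32_t binding,
                std::vector<wgpu::BindGroupEntry>& entries)
{
    wgpu::BindGroupEntry entry = {};
    entry.binding = binding;                 // slot index used by the shader
    entry.buffer  = tensor.getBuffer()->getWebGPUBuffer();
    entry.offset  = 0;
    entry.size    = tensor.size();           // bytes visible at this binding
    entries.push_back(entry);
}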