Sindhu/bfloat16 support #399

Merged: 29 commits into master from sindhu/bfloat16_support on Jan 30, 2020

Changes shown are from 8 of the 29 commits.

Commits (29)
694e63c
initial commit
sindhu-nervana Dec 11, 2019
2ab87c7
add bfloat16 test
sindhu-nervana Dec 16, 2019
b267ba1
Shrestha/var in compute (#388)
Dec 20, 2019
3ffb02e
disable the test
sindhu-nervana Dec 20, 2019
367d3db
Kanvi/Add asserts in some python tests (#398)
kanvi-nervana Dec 20, 2019
453a304
Merge branch 'master' into sindhu/bfloat16_support
sindhu-nervana Dec 20, 2019
22e7755
Merge branch 'master' into sindhu/bfloat16_support
Dec 24, 2019
c6220b7
Merge branch 'master' into sindhu/bfloat16_support
kanvi-nervana Dec 31, 2019
bea7c4d
Merge branch 'master' into sindhu/bfloat16_support
Jan 17, 2020
4cfb27f
added test
Jan 21, 2020
266b24a
changes
Jan 22, 2020
062a3c3
added another test
Jan 24, 2020
f00e298
added another bfloat test. encapsulate always assigned device CPU
Jan 24, 2020
5644eb6
Merge remote-tracking branch 'origin/master' into sindhu/bfloat16_sup…
Jan 24, 2020
0a4ffdd
removed couts, rearranged the tests
Jan 24, 2020
80c46f8
device checks
Jan 25, 2020
eb145c7
fix by registering dummy bfloat kernel
Jan 28, 2020
4d91711
Merge remote-tracking branch 'origin/master' into sindhu/bfloat16_sup…
Jan 28, 2020
5f08083
hanging include
Jan 28, 2020
e50323a
changes
Jan 28, 2020
e35892d
minor
Jan 28, 2020
a95c92f
Register Stub Kernels
Jan 29, 2020
5d313e3
fix bazel build
Jan 29, 2020
f636278
update comment
Jan 29, 2020
d2a161f
added comments to the test
Jan 29, 2020
1e4923c
corrected the macros
Jan 29, 2020
0bb58e0
fix template
Jan 29, 2020
957bf01
Merge remote-tracking branch 'origin/master' into sindhu/bfloat16_sup…
Jan 29, 2020
9fce56c
incorporate review comments
Jan 29, 2020
ngraph_bridge/ngraph_utils.cc: 16 changes (11 additions, 5 deletions)
@@ -223,6 +223,9 @@ Status TensorToStream(std::ostream& ostream, const Tensor& tensor) {
case DT_BOOL:
TensorDataToStream<bool>(ostream, n_elements, data);
break;
case DT_BFLOAT16:
TensorDataToStream<bool>(ostream, n_elements, data);
[Review thread on the line above]

sayantan-nervana (Contributor), Jan 29, 2020:
It says <bool> in the template. copy-paste error perhaps.

Contributor:
Good catch. Not sure what the corresponding data type for bfloat is.

Contributor:
We can throw an error or return a bad status for now I guess

Contributor:
Done

break;
default:
return errors::Internal("TensorToStream got unsupported data type ",
DataType_Name(tensor.dtype()));
@@ -272,6 +275,8 @@ Status TFDataTypeToNGraphElementType(DataType tf_dt,
break;
case DataType::DT_QINT32:
*ng_et = ng::element::i32;
case DataType::DT_BFLOAT16:
*ng_et = ng::element::bf16;
break;
default:
return errors::Unimplemented("Unsupported TensorFlow data type: ",
@@ -322,15 +327,16 @@ void print_node_histogram(const std::unordered_map<string, int>& histogram,

const gtl::ArraySlice<DataType>& NGraphDTypes() {
static gtl::ArraySlice<DataType> result{
DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, DT_INT64, DT_UINT8,
DT_UINT16, DT_UINT32, DT_UINT64, DT_BOOL, DT_QINT8, DT_QUINT8};
DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32,
DT_INT64, DT_UINT8, DT_UINT16, DT_UINT32, DT_UINT64,
DT_BOOL, DT_QINT8, DT_QUINT8, DT_BFLOAT16};
return result;
}

const gtl::ArraySlice<DataType>& NGraphNumericDTypes() {
static gtl::ArraySlice<DataType> result{
DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32,
DT_INT64, DT_UINT8, DT_UINT16, DT_UINT32, DT_UINT64};
DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, DT_INT64,
DT_UINT8, DT_UINT16, DT_UINT32, DT_UINT64, DT_BFLOAT16};
return result;
}

@@ -352,7 +358,7 @@ const gtl::ArraySlice<DataType>& NGraphSupportedQuantizedDTypes() {
}

const gtl::ArraySlice<DataType>& NGraphRealDTypes() {
static gtl::ArraySlice<DataType> result{DT_FLOAT, DT_DOUBLE};
static gtl::ArraySlice<DataType> result{DT_FLOAT, DT_DOUBLE, DT_BFLOAT16};
return result;
}

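Following the review thread above ("We can throw an error or return a bad status for now"), here is a minimal sketch of what returning a bad status for DT_BFLOAT16 could look like. This is an assumption about one possible fix, not the change that was finally merged; the helper function name is invented for illustration.

// Hedged sketch, not the merged code: handle DT_BFLOAT16 by returning a bad
// status instead of reusing the <bool> TensorDataToStream instantiation.
#include <ostream>
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/lib/core/errors.h"

namespace ngraph_bridge {

// Hypothetical helper showing only the bfloat16 branch of the switch.
tensorflow::Status StreamTensorData(std::ostream& ostream,
                                    const tensorflow::Tensor& tensor) {
  switch (tensor.dtype()) {
    case tensorflow::DT_BFLOAT16:
      // No bfloat16 TensorDataToStream overload exists yet, so fail loudly.
      return tensorflow::errors::Unimplemented(
          "TensorToStream does not support DT_BFLOAT16 yet");
    default:
      return tensorflow::errors::Internal(
          "TensorToStream got unsupported data type ",
          tensorflow::DataType_Name(tensor.dtype()));
  }
}

}  // namespace ngraph_bridge
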
test/python/test_bfloat16.py: 48 changes (48 additions, 0 deletions)
@@ -0,0 +1,48 @@
# ==============================================================================
# Copyright 2019 Intel Corporation
[Review thread on the line above]

Contributor:
Copyright 2019-2020 Intel Corporation

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""nGraph TensorFlow bridge bfloat16 matmul operation test

"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import pytest
import numpy as np

import tensorflow as tf
import os

from common import NgraphTest

# This is a sample test for the bf16 dtype.
# It currently fails; enable and expand once the CPU backend adds bfloat16 support.


class TestMatmulBfloat16(NgraphTest):

@pytest.mark.skip(reason="CPU backend does not support dtype bf16")
def test_matmul_bfloat16(self):
a = tf.placeholder(tf.bfloat16, [2, 3], name='a')
x = tf.placeholder(tf.bfloat16, [3, 4], name='x')
a_inp = np.random.rand(2, 3)
x_inp = np.random.rand(3, 4)
out = tf.matmul(a, x)

def run_test(sess):
return sess.run((out,), feed_dict={a: a_inp, x: x_inp})

assert self.with_ngraph(run_test) == self.without_ngraph(run_test)