FlexFlow currently assumes that the input/output tensors of some operators are 32-bit floating point. This issue tracks the progress of supporting general data types for FlexFlow's operators. The list of todos:
Add input/output/weight data types as properties in OpMeta, so that forward/backward tasks know the data types of the input/weight/output tensors
Use GenericTensorAccessor in forward/backward tasks
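The two todos above can be sketched together. The snippet below is a minimal, self-contained illustration of the idea, not FlexFlow's actual headers: the `DataType` enum, the `OpMeta` fields, and the simplified `GenericTensorAccessor` here are assumptions that only mirror the names used in the issue.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical simplified mirror of a FlexFlow-style data type tag.
enum class DataType { FLOAT, DOUBLE, HALF, INT32 };

// Proposed OpMeta additions: per-tensor data types, so forward/backward
// tasks can dispatch on element type instead of assuming 32-bit floats.
struct OpMeta {
  DataType input_type;
  DataType weight_type;
  DataType output_type;
};

// Minimal type-erased accessor in the spirit of GenericTensorAccessor:
// it carries a data type tag plus an untyped pointer; typed access is
// gated by a runtime check on the tag.
struct GenericTensorAccessor {
  DataType type;
  void *ptr;
  template <typename T> T *get() const;
};

template <> float *GenericTensorAccessor::get<float>() const {
  assert(type == DataType::FLOAT);
  return static_cast<float *>(ptr);
}

template <> double *GenericTensorAccessor::get<double>() const {
  assert(type == DataType::DOUBLE);
  return static_cast<double *>(ptr);
}

// A forward task can then branch on the recorded type rather than
// hard-coding float (hypothetical task body for illustration).
float sum_forward(const GenericTensorAccessor &acc, size_t n) {
  float total = 0.0f;
  if (acc.type == DataType::FLOAT) {
    const float *data = acc.get<float>();
    for (size_t i = 0; i < n; i++) total += data[i];
  } else if (acc.type == DataType::DOUBLE) {
    const double *data = acc.get<double>();
    for (size_t i = 0; i < n; i++) total += static_cast<float>(data[i]);
  }
  return total;
}
```

The design choice being sketched: once `OpMeta` records the tensor types, a task can construct the accessor from those fields and a single task implementation can serve multiple precisions.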
@jiazhihao The instructions above are out of date, especially after #622. Can you convert this into a list of the operators to which we still want to add multi-precision support?