Description
As discussed here, the logical functions don't support vectorization yet.
New functions
Add functions for the logical operators (==, <, <=, >, >=) and their corresponding functions (logical_eq, etc.) to allow containers as arguments and support broadcasting. The signatures will, for example, mix scalar and scalars arguments (see the sketch below), where scalar is int | real | complex and scalars is scalar[] | vector | row_vector | complex_vector | complex_row_vector.
If there are two scalar inputs, the existing function is called. If one of the arguments is a container, the other argument must be a container of the same shape or a scalar. With a scalar and a container, the scalar is broadcast.
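As a point of reference, here is a minimal sketch of what the vectorized signatures and broadcasting semantics might look like, spelled out with the explicit loops Stan currently requires; the array[] int return type and all variable names are illustrative assumptions, not part of the issue.
// Possible vectorized signatures (illustrative only):
//   array[] int logical_gt(scalars x, scalars y);
//   array[] int logical_gt(scalar x, scalars y);
//   array[] int logical_gt(scalars x, scalar y);
transformed data {
  int N = 3;
  vector[N] x = [1, 2, 3]';
  vector[N] y = [3, 2, 1]';
  real c = 2;
  array[N] int x_gt_y;  // proposed: x > y (containers of the same shape)
  array[N] int x_gt_c;  // proposed: x > c (the scalar c is broadcast)
  for (n in 1:N) {
    x_gt_y[n] = x[n] > y[n];
    x_gt_c[n] = x[n] > c;
  }
}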
Vectorized logic
We also want to add two functions
int any(scalars);
int all(scalars);
where the first returns 0 if all of the arguments are 0 and 1 otherwise, and the second returns 0 if any of the arguments is 0 and 1 otherwise. We could also add a not function that performs elementwise negation.
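For reference, a hedged sketch of loop-based equivalents written as user-defined Stan functions over int arrays; the names any_int, all_int, and not_int are placeholders, not the proposed built-in names.
functions {
  int any_int(array[] int x) {
    // Returns 1 if any element is nonzero, 0 otherwise.
    for (n in 1:size(x)) {
      if (x[n] != 0) return 1;
    }
    return 0;
  }
  int all_int(array[] int x) {
    // Returns 0 if any element is zero, 1 otherwise.
    for (n in 1:size(x)) {
      if (x[n] == 0) return 0;
    }
    return 1;
  }
  array[] int not_int(array[] int x) {
    // Elementwise logical negation.
    array[size(x)] int y;
    for (n in 1:size(x)) {
      y[n] = x[n] == 0;
    }
    return y;
  }
}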
Example & Expected Output
data {
  int N;
  real x1[N];
  real x2[N];
}
transformed data {
  real y = sum(x1 > x2);
}
Current error:
No matches for: logical_gt(real[ ], real[ ])
Current Version:
v4.6.2
Thanks for opening the issue, @jessexknight. I edited to add signatures and a definition, to remove the unnecessary R, and to remove the erroneous comment about efficiency.
As far as efficiency goes, I'm afraid implementing a built-in in C++ won't be any faster than writing the loop in Stan, because Stan gets compiled down to C++. The only advantage to having built-ins is when we have vectorized autodiff, which we can accelerate.
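For concreteness, this is roughly the loop-in-Stan workaround being compared against, written for the sum(x1 > x2) example above (a sketch only):
data {
  int N;
  real x1[N];
  real x2[N];
}
transformed data {
  // Loop-based equivalent of the proposed real y = sum(x1 > x2);
  real y = 0;
  for (n in 1:N) {
    y += x1[n] > x2[n];
  }
}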
Thanks - that is surprising about performance. I'm really out of my depth here with C++ etc., but would there be a performance difference between using something like apply_scalar_binary with the existing scalar functions and using a native function for this kind of thing from the Eigen library?
Yes, if we can get things compiled down to Eigen's vectorized operations then we can exploit their use of CPU vectorization (e.g., SSE and AVX operations at the CPU level). This can give a several times speedup. The term "vectorized" usually refers to using SSE, AVX, etc. on the CPU---our use of the term in Stan is non-standard. I doubt they've vectorized logical operations, but they've done a lot of common math functions like log and exp and sin and cos.
Vectorized CPU operations can be a lot faster.
apply_scalar_binary doesn't compile down that low, but we might actually be able to rewrite it to better exploit these operations. I think the current implementation assumes a double or an autodiff type.