Formatting changes
Ris-Bali committed Dec 23, 2023
1 parent 3c0580d commit aa0e053
Showing 2 changed files with 84 additions and 80 deletions.
24 changes: 14 additions & 10 deletions docs/userDocs/source/user/UsingClad.rst
@@ -222,16 +222,20 @@ that needs to be differentiated even when we want to differentiate w.r.t entire

.. code-block:: cpp
#include "clad/Differentiator/Differentiator.h"
double fn(double x, double arr[2]) { return x * arr[0] * arr[1]; }
int main() {
auto fn_hessian = clad::hessian(fn, "x, arr[0:1]");
// We have 3 independent variables thus we require space of 9.
double mat_fn[9] = {0};
clad::array_ref<double> mat_fn_ref(mat_fn, 9);
double num[2] = {1, 2};
fn_hessian.execute(3, num, mat_fn_ref);
}
#include "clad/Differentiator/Differentiator.h"
double fn(double x, double arr[2]) { return x * arr[0] * arr[1]; }
int main() {
auto fn_hessian = clad::hessian(fn, "x, arr[0:1]");
// We have 3 independent variables thus we require space of 9.
double mat_fn[9] = {0};
clad::array_ref<double> mat_fn_ref(mat_fn, 9);
double num[2] = {1, 2};
fn_hessian.execute(3, num, mat_fn_ref);
}
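For reference, these are the values one would expect in `mat_fn` after the call above (hand-computed from the second derivatives of `fn`, not clad output):

.. code-block:: cpp

   // Row-major 3 x 3 Hessian of fn at (x, arr) = (3, {1, 2}):
   //   [ 0       arr[1]  arr[0] ]   [ 0  2  1 ]
   //   [ arr[1]  0       x      ] = [ 2  0  3 ]
   //   [ arr[0]  x       0      ]   [ 1  3  0 ]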
Jacobian Computation
----------------------
140 changes: 70 additions & 70 deletions docs/userDocs/source/user/tutorials.rst
@@ -12,20 +12,20 @@ API call.

.. code-block:: cpp

   #include <iostream>
   #include "clad/Differentiator/Differentiator.h"

   double func(int x) { return x * x; }

   int main() {
     // Call clad::differentiate to get the forward-mode derivative of
     // the given mathematical function.
     auto d_func = clad::differentiate(func, "x");
     // Execute the generated derivative function.
     std::cout << d_func.execute(/*x =*/3) << std::endl;
     // Dump the generated derivative code to standard output.
     d_func.dump();
   }
Here we differentiate a function `func` which takes an input `x` and
returns the scalar value `x * x`. The `.dump()` method prints the generated
derivative code to standard output.
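As a rough sketch (the exact code clad emits can differ between versions), the dumped derivative of `x * x` looks something like this:

.. code-block:: cpp

   // Forward-mode derivative generated for func; _d_x seeds dx/dx = 1.
   double func_darg0(int x) {
     int _d_x = 1;
     return _d_x * x + x * _d_x;
   }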
@@ -38,17 +38,17 @@ API call.

.. code-block:: cpp

   #include <iostream>
   #include "clad/Differentiator/Differentiator.h"

   double f(double x, double y, double z) { return x * y * z; }

   int main() {
     auto d_f = clad::gradient(f, "x, y");
     double dx = 0, dy = 0;
     d_f.execute(/*x=*/2, /*y=*/3, /*z=*/4, &dx, &dy);
     std::cout << "dx: " << dx << " dy: " << dy << std::endl;
   }
In the above example we differentiate w.r.t. both `x` and `y`. We can also
differentiate w.r.t. a single argument, i.e. either `x` or `y`, as
`clad::gradient(f, "x")`, as shown in the sketch below.
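A minimal sketch of the single-argument case, reusing `f` from above; the generated gradient takes one output pointer per requested parameter:

.. code-block:: cpp

   auto d_fx = clad::gradient(f, "x");
   double dx = 0;
   // Only x was requested, so only one output pointer is passed.
   d_fx.execute(/*x=*/2, /*y=*/3, /*z=*/4, &dx);
   // dx now holds df/dx = y * z = 12.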
@@ -63,29 +63,29 @@ It returns the hessian matrix as a flattened vector in row major format.

.. code-block:: cpp

   #include <iostream>
   #include "clad/Differentiator/Differentiator.h"

   double f(double x, double y, double z) { return x * y * z; }

   // Function with array input
   double f_arr(double x, double y, double z[2]) { return x * y * z[0] * z[1]; }

   int main() {
     // Workflow similar to clad::gradient for non-array input arguments.
     auto f_hess = clad::hessian(f, "x, y");
     double matrix_f[9] = {0};
     clad::array_ref<double> matrix_f_ref(matrix_f, 9);
     f_hess.execute(3, 4, 5, matrix_f_ref);
     std::cout << "[" << matrix_f_ref[0] << ", " << matrix_f_ref[1] << ", "
               << matrix_f_ref[2] << "\n"
               << matrix_f_ref[3] << ", " << matrix_f_ref[4] << ", "
               << matrix_f_ref[5] << "\n"
               << matrix_f_ref[6] << ", " << matrix_f_ref[7] << ", "
               << matrix_f_ref[8] << "]"
               << "\n";
   }
When arrays are involved we need to specify the array indices that need to be
differentiated. For example, if we want to differentiate w.r.t. the first two
elements of the array `z`, we pass `"x, y, z[0:1]"`, as in the sketch below.
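A sketch of the array case, reusing `f_arr` from the snippet above; with four independent variables (`x`, `y`, `z[0]`, `z[1]`) the flattened Hessian needs 4 * 4 = 16 entries:

.. code-block:: cpp

   auto f_arr_hess = clad::hessian(f_arr, "x, y, z[0:1]");
   double matrix[16] = {0};
   clad::array_ref<double> matrix_ref(matrix, 16);
   double z[2] = {1, 2};
   f_arr_hess.execute(/*x=*/3, /*y=*/4, z, matrix_ref);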
@@ -101,27 +101,27 @@ jacobian matrix as a flattened vector with elements arranged in row-major format

.. code-block:: cpp

   #include <iostream>
   #include "clad/Differentiator/Differentiator.h"

   void f(double x, double y, double z, double* output) {
     output[0] = x * y;
     output[1] = y * y * x;
     output[2] = 6 * x * y * z;
   }

   int main() {
     auto f_jac = clad::jacobian(f);

     double jac[9] = {0};
     double output[3] = {0};
     f_jac.execute(3, 4, 5, output, jac);
     // Print the flattened 3 x 3 Jacobian one row per line.
     std::cout << jac[0] << " " << jac[1] << " " << jac[2] << std::endl
               << jac[3] << " " << jac[4] << " " << jac[5] << std::endl
               << jac[6] << " " << jac[7] << " " << jac[8] << std::endl;
   }
The Jacobian matrix size should be equal to the number of independent variables
times the number of outputs in the original function; in the above example it
would be 3 * 3 = 9.
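For a quick sanity check (hand-computed partial derivatives, not clad output), the row-major entries at `(x, y, z) = (3, 4, 5)` are:

.. code-block:: cpp

   // d(output[0])/d{x, y, z} = { y,         x,         0         } = {   4,  3,  0 }
   // d(output[1])/d{x, y, z} = { y * y,     2 * x * y, 0         } = {  16, 24,  0 }
   // d(output[2])/d{x, y, z} = { 6 * y * z, 6 * x * z, 6 * x * y } = { 120, 90, 72 }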
@@ -134,22 +134,22 @@ code using reverse mode AD.

.. code-block:: cpp

   #include <iostream>
   #include "clad/Differentiator/Differentiator.h"

   double func(double x, double y) { return x * y; }

   int main() {
     auto dfunc_error = clad::estimate_error(func);
     // Used to print the generated code to standard output.
     dfunc_error.dump();
     // Example input values; the adjoints and the error accumulator start at 0.
     double x = 2, y = 3, d_x = 0, d_y = 0, final_error = 0;
     // Call execute.
     dfunc_error.execute(x, y, &d_x, &d_y, final_error);

     std::cout << final_error << std::endl;
   }
The function signature is similar to `clad::gradient`, except we need to add an
extra argument of type `double&` which is used to store the total floating point
error.