Commit

do rename
KristofferC committed Feb 9, 2017
1 parent 7d0b7ba commit 9fa996e
Showing 24 changed files with 151 additions and 101 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -16,4 +16,4 @@ git:
depth: 99999

after_success:
- julia -e 'include(Pkg.dir("ContMechTensors", "test", "coverage.jl"))'
- julia -e 'include(Pkg.dir("Tensors", "test", "coverage.jl"))'
2 changes: 1 addition & 1 deletion LICENSE.md
@@ -1,4 +1,4 @@
The ContMechTensors.jl package is licensed under the MIT "Expat" License:
The Tensors.jl package is licensed under the MIT "Expat" License:

> Copyright (c) 2016: Kristoffer Carlsson.
>
20 changes: 10 additions & 10 deletions README.md
@@ -1,4 +1,4 @@
# ContMechTensors
# Tensors

*Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation.*

@@ -19,7 +19,7 @@ Supports Automatic Differentiation to easily compute first and second order deri
The package is registered in `METADATA.jl` and so can be installed with `Pkg.add`.

```julia
julia> Pkg.add("ContMechTensors")
julia> Pkg.add("Tensors")
```

## Documentation
@@ -36,18 +36,18 @@ The package is tested against Julia `0.5`, and `0.6-dev` on Linux, OS X, and Win
Contributions are very welcome, as are feature requests and suggestions. Please open an [issue][issues-url] if you encounter any problems.

[docs-latest-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-latest-url]: https://kristofferc.github.io/ContMechTensors.jl/latest/
[docs-latest-url]: https://kristofferc.github.io/Tensors.jl/latest/

[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://kristofferc.github.io/ContMechTensors.jl/stable
[docs-stable-url]: https://kristofferc.github.io/Tensors.jl/stable

[travis-img]: https://travis-ci.org/KristofferC/ContMechTensors.jl.svg?branch=master
[travis-url]: https://travis-ci.org/KristofferC/ContMechTensors.jl
[travis-img]: https://travis-ci.org/KristofferC/Tensors.jl.svg?branch=master
[travis-url]: https://travis-ci.org/KristofferC/Tensors.jl

[appveyor-img]: https://ci.appveyor.com/api/projects/status/xe0ghtyas12wv555/branch/master?svg=true
[appveyor-url]: https://ci.appveyor.com/project/KristofferC/contmechtensors-jl/branch/master
[appveyor-url]: https://ci.appveyor.com/project/KristofferC/Tensors-jl/branch/master

[issues-url]: https://github.com/KristofferC/ContMechTensors.jl/issues
[issues-url]: https://github.com/KristofferC/Tensors.jl/issues

[codecov-img]: https://codecov.io/gh/KristofferC/ContMechTensors.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/KristofferC/ContMechTensors.jl
[codecov-img]: https://codecov.io/gh/KristofferC/Tensors.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/KristofferC/Tensors.jl
4 changes: 2 additions & 2 deletions appveyor.yml
@@ -28,7 +28,7 @@ build_script:
# Need to convert from shallow to complete for Pkg.clone to work
- IF EXIST .git\shallow (git fetch --unshallow)
- C:\projects\julia\bin\julia -e "versioninfo();
Pkg.clone(pwd(), \"ContMechTensors\"); Pkg.build(\"ContMechTensors\")"
Pkg.clone(pwd(), \"Tensors\"); Pkg.build(\"Tensors\")"

test_script:
- C:\projects\julia\bin\julia --check-bounds=yes -e "Pkg.test(\"ContMechTensors\")"
- C:\projects\julia\bin\julia --check-bounds=yes -e "Pkg.test(\"Tensors\")"
4 changes: 2 additions & 2 deletions benchmark/benchmark_ad.jl
@@ -1,5 +1,5 @@
const ∇ = ContMechTensors.gradient
const Δ = ContMechTensors.hessian
const ∇ = Tensors.gradient
const Δ = Tensors.hessian

function Ψ(C, μ, Kb)
detC = det(C)
50 changes: 50 additions & 0 deletions benchmark/results_ad.md
@@ -0,0 +1,50 @@
> **fredrikekre** (Feb 10, 2017, Member): This file should not have been included, right?

> **KristofferC** (Feb 10, 2017, Author): nope :P

> **KristofferC** (Feb 10, 2017, Author): we can remove it later

> **fredrikekre** (Feb 10, 2017, Member): yes

> **fredrikekre** (Feb 10, 2017, Member): Speaking of benchmarks, it would be nice with some benchmarking numbers in the documentation.

> **KristofferC** (Feb 10, 2017, Author): yep, perhaps just do a full benchmark and show the timings

| ID | time | GC time | memory | allocations |
|----|------|---------|--------|-------------|
| `["gradient","dim 1 S - Float32"]` | 180.687 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 S - Float64"]` | 196.869 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 S - sym - Float32"]` | 194.498 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 S - sym - Float64"]` | 196.092 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - Float32 - ana"]` | 111.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - Float32"]` | 161.945 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - Float64 - ana"]` | 118.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - Float64"]` | 176.064 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - sym - Float32 - ana"]` | 118.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - sym - Float32"]` | 175.461 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - sym - Float64 - ana"]` | 118.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 1 Ψ - sym - Float64"]` | 175.332 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 S - Float32"]` | 316.050 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 S - Float64"]` | 416.660 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 S - sym - Float32"]` | 315.605 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 S - sym - Float64"]` | 311.008 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - Float32 - ana"]` | 128.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - Float32"]` | 245.983 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - Float64 - ana"]` | 127.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - Float64"]` | 237.191 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - sym - Float32 - ana"]` | 128.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - sym - Float32"]` | 245.562 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - sym - Float64 - ana"]` | 127.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 2 Ψ - sym - Float64"]` | 237.191 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 S - Float32"]` | 839.907 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 S - Float64"]` | 804.629 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 S - sym - Float32"]` | 842.720 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 S - sym - Float64"]` | 804.034 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - Float32 - ana"]` | 141.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - Float32"]` | 397.249 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - Float64 - ana"]` | 139.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - Float64"]` | 373.892 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - sym - Float32 - ana"]` | 191.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - sym - Float32"]` | 395.731 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - sym - Float64 - ana"]` | 140.000 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["gradient","dim 3 Ψ - sym - Float64"]` | 374.348 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 1 Ψ - Float32"]` | 328.355 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 1 Ψ - Float64"]` | 347.819 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 1 Ψ - sym - Float32"]` | 563.225 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 1 Ψ - sym - Float64"]` | 347.877 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 2 Ψ - Float32"]` | 695.905 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 2 Ψ - Float64"]` | 902.880 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 2 Ψ - sym - Float32"]` | 954.442 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 2 Ψ - sym - Float64"]` | 647.164 ns (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 3 Ψ - Float32"]` | 6.678 μs (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 3 Ψ - Float64"]` | 7.201 μs (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 3 Ψ - sym - Float32"]` | 6.676 μs (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
| `["hessian","dim 3 Ψ - sym - Float64"]` | 6.529 μs (5%) | 0.000 ns | 0.00 bytes (1%) | 0 |
2 changes: 1 addition & 1 deletion benchmark/runbenchmarks.jl
@@ -1,4 +1,4 @@
using ContMechTensors
using Tensors
using BenchmarkTools
using JLD

8 changes: 4 additions & 4 deletions docs/make.jl
@@ -1,9 +1,9 @@
using Documenter, ContMechTensors
using Documenter, Tensors

makedocs(
modules = [ContMechTensors],
modules = [Tensors],
format = :html,
sitename = "ContMechTensors.jl",
sitename = "Tensors.jl",
doctest = true,
strict = true,
pages = Any[
@@ -21,7 +21,7 @@ makedocs(
)

deploydocs(
repo = "github.com/KristofferC/ContMechTensors.jl.git",
repo = "github.com/KristofferC/Tensors.jl.git",
target = "build",
julia = "0.5",
deps = nothing,
12 changes: 6 additions & 6 deletions docs/src/demos.md
@@ -1,6 +1,6 @@
# Demos

This section contains a few demos of applying `ContMechTensors` to continuum mechanics.
This section contains a few demos of applying `Tensors` to continuum mechanics.

## Creating the linear elasticity tensor

@@ -13,7 +13,7 @@ where $\delta_{ij} = 1$ if $i = j$ otherwise $0$. It can also be computed in ter
The code below creates the elasticity tensor for given parameters $E$ and $\nu$ and dimension $\texttt{dim}$. Note the similarity between the mathematical formula and the code.

```julia
using ContMechTensors
using Tensors
E = 200e9
ν = 0.3
dim = 2
@@ -35,7 +35,7 @@ $\Psi(\mathbf{C}) = 1/2 \mu (\mathrm{tr}(\hat{\mathbf{C}}) - 3) + K_b(J-1)^2,$

where $\hat{\mathbf{C}} = \mathrm{det}(\mathbf{C})^{-1/3} \mathbf{C}$ and $J = \det(\mathbf{F}) = \sqrt{\det(\mathbf{C})}$ and the shear and bulk modulus are given by $\mu$ and $K_b$ respectively.

This free energy function can be implemented in `ContMechTensors` as:
This free energy function can be implemented in `Tensors` as:

```julia
function Ψ(C, μ, Kb)
@@ -69,7 +69,7 @@ For some material models it can be cumbersome to compute the analytical expressi
```@meta
DocTestSetup = quote
srand(1234)
using ContMechTensors
using Tensors
E = 200e9
ν = 0.3
dim = 2
@@ -106,13 +106,13 @@ julia> F = one(Tensor{2,3}) + rand(Tensor{2,3});
julia> C = tdot(F);
julia> S_AD = 2 * gradient(C -> Ψ(C, μ, Kb), C)
3×3 ContMechTensors.SymmetricTensor{2,3,Float64,6}:
3×3 Tensors.SymmetricTensor{2,3,Float64,6}:
4.30534e11 -2.30282e11 -8.52861e10
-2.30282e11 4.38793e11 -2.64481e11
-8.52861e10 -2.64481e11 7.85515e11
julia> S(C, μ, Kb)
3×3 ContMechTensors.SymmetricTensor{2,3,Float64,6}:
3×3 Tensors.SymmetricTensor{2,3,Float64,6}:
4.30534e11 -2.30282e11 -8.52861e10
-2.30282e11 4.38793e11 -2.64481e11
-8.52861e10 -2.64481e11 7.85515e11
6 changes: 3 additions & 3 deletions docs/src/index.md
@@ -1,4 +1,4 @@
# ContMechTensors
# Tensors

*Efficient computations with symmetric and non-symmetric tensors in Julia.*

@@ -12,10 +12,10 @@ Supports Automatic Differentiation to easily compute first and second order deri

## Installation

`ContMechTensors` is a registered package and so can be installed via
`Tensors` is a registered package and so can be installed via

```julia
Pkg.add("ContMechTensors")
Pkg.add("Tensors")
```

## Manual Outline
16 changes: 8 additions & 8 deletions docs/src/man/automatic_differentiation.md
@@ -1,7 +1,7 @@
```@meta
DocTestSetup = quote
srand(1234)
using ContMechTensors
using Tensors
end
```

@@ -11,13 +11,13 @@ end
Pages = ["automatic_differentiation.md"]
```

`ContMechTensors` supports forward mode automatic differentiation (AD) of tensorial functions to compute first order derivatives (gradients) and second order derivatives (Hessians).
`Tensors` supports forward mode automatic differentiation (AD) of tensorial functions to compute first order derivatives (gradients) and second order derivatives (Hessians).
It does this by exploiting the `Dual` number defined in `ForwardDiff.jl`.
While `ForwardDiff.jl` can itself be used to differentiate tensor functions, it is a bit awkward because `ForwardDiff.jl` is written to work with standard Julia `Array`s: one has to pass the input argument as an `Array` to `ForwardDiff.jl`, convert it to a `Tensor`, and then convert the output `Array` back to a `Tensor`. This can also be inefficient, since these `Array`s are allocated on the heap and therefore need to be preallocated, which can be annoying.

Instead, it is simpler to use `ContMechTensors`' own AD API to do the differentiation. This does not require any conversions, and everything is stack allocated, so there is no need to preallocate.

Instead, it is simpler to use `Tensors`' own AD API to do the differentiation. This does not require any conversions, and everything is stack allocated, so there is no need to preallocate.

The API for AD in `ContMechTensors` is `gradient(f, A)` and `hessian(f, A)` where `f` is a function and `A` is a first or second order tensor. For `gradient` the function can return a scalar, vector (in case the input is a vector) or a second order tensor. For `hessian` the function should return a scalar.
The API for AD in `Tensors` is `gradient(f, A)` and `hessian(f, A)` where `f` is a function and `A` is a first or second order tensor. For `gradient` the function can return a scalar, vector (in case the input is a vector) or a second order tensor. For `hessian` the function should return a scalar.

When evaluating the function with dual numbers, the value (and, in the case of `hessian`, also the gradient) is obtained automatically along with the derivative. To obtain these lower order results, `gradient` and `hessian` accept a third argument, a `Symbol`. Note that the symbol is only used to dispatch to the correct method, and thus it can be any symbol. In the examples the symbol `:all` is used to obtain all the lower order derivatives and values.
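A minimal sketch of the three-argument form described above, assuming the renamed package is installed via `Pkg.add("Tensors")` (the function `f` here is an arbitrary illustrative choice, and the exact return-value ordering is as documented for this API):

```julia
using Tensors

A = rand(SymmetricTensor{2, 3})
f = A -> norm(A)^2

# Plain calls return only the highest order derivative:
∂f∂A = gradient(f, A)

# With a third Symbol argument (any symbol works; :all is conventional),
# the lower order results from the same evaluation are returned as well:
∂f∂A, fval = gradient(f, A, :all)
∂²f∂A², ∂f∂A, fval = hessian(f, A, :all)
```

Since the value and gradient are computed as a by-product of the dual-number evaluation anyway, the `:all` variant costs essentially nothing extra compared to the plain call.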

@@ -38,12 +38,12 @@ $f(\mathbf{x}) = |\mathbf{x}| \quad \Rightarrow \quad \partial f / \partial \mat
julia> x = rand(Vec{2});
julia> gradient(norm, x)
2-element ContMechTensors.Tensor{1,2,Float64,2}:
2-element Tensors.Tensor{1,2,Float64,2}:
0.61036
0.792124
julia> x / norm(x)
2-element ContMechTensors.Tensor{1,2,Float64,2}:
2-element Tensors.Tensor{1,2,Float64,2}:
0.61036
0.792124
```
@@ -56,12 +56,12 @@ $f(\mathbf{A}) = \det \mathbf{A} \quad \Rightarrow \quad \partial f / \partial \
julia> A = rand(SymmetricTensor{2,2});
julia> gradient(det, A)
2×2 ContMechTensors.SymmetricTensor{2,2,Float64,3}:
2×2 Tensors.SymmetricTensor{2,2,Float64,3}:
0.566237 -0.766797
-0.766797 0.590845
julia> inv(A)' * det(A)
2×2 ContMechTensors.SymmetricTensor{2,2,Float64,3}:
2×2 Tensors.SymmetricTensor{2,2,Float64,3}:
0.566237 -0.766797
-0.766797 0.590845
```
2 changes: 1 addition & 1 deletion docs/src/man/binary_operators.md
@@ -1,7 +1,7 @@
```@meta
DocTestSetup = quote
srand(1234)
using ContMechTensors
using Tensors
end
```


0 comments on commit 9fa996e
