docs updated
genmeblog committed Oct 10, 2023
1 parent 2edbd98 commit 3a0f0e8
Showing 9 changed files with 6,683 additions and 6,055 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -7,7 +7,7 @@
* join columns should consider `nil` as missing value only, [discussion](https://clojurians.zulipchat.com/#narrow/stream/236259-tech.2Eml.2Edataset.2Edev/topic/tablecloth.20join-columns.20is.20nil-ing.20false.3F.20values)
* `:nil-missing?` in more places needed (group-by operations), [discussion](https://clojurians.zulipchat.com/#narrow/stream/236259-tech.2Eml.2Edataset.2Edev/topic/tablecloth.20group.20operations.20dropping.20a.20nil.20group-by.20column)
* changes to the `group-by` documentation [PR115](https://github.com/scicloj/tablecloth/pull/115), thanks to [Marshall](https://github.com/mars0i)

* reflection warning for `Collections/shuffle` removed

## [7.007]

42 changes: 21 additions & 21 deletions README.md
@@ -30,28 +30,28 @@

While converting the examples I came up with a way to reorganize the
existing `tech.ml.dataset` functions into a simple-to-use API. The main
goals were:

- Focus on dataset manipulation functionality, leaving out other parts of
  `tech.ml` like pipelines, datatypes, readers, ML, etc.
- Single entry point for common operations - one function dispatching
  on the given arguments.
- `group-by` results in a special kind of dataset - a dataset
  containing the subsets created after grouping as a column.
- Most operations recognize regular and grouped datasets and
  process the data accordingly.
- One-function form to enable thread-first on a dataset.
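The grouped-dataset idea above can be sketched as follows (a minimal sketch assuming tablecloth is on the classpath; the column names in the toy data are illustrative):

```clojure
(require '[tablecloth.api :as tc])

;; a tiny dataset; :symbol and :price are made-up illustrative columns
(def ds (tc/dataset {:symbol ["AAPL" "AAPL" "MSFT"]
                     :price  [10 20 30]}))

;; group-by returns a *dataset* whose :data column holds the sub-datasets
(def grouped (tc/group-by ds :symbol))

(tc/grouped? grouped)
;; a grouped dataset typically carries :name, :group-id and :data columns
(tc/column-names grouped)
```

Because the result of `tc/group-by` is itself a dataset, it threads through the rest of the API just like a regular one.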

Important! This library is neither a replacement for `tech.ml.dataset`
nor a separate library. It should be considered an addition on top of
`tech.ml.dataset`.

If you want to know more about `tech.ml.dataset` and `dtype-next`,
please refer to their documentation:

- [tech.ml.dataset
  walkthrough](https://techascent.github.io/tech.ml.dataset/walkthrough.html)
- [dtype-next
  overview](https://cnuernber.github.io/dtype-next/overview.html)
- [dtype-next
  cheatsheet](https://cnuernber.github.io/dtype-next/cheatsheet.html)

Join the discussion on
@@ -82,7 +82,7 @@
examples](https://scicloj.github.io/tablecloth/index.html)
_unnamed [10 3]:

| :symbol | :year |      summary |
|---------|------:|-------------:|
| AAPL | 2000 | 21.74833333 |
| AAPL | 2001 | 10.17583333 |
| AAPL | 2002 | 9.40833333 |
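A summary table like the truncated one above might be produced along these lines (a hedged sketch: the grouping keys are taken from the visible output, while the toy data and the `:price` column are assumptions, not part of this diff):

```clojure
(require '[tablecloth.api :as tc])

;; toy price data; the real README uses stock data, values here are made up
(def stocks (tc/dataset {:symbol ["AAPL" "AAPL" "MSFT" "MSFT"]
                         :year   [2000 2000 2000 2001]
                         :price  [20.0 23.5 40.0 42.0]}))

;; group by symbol and year, then aggregate the mean price per group
(-> stocks
    (tc/group-by [:symbol :year])
    (tc/aggregate {:summary (fn [ds]
                              (/ (reduce + (ds :price))
                                 (count (ds :price))))}))
```

`tc/aggregate` runs the supplied function over each sub-dataset of the grouped dataset and stitches the results back into a regular dataset.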
@@ -106,8 +106,8 @@

Documentation is written in RMarkdown, which means that you need R to
create html/md/pdf files. The documentation contains around 600 code
snippets which are run during the build. There are two files:

- `README.Rmd`
- `docs/index.Rmd`

Prepare the following software:

@@ -135,15 +135,15 @@
run it before making documentation

### Guideline

1. Before committing changes please run the tests. I usually do
   `lein do clean, check, test` and build the documentation as described
   above (which also tests the whole library).
2. Keep the API as simple as possible:
    - the first argument should be a dataset
    - if parametrization is complex, the last argument should accept a
      map with optional (non-obligatory) function arguments
    - avoid variadic associative destructuring for function arguments
    - usually a function should work on a grouped dataset as well, and
      then accept a `parallel?` argument (if applicable)
3. Follow `potemkin` pattern and import functions to the API namespace
using `tech.v3.datatype.export-symbols/export-symbols` function
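The argument convention from point 2 can be sketched in plain Clojure (the function name, option key handling, and the vector-of-maps stand-in for a dataset are all hypothetical, used only to show the shape):

```clojure
;; Hypothetical helper following the guideline: dataset-like value first,
;; an options map with non-obligatory keys last. A real tablecloth
;; function would branch on tc/grouped? and honor parallel?; here the
;; option is accepted but unused, to illustrate the calling convention.
(defn keep-rows
  ([ds pred] (keep-rows ds pred {}))
  ([ds pred {:keys [parallel?] :or {parallel? false}}]
   (filterv pred ds)))

(keep-rows [{:a 1} {:a 2}] #(> (:a %) 1))
;; => [{:a 2}]
```

Keeping the options map last lets callers thread a dataset through `->` and add options only when they need them.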
@@ -155,8 +155,8 @@
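Point 3's potemkin-style re-export can be sketched like this (the implementation namespace and var names below are illustrative assumptions, not tablecloth's actual layout):

```clojure
(ns tablecloth.api
  (:require [tech.v3.datatype.export-symbols :as exporter]))

;; re-export selected vars from an implementation namespace into the
;; public API namespace (namespace and var names are illustrative)
(exporter/export-symbols tablecloth.api.dataset
                         dataset
                         shape)
```

This keeps implementation namespaces free to change while the public `tablecloth.api` namespace stays stable, and docstrings/arglists travel with the re-exported vars.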

## TODO

- tests
- tutorials

## Licence

6,131 changes: 3,388 additions & 2,743 deletions docs/index.html

Large diffs are not rendered by default.


0 comments on commit 3a0f0e8