From 9462f33ca81e964375a24ac74d85533c748c5ad2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Cl=C3=A9ment=20Berthet?=
Date: Mon, 2 Sep 2024 10:30:26 +0200
Subject: [PATCH] Releasecran/271 (#253)

* update github workflow

* Read cluster desc bugfix when cluster exists (#242)

* fix bug in readClusterDesc when no cluster + cover ST and RES cases

* add tests (init tests for thermal + res)

* Release/cran261 (#244) (#245): update with developments made during the CRAN release phase

* Add argument encoding in api_get() (#246)

* Add encoding argument to api_get()

* [readIniFile]: avoid `utils::type.convert` on specific cases (ex: 789e or 123i) (#247)

* Add specific case to avoid bad conversion

* Add unit test file for readIniFile

* Release/v8.7.0 (#224)

### Breaking changes (Antares v8.7.0):

* `readBindingConstraints()` now reads scenarized RHS for binding constraints
  (cf. Antares v8.7 changelog); the function returns a new list structure
* Private function `fread_antares()` no longer returns warnings
* `api_put()`/`api_delete()` return a server error message

BUGFIXES:

* `readBindingConstraints()` correctly reads studies >= v8.3.2

DATA:

* A test study in tar.gz format is available in version v8.7.0
* An empty test study in version v8.7.0 for marginal cases

Dependencies:

* New package `lifecycle` to manage function status/package status

* Dev readInputThermal: add area parameter + thermalavailabilities parameter
  + default matrix when no time series in cluster (#243)

* .importThermalData fixed to be compatible with daily data format

* fix thermal test + add RES test

* update

* Ant1777/dimbcgroup (#251)

* setSimulationPath() updated with a new private function to add metadata on
  the group dimension for binding constraints

* setSimulationPath() doc updated with the new parameter

* Read ST clusters with readAntares() (#252)

* Add .importOutputForSTClusters()

* Add 'clustersST' argument in readAntares()

* create a wrapper for calling .reshape_details_file()

* Simplify .get_value_columns_details_file() with a reference table

* Correct areasWithSTClusters parameter in .getSimOptions()

* Add unit tests for the 'areasWithSTClusters' parameter

* Add DETAILS_FILES_TYPE column in the simulation_variables_names_by_support.csv

* Update NEWS.md

* Make the new parameter 'clustersST' API compatible

* Update documentation with the new function .importOutputForSTClusters()

* Import simulation_variables_names_by_support.csv as data.frame rather than data.table

* Specify the separator for the simulation_variables_names_by_support.csv import

* Check reverse dependencies

* revdep check ok

* fix rhdf5 call reference

* Revdeps ok

* cleaning tests

* add cran-comments

* COMMENT THESE TESTS

* GitHub Actions workflow updated with r-hub v2

* delete package rhdf5 from Suggests, delete all functions that use it + delete tests

* cran comment updated with rev dep check ok

* update setSimulationPath(): no more .h5 check

* update tests to put them inside testthat

* comment, cran issue

* update study selection "grep" pattern in tests causing CRAN issue

* add test for path in setSimulationPath() + test + cleaning vignette

* update docs to add return values + some cleaning of references to private functions with (:::)

* Fix bad usage of options("warn")

* delete references to .GlobalEnv in test environment

* readAntares() updated to delete the `assign` function that modified the .GlobalEnv

* delete assign() function and refs to .GlobalEnv

* update cran-comment

* setSimulationPath(): delete `getwd()` to choose dir on Windows (CRAN policy)
---
 .github/workflows/rhub.yaml                   |  95 ++
 DESCRIPTION                                   |   7 +-
 NAMESPACE                                     |   6 +-
 NEWS.md                                       |  30 +-
 R/aggregateResult.R                           | 935 ++++++++---
 R/giveSize.R                                  |   2 +
 R/h5_antaresReadH5.R                          | 980 ------------------
 R/h5_readInputs.R                             |  54 -
 R/h5_setSimulationPathH5.R                    |  77 --
 R/h5_timeManagement.R                         | 133 ---
 R/h5_transformData.R                          |  36 -
 R/h5_writeData.R                              | 157 ---
 R/h5_writeH5ByYear.R                          | 492 ---------
 R/h5utils.R                                   |  26 -
 R/hvdcModification.R                          |   3 +
 R/importInput.R                               |   9 +-
 R/importOutput.R                              | 167 +--
 R/ponderateMcAggregation.R                    |   3 +
 R/readAntares.R                               |  99 +-
 R/readBindingConstraints.R                    |   7 -
 R/readClusterDesc.R                           |  14 +-
 R/readInputClusters.R                         | 170 ++-
 R/readLayout.R                                |  14 +-
 R/setHvdcAreas.R                              |   3 +-
 R/setSimulationPath.R                         | 121 ++-
 R/showAliases.R                               |   6 +-
 R/utils_api.R                                 |  14 +
 R/zzz.R                                       | 149 +--
 cran-comments.md                              |  26 +
 .../simulation_variables_names_by_support.csv |   9 +
 man/aggregatate_mc_all.Rd                     |   4 +
 man/antaresRead-package.Rd                    |   1 +
 man/dot-getOptionsH5.Rd                       |  15 -
 man/dot-h5ReadAntares.Rd                      |  71 --
 man/dot-writeAntaresH5Fun.Rd                  |  32 -
 man/dot-writeIni.Rd                           |  10 +-
 man/hvdcModification.Rd                       |   4 +
 man/isH5Opts.Rd                               |  14 -
 man/ponderateMcAggregation.Rd                 |   3 +
 man/readAntares.Rd                            |   8 +
 man/readInputRES.Rd                           |   5 +-
 man/readInputThermal.Rd                       |   8 +-
 man/setHvdcAreas.Rd                           |   3 +-
 man/setRam.Rd                                 |   3 +
 man/setSimulationPath.Rd                      |   3 +
 man/setTimeoutAPI.Rd                          |   4 +
 man/writeAntaresH5.Rd                         | 130 ---
 revdep/README.md                              |  20 +-
 tests/testthat/helper_init.R                  | 113 +-
 tests/testthat/test-.timeIdToDate.R           |   2 +-
 tests/testthat/test-aggregateResult.R         |  60 +-
 tests/testthat/test-h5ReadAntares.R           | 175 ----
 tests/testthat/test-h5_nodata.R               |  14 -
 tests/testthat/test-h5_readInputs.R           |  42 -
 tests/testthat/test-h5_setSimulationPathH5.R  |  19 -
 tests/testthat/test-h5_write.R                |  61 --
 tests/testthat/test-importOutputForClusters.R |  62 ++
 tests/testthat/test-ponderate.R               | 110 +-
 tests/testthat/test-readAntares_STclusters.R  |  15 +
 tests/testthat/test-readBindingConstraints.R  |   4 +-
 tests/testthat/test-readClusterDesc.R         |   2 +-
 tests/testthat/test-readInputClusters.R       | 104 +-
 tests/testthat/test-readInputTS.R             |   6 +-
 tests/testthat/test-readLayout.R              |   3 -
 tests/testthat/test-read_optim_criteria.R     |   3 -
 ...{test-setup.R => test-setSimulationPath.R} |  73 +-
 tests/testthat/test-viewAntares.R             |  86 +-
 vignettes/antaresH5.Rmd                       | 143 ---
 68 files changed, 1456 insertions(+), 3823 deletions(-)
 create mode 100644 .github/workflows/rhub.yaml
 delete mode 100644 R/h5_antaresReadH5.R
 delete mode 100644 R/h5_readInputs.R
 delete mode 100644 R/h5_setSimulationPathH5.R
 delete mode 100644 R/h5_timeManagement.R
 delete mode 100644 R/h5_transformData.R
 delete mode 100644 R/h5_writeData.R
 delete mode 100644 R/h5_writeH5ByYear.R
 delete mode 100644 R/h5utils.R
 create mode 100644 cran-comments.md
 create mode 100644 inst/format_output/simulation_variables_names_by_support.csv
 delete mode 100644 man/dot-getOptionsH5.Rd
 delete mode 100644 man/dot-h5ReadAntares.Rd
 delete mode 100644 man/dot-writeAntaresH5Fun.Rd
 delete mode 100644 man/isH5Opts.Rd
 delete mode 100644 man/writeAntaresH5.Rd
 delete mode 100644 tests/testthat/test-h5ReadAntares.R
 delete mode 100644 tests/testthat/test-h5_nodata.R
 delete mode 100644 tests/testthat/test-h5_readInputs.R
 delete mode 100644 tests/testthat/test-h5_setSimulationPathH5.R
 delete mode 100644 tests/testthat/test-h5_write.R
 create mode 100644 tests/testthat/test-importOutputForClusters.R
 create mode 100644 tests/testthat/test-readAntares_STclusters.R
 rename tests/testthat/{test-setup.R => test-setSimulationPath.R} (77%)
 delete mode 100644 vignettes/antaresH5.Rmd

diff --git a/.github/workflows/rhub.yaml b/.github/workflows/rhub.yaml
new file mode 100644
index 00000000..74ec7b05
--- /dev/null
+++ b/.github/workflows/rhub.yaml
@@ -0,0 +1,95 @@
+# R-hub's generic GitHub Actions workflow file. It's canonical location is at
+# https://github.com/r-hub/actions/blob/v1/workflows/rhub.yaml
+# You can update this file to a newer version using the rhub2 package:
+#
+# rhub::rhub_setup()
+#
+# It is unlikely that you need to modify this file manually.
+
+name: R-hub
+run-name: "${{ github.event.inputs.id }}: ${{ github.event.inputs.name || format('Manually run by {0}', github.triggering_actor) }}"
+
+on:
+  workflow_dispatch:
+    inputs:
+      config:
+        description: 'A comma separated list of R-hub platforms to use.'
+        type: string
+        default: 'linux,windows,macos'
+      name:
+        description: 'Run name. You can leave this empty now.'
+        type: string
+      id:
+        description: 'Unique ID. You can leave this empty now.'
+        type: string
+
+jobs:
+
+  setup:
+    runs-on: ubuntu-latest
+    outputs:
+      containers: ${{ steps.rhub-setup.outputs.containers }}
+      platforms: ${{ steps.rhub-setup.outputs.platforms }}
+
+    steps:
+    # NO NEED TO CHECKOUT HERE
+    - uses: r-hub/actions/setup@v1
+      with:
+        config: ${{ github.event.inputs.config }}
+      id: rhub-setup
+
+  linux-containers:
+    needs: setup
+    if: ${{ needs.setup.outputs.containers != '[]' }}
+    runs-on: ubuntu-latest
+    name: ${{ matrix.config.label }}
+    strategy:
+      fail-fast: false
+      matrix:
+        config: ${{ fromJson(needs.setup.outputs.containers) }}
+    container:
+      image: ${{ matrix.config.container }}
+
+    steps:
+    - uses: r-hub/actions/checkout@v1
+    - uses: r-hub/actions/platform-info@v1
+      with:
+        token: ${{ secrets.RHUB_TOKEN }}
+        job-config: ${{ matrix.config.job-config }}
+    - uses: r-hub/actions/setup-deps@v1
+      with:
+        token: ${{ secrets.RHUB_TOKEN }}
+        job-config: ${{ matrix.config.job-config }}
+    - uses: r-hub/actions/run-check@v1
+      with:
+        token: ${{ secrets.RHUB_TOKEN }}
+        job-config: ${{ matrix.config.job-config }}
+
+  other-platforms:
+    needs: setup
+    if: ${{ needs.setup.outputs.platforms != '[]' }}
+    runs-on: ${{ matrix.config.os }}
+    name: ${{ matrix.config.label }}
+    strategy:
+      fail-fast: false
+      matrix:
+        config: ${{ fromJson(needs.setup.outputs.platforms) }}
+
+    steps:
+    - uses: r-hub/actions/checkout@v1
+    - uses: r-hub/actions/setup-r@v1
+      with:
+        job-config: ${{ matrix.config.job-config }}
+        token: ${{ secrets.RHUB_TOKEN }}
+    - uses: r-hub/actions/platform-info@v1
+      with:
+        token: ${{ secrets.RHUB_TOKEN }}
+        job-config: ${{ matrix.config.job-config }}
+    - uses: r-hub/actions/setup-deps@v1
+      with:
+        job-config: ${{ matrix.config.job-config }}
+        token: ${{ secrets.RHUB_TOKEN }}
+    - uses: r-hub/actions/run-check@v1
+      with:
+        job-config: ${{ matrix.config.job-config }}
+        token: ${{ secrets.RHUB_TOKEN }}

diff --git a/DESCRIPTION b/DESCRIPTION
index 6cbe66ea..08d8dd60 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,7 +1,7 @@
 Package: antaresRead
 Type: Package
 Title: Import, Manipulate and Explore the Results of an 'Antares' Simulation
-Version: 2.7.0
+Version: 2.7.1
 Authors@R: c(
     person("Tatiana", "Vargas", email = "tatiana.vargas@rte-france.com", role = c("aut", "cre")),
     person("Jalal-Edine", "ZAWAM", role = "aut"),
@@ -15,6 +15,7 @@ Authors@R: c(
     person("Clement", "Berthet", role = "ctb"),
     person("Kamel", "Kemiha", role = "ctb"),
     person("Abdallah", "Mahoudi", role = "ctb"),
+    person("Nicolas", "Boitard", role = "ctb"),
     person("RTE", role = "cph")
     )
 Description: Import, manipulate and explore results generated by 'Antares', a
@@ -40,9 +41,9 @@ Imports:
     utils,
     memuse,
     purrr,
-    lifecycle
+    lifecycle,
+    assertthat
 Suggests:
-    rhdf5 (>= 2.24.0),
     testthat,
     covr,
     knitr,
diff --git a/NAMESPACE b/NAMESPACE
index a3692c35..2dd11f06 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -21,9 +21,6 @@ S3method(subset,antaresDataList)
 S3method(summary,bindingConstraints)
 S3method(viewAntares,antaresDataList)
 S3method(viewAntares,default)
-export(.getOptionsH5)
-export(.h5ReadAntares)
-export(.writeAntaresH5Fun)
 export(aggregateResult)
 export(api_delete)
 export(api_get)
@@ -41,7 +38,6 @@ export(getGeographicTrimming)
 export(getIdCols)
 export(getLinks)
 export(hvdcModification)
-export(isH5Opts)
 export(mergeDigests)
 export(parAggregateMCall)
 export(ponderateMcAggregation)
@@ -71,7 +67,6 @@ export(setTimeoutAPI)
 export(showAliases)
 export(simOptions)
 export(viewAntares)
-export(writeAntaresH5)
 export(writeDigest)
 import(bit64)
 import(data.table)
@@ -79,6 +74,7 @@ import(doParallel)
 import(jsonlite)
 import(parallel)
 import(plyr)
+importFrom(assertthat,assert_that)
 importFrom(doParallel,registerDoParallel)
 importFrom(grDevices,col2rgb)
 importFrom(grDevices,rgb)
diff --git a/NEWS.md b/NEWS.md
index ec4f1984..f5bdf368 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,28 @@
 > Copyright © 2016 RTE Réseau de transport d’électricité
 
+# antaresRead 2.7.1
+
+NEW FEATURES:
+
+* `readInputThermal()` :
+  - new parameter **areas** to get desired clusters from selected areas.
+  - new parameter **thermalAvailabilities** to import time series.
+* `readInputRES()` new parameter **areas** to get desired clusters from selected areas.
+* `setSimulationPath()` return a new parameter `binding` (for studies >= v8.7.0).
+It contains a table with group dimensions of time series for binding constraints.
+* `readAntares()` new parameter **clustersST** to read (output simulation) short-term clusters
+
+BREAKING CHANGES :
+
+* `readInputThermal()` / `readInputRES()` default value when no time series in the selected clusters.
+
+BUGFIXES :
+
+* `readInputThermal()` return data from file data.txt with `thermalData` parameter
+* `setSimulationPath()` has also the parameter **areasWithSTClusters** in 'output' mode
+
+
+
 # antaresRead 2.7.0
 
 ### Breaking changes (Antares v8.7.0) :
@@ -23,7 +46,8 @@ Dependencies :
 
 * New package `lifecycle` to manage functions status/package status
 
-# antaresRead 2.6.2 (development)
+
+# antaresRead 2.6.2
 
 BUGFIXES :
 * `readIniFile()` : avoid `utils::type.convert` on specific cases (ex : 789e or 123i)
@@ -45,6 +69,8 @@ BUGFIXES :
 
 BREAKING CHANGES :
 
 * `api_get()` has a new parameter to control JSON file parsing
+* `readInputThermal()` default value when no time series in the selected clusters.
+* `readInputRES()` default value when no time series in the selected clusters
 * `readClusterDesc()`/ `readClusterRESDesc()` / `readClusterSTDesc()` return empty dataTable and warning if no cluster in Antares study.
@@ -60,7 +86,7 @@ BREAKING CHANGES (Antares v8.6) : * `readInputTS()` is now compatible to read time series with : - "short-term storage" - "mingen" (pmin hydro value) -* `setSimulationPath()` has new parameter `areasWithSTClusters` (name of area with "st-storage" cluster) +* `setSimulationPath()` has new parameter **areasWithSTClusters** (name of area with "st-storage" cluster) BUGFIXES : diff --git a/R/aggregateResult.R b/R/aggregateResult.R index b215e452..1b11d06a 100644 --- a/R/aggregateResult.R +++ b/R/aggregateResult.R @@ -22,6 +22,9 @@ #' @importFrom stringr str_split #' @importFrom stringi stri_replace_last_fixed #' @importFrom memuse Sys.meminfo +#' +#' @return Object `list` of data.tables, each element representing one type +#' of element (areas, links, clusters) #' #' @export #' @@ -301,15 +304,15 @@ parAggregateMCall <- function(opts, } #Calcul of sd - oldw <- getOption("warn") - options(warn = -1) b <- Sys.time() coef_div_var = (coef_div_mc_pond )#- coef_div_mc_pond_2 / coef_div_mc_pond for (elmt in c("areas","links","clusters","clustersRes")){ if (!is.null(value[[elmt]])){ - value[[elmt]]$std <- sqrt(value[[elmt]]$var / coef_div_var) + suppressWarnings( + value[[elmt]]$std <- sqrt(value[[elmt]]$var / coef_div_var) + ) #nan due to round for (i in names(value[[elmt]]$std)) value[[elmt]]$std[is.nan(get(i)), (i) := 0] @@ -327,8 +330,6 @@ parAggregateMCall <- function(opts, }) } - options(warn = oldw) - for (itm in names(value)){ if(!is.null(value[[itm]])){ value[[itm]]$sumC <- NULL @@ -604,13 +605,13 @@ parAggregateMCall <- function(opts, } } LOLD_data[, isLOLD_cum := 100 * isLOLD_cum/sum(mcWeights)] - assign("LOLD_data", LOLD_data, envir = parent.env(environment())) } }) - #browser() + # browser() .addMessage(verbose, paste0("------- End Mc-all : ", type, " -------")) - .formatOutput( lapply(value, function(X)(Reduce(cbind, X))), struct) + outputlist <- .formatOutput( lapply(value, function(X)(Reduce(cbind, X))), struct) + list(outputlist, LOLD_data) }, 
verbose = verbose, simplify = FALSE) @@ -620,7 +621,7 @@ parAggregateMCall <- function(opts, # Create grid folder#### .gridFolderCreation(opts, verbose) # Create digest#### - suppressWarnings(.writeDigestFile(opts, output, tmstp, linkTable, verbose, LOLD_data)) + suppressWarnings(.writeDigestFile(opts, output[[1]], tmstp, linkTable, verbose, LOLD_data = output[[1]][[2]])) } mc_all <- file.path(opts$simDataPath, "mc-all") @@ -1043,7 +1044,6 @@ aggregateResult <- function(opts, .formatOutput <- function(out, struct){ # out <- lapply(value, function(X)(Reduce(cbind, X))) - # browser() for(i in names(struct)){ if(is.null(nrow(struct[[i]]))){ struct[[i]] <- NULL @@ -1417,9 +1417,9 @@ pmax.fast <- function(k,x) (x+k + abs(x-k))/2 # From antaresFlowbased duplicated necessary -#' Write ini file from list obtain by antaresRead:::readIniFile and modify by user +#' Write ini file from list obtain by antaresRead::readIniFile and modify by user #' -#' @param listData \code{list}, modified list obtained by antaresRead:::readIniFile. +#' @param listData \code{list}, modified list obtained by antaresRead::readIniFile. #' @param pathIni \code{Character}, Path to ini file. #' @param overwrite logical, should file be overwritten if already exist? 
#' @@ -1427,11 +1427,12 @@ pmax.fast <- function(k,x) (x+k + abs(x-k))/2 #' #' \dontrun{ #' pathIni <- "D:/exemple_test/settings/generaldata.ini" -#' generalSetting <- antaresRead:::readIniFile(pathIni) +#' generalSetting <- antaresRead::readIniFile(pathIni) #' generalSetting$output$synthesis <- FALSE #' writeIni(generalSetting, pathIni) #' } #' +#' @keywords internal #' .writeIni <- function(listData, pathIni, overwrite = FALSE) { if (file.exists(pathIni)) { @@ -1572,12 +1573,9 @@ aggregateResult_old <- function(opts, verbose = 1, }) } - - - oldw <- getOption("warn") - options(warn = -1) - opts <- setSimulationPath(opts$simPath) - options(warn = oldw) + suppressWarnings( + opts <- setSimulationPath(opts$simPath) + ) # Version which readAntares linkTable <- try({ @@ -1620,517 +1618,507 @@ aggregateResult_old <- function(opts, verbose = 1, .addMessage(verbose, paste0("------- Mc-all : ", type, " -------")) - try({ - - # browser() - # load first MC-year - a <- Sys.time() - - oldw <- getOption("warn") - options(warn = -1) - - if(!filtering) - { - dta <- readAntares(areas = "all", links = "all", clusters = "all", - clustersRes = "all", timeStep = type, simplify = FALSE, - mcYears = numMc[1], showProgress = FALSE) - } else { - if(is.null(selected)){ - - areasselect <- .getAreasToAggregate(opts, type) - linksSelect <- .getLinksToAggregate(opts, type) - dta <- readAntares(areas = areasselect, - links = linksSelect, - clusters = areasselect, - clustersRes = areasselect, - timeStep = type, - simplify = FALSE, - mcYears = numMc[1], - showProgress = FALSE) + suppressWarnings( + try({ + + # browser() + # load first MC-year + a <- Sys.time() + + if(!filtering) + { + dta <- readAntares(areas = "all", links = "all", clusters = "all", + clustersRes = "all", timeStep = type, simplify = FALSE, + mcYears = numMc[1], showProgress = FALSE) } else { - dta <- readAntares(areas = selected$areas, - links = selected$links, - clusters = selected[["clusters"]], - clustersRes = 
selected[["clustersRes"]], - timeStep = type, - simplify = FALSE, - mcYears = numMc[1], - showProgress = FALSE) - + if(is.null(selected)){ + + areasselect <- .getAreasToAggregate(opts, type) + linksSelect <- .getLinksToAggregate(opts, type) + dta <- readAntares(areas = areasselect, + links = linksSelect, + clusters = areasselect, + clustersRes = areasselect, + timeStep = type, + simplify = FALSE, + mcYears = numMc[1], + showProgress = FALSE) + } else { + dta <- readAntares(areas = selected$areas, + links = selected$links, + clusters = selected[["clusters"]], + clustersRes = selected[["clustersRes"]], + timeStep = type, + simplify = FALSE, + mcYears = numMc[1], + showProgress = FALSE) + + } } - } - - options(warn = oldw) - - if(length(dta)>0){ - dtaLoadAndcalcul <- try({ - - aTot <- as.numeric(Sys.time() - a) - - SDcolsStartareas <- switch(type, - daily = 6, - annual = 4, - hourly = 7, - monthly = 5, - weekly = 4 - ) - - SDcolsStartClust <- SDcolsStartareas + 1 - #make structure - - struct <- list() - if(!is.null(dta[["areas"]])){ - struct$areas <- dta$areas[,.SD, .SDcols = 1:SDcolsStartareas] - } - - if(!is.null(dta[["links"]])){ - struct$links <- dta$links[,.SD, .SDcols = 1:SDcolsStartareas] - } - - if(!is.null(dta[["clusters"]])){ - struct$clusters <- dta[["clusters"]][,.SD, .SDcols = 1:SDcolsStartClust] - } - - if(!is.null(dta[["clustersRes"]])){ - struct$clustersRes <- dta[["clustersRes"]][,.SD, .SDcols = 1:SDcolsStartClust] - } + if(length(dta)>0){ - if(type == "weekly"){ - if(!is.null(struct[["areas"]])){ - struct$areas$timeId <- as.numeric(substr(struct$areas$time, nchar(as.character(struct$areas$time[1]))-1, - nchar(as.character(struct$areas$time[1])))) + dtaLoadAndcalcul <- try({ + + aTot <- as.numeric(Sys.time() - a) + + SDcolsStartareas <- switch(type, + daily = 6, + annual = 4, + hourly = 7, + monthly = 5, + weekly = 4 + ) + + SDcolsStartClust <- SDcolsStartareas + 1 + #make structure + + struct <- list() + if(!is.null(dta[["areas"]])){ + struct$areas 
<- dta$areas[,.SD, .SDcols = 1:SDcolsStartareas] } - if(!is.null(struct[["links"]])){ - struct$links$timeId <- as.numeric(substr(struct$link$time, nchar(as.character(struct$link$time[1]))-1, - nchar(as.character(struct$link$time[1])))) + if(!is.null(dta[["links"]])){ + struct$links <- dta$links[,.SD, .SDcols = 1:SDcolsStartareas] } - if(!is.null(struct[["clusters"]])){ - struct$clusters$timeId <- as.numeric(substr(struct$clusters$time, nchar(as.character(struct$clusters$time[1]))-1, - nchar(as.character(struct$clusters$time[1])))) + if(!is.null(dta[["clusters"]])){ + struct$clusters <- dta[["clusters"]][,.SD, .SDcols = 1:SDcolsStartClust] } - if(!is.null(struct[["clustersRes"]])){ - struct$clustersRes$timeId <- as.numeric(substr(struct$clustersRes$time, nchar(as.character(struct$clustersRes$time[1]))-1, - nchar(as.character(struct$clustersRes$time[1])))) + if(!is.null(dta[["clustersRes"]])){ + struct$clustersRes <- dta[["clustersRes"]][,.SD, .SDcols = 1:SDcolsStartClust] } - } - - if(!is.null(struct$areas$day)){ - struct$areas$day <- ifelse(nchar(struct$areas$day) == 1, - paste0("0", struct$areas$day), - as.character(struct$areas$day)) - } - if(!is.null(struct$links$day)){ - struct$links$day <- ifelse(nchar(struct$links$day) == 1, - paste0("0", struct$links$day), - as.character(struct$links$day)) - } - if(!is.null(struct[["clusters"]]$day)){ - struct$clusters$day <- ifelse(nchar(struct$clusters$day) == 1, - paste0("0", struct$clusters$day), - as.character(struct$clusters$day)) - } - - if(!is.null(struct[["clustersRes"]]$day)){ - struct$clustersRes$day <- ifelse(nchar(struct$clustersRes$day) == 1, - paste0("0", struct$clustersRes$day), - as.character(struct$clustersRes$day)) - } - - b <- Sys.time() - #value structure - value <- .giveValue(dta, SDcolsStartareas, SDcolsStartClust) - N <- length(numMc) - - W_sum = 0 - w_sum2 = 0 - mean_m = 0 - S = 0 - - value <- lapply(value, function(X){.creatStats(X, W_sum, w_sum2, mean_m, S, mcWeights[1])}) - - btot <- 
as.numeric(Sys.time() - b) - if(verbose>0) - { - try({ - .progBar(pb, type, 1, N, coef) - }) - } - #sequentially add values - if(N>1) - { - for(i in 2:N){ - a <- Sys.time() + + if(type == "weekly"){ + if(!is.null(struct[["areas"]])){ + struct$areas$timeId <- as.numeric(substr(struct$areas$time, nchar(as.character(struct$areas$time[1]))-1, + nchar(as.character(struct$areas$time[1])))) + } - oldw <- getOption("warn") - options(warn = -1) + if(!is.null(struct[["links"]])){ + struct$links$timeId <- as.numeric(substr(struct$link$time, nchar(as.character(struct$link$time[1]))-1, + nchar(as.character(struct$link$time[1])))) + } - if(!filtering) - { - dtaTP <- readAntares(areas = "all", links = "all", clusters = "all", clustersRes = "all", - timeStep = type, simplify = FALSE, mcYears = numMc[i], showProgress = FALSE) - } else { + if(!is.null(struct[["clusters"]])){ + struct$clusters$timeId <- as.numeric(substr(struct$clusters$time, nchar(as.character(struct$clusters$time[1]))-1, + nchar(as.character(struct$clusters$time[1])))) + } + + if(!is.null(struct[["clustersRes"]])){ + struct$clustersRes$timeId <- as.numeric(substr(struct$clustersRes$time, nchar(as.character(struct$clustersRes$time[1]))-1, + nchar(as.character(struct$clustersRes$time[1])))) + } + } + + if(!is.null(struct$areas$day)){ + struct$areas$day <- ifelse(nchar(struct$areas$day) == 1, + paste0("0", struct$areas$day), + as.character(struct$areas$day)) + } + if(!is.null(struct$links$day)){ + struct$links$day <- ifelse(nchar(struct$links$day) == 1, + paste0("0", struct$links$day), + as.character(struct$links$day)) + } + if(!is.null(struct[["clusters"]]$day)){ + struct$clusters$day <- ifelse(nchar(struct$clusters$day) == 1, + paste0("0", struct$clusters$day), + as.character(struct$clusters$day)) + } + + if(!is.null(struct[["clustersRes"]]$day)){ + struct$clustersRes$day <- ifelse(nchar(struct$clustersRes$day) == 1, + paste0("0", struct$clustersRes$day), + as.character(struct$clustersRes$day)) + } + + b <- 
Sys.time() + #value structure + value <- .giveValue(dta, SDcolsStartareas, SDcolsStartClust) + N <- length(numMc) + + W_sum = 0 + w_sum2 = 0 + mean_m = 0 + S = 0 + + value <- lapply(value, function(X){.creatStats(X, W_sum, w_sum2, mean_m, S, mcWeights[1])}) + + btot <- as.numeric(Sys.time() - b) + if(verbose>0) + { + try({ + .progBar(pb, type, 1, N, coef) + }) + } + #sequentially add values + if(N>1) + { + for(i in 2:N){ + a <- Sys.time() + + if(!filtering) + { + dtaTP <- readAntares(areas = "all", links = "all", clusters = "all", clustersRes = "all", + timeStep = type, simplify = FALSE, mcYears = numMc[i], showProgress = FALSE) + } else { + + if(is.null(selected)){ + + dtaTP <- readAntares(areas = areasselect, + links = linksSelect, + clusters = areasselect, + clustersRes = areasselect, + timeStep = type, simplify = FALSE, + mcYears = numMc[i], showProgress = FALSE) + } else{ + dtaTP <- readAntares(areas = selected$areas, + links = selected$links, + clusters = selected[["clusters"]], + clustersRes = selected[["clustersRes"]], + timeStep = type, + simplify = FALSE, + mcYears = numMc[i], + showProgress = FALSE) + + } + } + + aTot <- aTot + as.numeric(Sys.time() - a) + b <- Sys.time() - if(is.null(selected)){ + valueTP <- .giveValue(dtaTP, SDcolsStartareas, SDcolsStartClust) + + nmKeep <- names(valueTP) + + valueTP <- lapply(names(valueTP), function(X){ - dtaTP <- readAntares(areas = areasselect, - links = linksSelect, - clusters = areasselect, - clustersRes = areasselect, - timeStep = type, simplify = FALSE, - mcYears = numMc[i], showProgress = FALSE) - } else{ - dtaTP <- readAntares(areas = selected$areas, - links = selected$links, - clusters = selected[["clusters"]], - clustersRes = selected[["clustersRes"]], - timeStep = type, - simplify = FALSE, - mcYears = numMc[i], - showProgress = FALSE) + .creatStats(valueTP[[X]], value[[X]]$W_sum, value[[X]]$w_sum2, value[[X]]$mean_m, value[[X]]$S , mcWeights[i]) + }) + + names(valueTP) <- nmKeep + + # valueTP <- 
mapply(function(X, Y){.creatStats(X, Y$W_sum, Y$w_sum2, Y$mean_m, Y$S , mcWeights[i])}, X = valueTP, Y = value, SIMPLIFY = FALSE) + + value$areas <- .updateStats(value[["areas"]], valueTP[["areas"]]) + value$links <- .updateStats(value[["links"]], valueTP[["links"]]) + value$clusters <- .updateStats(value[["clusters"]], valueTP[["clusters"]]) + value$clustersRes <- .updateStats(value[["clustersRes"]], valueTP[["clustersRes"]]) + + btot <- btot + as.numeric(Sys.time() - b) + if(verbose>0) + { + try({ + .progBar(pb, type, i, N, coef) + }) } } - options(warn = oldw) - aTot <- aTot + as.numeric(Sys.time() - a) + #Calcul of sd b <- Sys.time() - valueTP <- .giveValue(dtaTP, SDcolsStartareas, SDcolsStartClust) - - nmKeep <- names(valueTP) - - valueTP <- lapply(names(valueTP), function(X){ - - .creatStats(valueTP[[X]], value[[X]]$W_sum, value[[X]]$w_sum2, value[[X]]$mean_m, value[[X]]$S , mcWeights[i]) - - }) - - names(valueTP) <- nmKeep + coef_div_var = (coef_div_mc_pond ) + value$areas$std <- sqrt(value$areas$var / coef_div_var) + #nan due to round + for (i in names(value$areas$std)) + value$areas$std[is.nan(get(i)), (i) := 0] - # valueTP <- mapply(function(X, Y){.creatStats(X, Y$W_sum, Y$w_sum2, Y$mean_m, Y$S , mcWeights[i])}, X = valueTP, Y = value, SIMPLIFY = FALSE) + value$links$std <- sqrt(value$links$var / coef_div_var) + #nan due to round + for (i in names(value$links$std)) + value$links$std[is.nan(get(i)), (i) := 0] - value$areas <- .updateStats(value[["areas"]], valueTP[["areas"]]) - value$links <- .updateStats(value[["links"]], valueTP[["links"]]) - value$clusters <- .updateStats(value[["clusters"]], valueTP[["clusters"]]) - value$clustersRes <- .updateStats(value[["clustersRes"]], valueTP[["clustersRes"]]) + if(!is.null(value[["clusters"]])){ + value$clusters$std <- sqrt(value$clusters$var / coef_div_var) + #nan due to round + for (i in names(value$clusters$std)) + value$clusters$std[is.nan(get(i)), (i) := 0] + } - btot <- btot + as.numeric(Sys.time() - b) - 
if(verbose>0) - { - try({ - .progBar(pb, type, i, N, coef) - }) + if(!is.null(value[["clustersRes"]])){ + value$clustersRes$std <- sqrt(value$clustersRes$var / coef_div_var) + #nan due to round + for (i in names(value$clustersRes$std)) + value$clustersRes$std[is.nan(get(i)), (i) := 0] } + } else { + # std to 0 + value <- lapply(value, function(x){ + if(!is.null(x$sumC)){ + x$std <- x$sumC + x$std[, c(colnames(x$std)) := lapply(.SD, function(x) 0), .SDcols = colnames(x$std)] + colnames(x$std) <- gsub("_std$", "", colnames(x$std)) + x + } + }) } - #Calcul of sd - oldw <- getOption("warn") - options(warn = -1) - b <- Sys.time() + value$areas$sumC <- NULL + value$links$sumC <- NULL - coef_div_var = (coef_div_mc_pond )#- coef_div_mc_pond_2 / coef_div_mc_pond - value$areas$std <- sqrt(value$areas$var / coef_div_var) - #nan due to round - for (i in names(value$areas$std)) - value$areas$std[is.nan(get(i)), (i) := 0] + if(!is.null(value[["clusters"]])){ + value$clusters$sumC <- NULL + } + if(!is.null(value[["clustersRes"]])){ + value$clustersRes$sumC <- NULL + } + value$areas$var <- NULL + value$areas$S <- NULL + value$areas$W_sum <- NULL + value$areas$w_sum2 <- NULL + value$areas$mean_m <- NULL - value$links$std <- sqrt(value$links$var / coef_div_var) - #nan due to round - for (i in names(value$links$std)) - value$links$std[is.nan(get(i)), (i) := 0] + value$links$var <- NULL + value$links$S <- NULL + value$links$W_sum <- NULL + value$links$w_sum2 <- NULL + value$links$mean_m <- NULL if(!is.null(value[["clusters"]])){ - value$clusters$std <- sqrt(value$clusters$var / coef_div_var) - #nan due to round - for (i in names(value$clusters$std)) - value$clusters$std[is.nan(get(i)), (i) := 0] + value$clusters$var <- NULL + value$clusters$S <- NULL + value$clusters$W_sum <- NULL + value$clusters$w_sum2 <- NULL + value$clusters$mean_m <- NULL } if(!is.null(value[["clustersRes"]])){ - value$clustersRes$std <- sqrt(value$clustersRes$var / coef_div_var) - #nan due to round - for (i in 
names(value$clustersRes$std)) - value$clustersRes$std[is.nan(get(i)), (i) := 0] + value$clustersRes$var <- NULL + value$clustersRes$S <- NULL + value$clustersRes$W_sum <- NULL + value$clustersRes$w_sum2 <- NULL + value$clustersRes$mean_m <- NULL } - } else { - # std to 0 - value <- lapply(value, function(x){ - if(!is.null(x$sumC)){ - x$std <- x$sumC - x$std[, c(colnames(x$std)) := lapply(.SD, function(x) 0), .SDcols = colnames(x$std)] - colnames(x$std) <- gsub("_std$", "", colnames(x$std)) - x - } - }) - } - - options(warn = oldw) - value$areas$sumC <- NULL - value$links$sumC <- NULL - - if(!is.null(value[["clusters"]])){ - value$clusters$sumC <- NULL - } - if(!is.null(value[["clustersRes"]])){ - value$clustersRes$sumC <- NULL - } - value$areas$var <- NULL - value$areas$S <- NULL - value$areas$W_sum <- NULL - value$areas$w_sum2 <- NULL - value$areas$mean_m <- NULL - - value$links$var <- NULL - value$links$S <- NULL - value$links$W_sum <- NULL - value$links$w_sum2 <- NULL - value$links$mean_m <- NULL - - if(!is.null(value[["clusters"]])){ - value$clusters$var <- NULL - value$clusters$S <- NULL - value$clusters$W_sum <- NULL - value$clusters$w_sum2 <- NULL - value$clusters$mean_m <- NULL - } - - if(!is.null(value[["clustersRes"]])){ - value$clustersRes$var <- NULL - value$clustersRes$S <- NULL - value$clustersRes$W_sum <- NULL - value$clustersRes$w_sum2 <- NULL - value$clustersRes$mean_m <- NULL - } - - if(!is.null(value$areas) && !is.null(names(value$areas$std))){names(value$areas$std) <- paste0(names(value$areas$std) , "_std")} - if(!is.null(value$links) && !is.null(names(value$links$std))){names(value$links$std) <- paste0(names(value$links$std) , "_std")} - if(!is.null(value[["clusters"]]) && !is.null(names(value[["clusters"]]$std))){names(value[["clusters"]]$std) <- paste0(names(value[["clusters"]]$std) , "_std")} - if(!is.null(value[["clustersRes"]]) && !is.null(names(value[["clustersRes"]]$std))){names(value[["clustersRes"]]$std) <- 
paste0(names(value[["clustersRes"]]$std) , "_std")} - - value$areas$sum <- value$areas$sum / coef_div_mc_pond - value$links$sum <- value$links$sum / coef_div_mc_pond - if(!is.null(value[["clusters"]])){ - value$clusters$sum <- value$clusters$sum / coef_div_mc_pond - } - if(!is.null(value[["clustersRes"]])){ - value$clustersRes$sum <- value$clustersRes$sum / coef_div_mc_pond - } - + + if(!is.null(value$areas) && !is.null(names(value$areas$std))){names(value$areas$std) <- paste0(names(value$areas$std) , "_std")} + if(!is.null(value$links) && !is.null(names(value$links$std))){names(value$links$std) <- paste0(names(value$links$std) , "_std")} + if(!is.null(value[["clusters"]]) && !is.null(names(value[["clusters"]]$std))){names(value[["clusters"]]$std) <- paste0(names(value[["clusters"]]$std) , "_std")} + if(!is.null(value[["clustersRes"]]) && !is.null(names(value[["clustersRes"]]$std))){names(value[["clustersRes"]]$std) <- paste0(names(value[["clustersRes"]]$std) , "_std")} + + value$areas$sum <- value$areas$sum / coef_div_mc_pond + value$links$sum <- value$links$sum / coef_div_mc_pond + if(!is.null(value[["clusters"]])){ + value$clusters$sum <- value$clusters$sum / coef_div_mc_pond + } + if(!is.null(value[["clustersRes"]])){ + value$clustersRes$sum <- value$clustersRes$sum / coef_div_mc_pond + } + + + btot <- btot + as.numeric(Sys.time() - b) + .addMessage(verbose, paste0("Time for reading data : ", round(aTot,1), " seconds")) + .addMessage(verbose, paste0("Time for calculating : ", round(btot,1), " seconds")) + }, silent = TRUE) - btot <- btot + as.numeric(Sys.time() - b) - .addMessage(verbose, paste0("Time for reading data : ", round(aTot,1), " secondes")) - .addMessage(verbose, paste0("Time for calculating : ", round(btot,1), " secondes")) - }, silent = TRUE) - - .errorTest(dtaLoadAndcalcul, verbose, "\nLoad data and calcul") - - # browser() - - #Write area - allfiles <- c("values") - - if(writeOutput == FALSE){ - if(verbose>0) - { - .progBar(pb, type, 1, 1, 1,
terminate = TRUE) - } + .errorTest(dtaLoadAndcalcul, verbose, "\nLoad data and calculation") - return(.formatOutput( lapply(value, function(X)(Reduce(cbind, X))), struct)) - } else { + # browser() - if(!is.null(value$clustersRes) && is.data.frame(value$clustersRes) && nrow(value$clustersRes) > 0){ - warning("Writing clusterRes file is not at moment available") - } + #Write area + allfiles <- c("values") - areaWrite <- try(sapply(allfiles, function(f) - { - #prepare data for all country - areaSpecialFile <- linkTable[Folder == "area" & Files == f & Mode == tolower(opts$mode)] - namekeep <- paste(areaSpecialFile$Name, areaSpecialFile$Stats) - namekeepprog <- paste(areaSpecialFile$Name, areaSpecialFile$progNam) - areas <- cbind(value$areas$sum, value$areas$std, value$areas$min, value$areas$max) - if(nrow(areas) > 0) + if(writeOutput == FALSE){ + if(verbose>0) { - - areas <- areas[, .SD, .SDcols = which(names(areas)%in%opts$variables$areas)] - areas <- areas[, .SD, .SDcols = match(opts$variables$areas, names(areas))] - - nbvar <- ncol(areas) - areas <- cbind(struct$areas, areas) - ncolFix <- ncol(struct$areas) - 3 - areas[, c("mcYear", "time") := NULL] - allAreas <- unique(areas$area) - - for(i in 1:length(opts$variables$areas)) - { - var <- opts$variables$areas[i] - dig <- areaSpecialFile[var == paste(Name,progNam )]$digits - if(length(dig)>0)areas[, c(var) := .(do.call(round, args = list(get(var), digits = dig)))] - } - - - if(length(allAreas) > 0) - { - sapply(allAreas, function(areasel){ - #for each country prepare file - areastowrite <- areas[area == areasel] - areastowrite[,c("area") := NULL] - indexMin <- min(areas$timeId) - indexMax <- max(areas$timeId) - kepNam <- names(struct$areas)[!names(struct$areas)%in%c("area","mcYear","time")] - nameIndex <- ifelse(type == "weekly", "week", "index") - kepNam[which(kepNam == "timeId")] <- nameIndex - #write txt - .writeFileOut(dta = areastowrite, timestep = type, fileType = f, - ctry = areasel, opts = opts, folderType =
"areas", nbvar = nbvar, - indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, - nomcair = areaSpecialFile$Name, unit = areaSpecialFile$Unit, - nomStruct = kepNam,Stats = areaSpecialFile$Stats) - - - }) - } + .progBar(pb, type, 1, 1, 1, terminate = TRUE) } - }), silent = TRUE) - - .errorTest(areaWrite, verbose, "Area write") - - allfiles <- c("values") - linkWrite <- try(sapply(allfiles, function(f) - { - #prepare data for all link - linkSpecialFile <- linkTable[Folder == "link" & Files == f & Mode == tolower(opts$mode)] - namekeep <- paste(linkSpecialFile$Name, linkSpecialFile$Stats) - namekeepprog <- paste(linkSpecialFile$Name, linkSpecialFile$progNam) - links <- cbind(value$links$sum, value$links$std, value$links$min, value$links$max) - if(nrow(links) > 0) - { - - - links <- links[, .SD, .SDcols = which(names(links)%in%opts$variables$links)] - links <- links[, .SD, .SDcols = match(opts$variables$links, names(links))] - - # - # areas <- areas[, .SD, .SDcols = which(names(areas)%in%opts$variables$links)] - # areas <- areas[, .SD, .SDcols = match(opts$variables$areas, names(areas))] - # - # - - nbvar <- ncol(links) - links <- cbind(struct$links, links) - ncolFix <- ncol(struct$links)-3 - links[, c("mcYear", "time") := NULL] - allLink<- unique(links$link) - - for(i in 1:length(opts$variables$links)) - { - var <- opts$variables$links[i] - dig <- linkSpecialFile[var == paste(Name,progNam )]$digits - if(length(dig)>0)links[, c(var) := .(do.call(round, args = list(get(var), digits = dig)))] - } - - sapply(allLink, function(linksel){ - #for eatch link prepare file - linkstowrite <- links[link == linksel] - linkstowrite[,c("link") := NULL] - indexMin <- min(links$timeId) - indexMax <- max(links$timeId) - kepNam <- names(struct$link)[!names(struct$link)%in%c("link","mcYear","time")] - nameIndex <- ifelse(type == "weekly", "week", "index") - kepNam[which(kepNam == "timeId")] <- nameIndex - #write txt - .writeFileOut(dta = linkstowrite, timestep = type, fileType = f, 
- ctry = linksel, opts = opts, folderType = "links", nbvar = nbvar, - indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, - nomcair = linkSpecialFile$Name, unit = linkSpecialFile$Unit, - nomStruct = kepNam,Stats = linkSpecialFile$Stats) - }) + + return(.formatOutput( lapply(value, function(X)(Reduce(cbind, X))), struct)) + } else { + + if(!is.null(value$clustersRes) && is.data.frame(value$clustersRes) && nrow(value$clustersRes) > 0){ + warning("Writing clusterRes file is not available at the moment") } - }), silent = TRUE) - - .errorTest(linkWrite, verbose, "Link write") - - ##Details - details <- value$clusters$sum - - if(!is.null(struct$clusters$day)) - { - if(length(struct$clusters$day) > 0) + + areaWrite <- try(sapply(allfiles, function(f) { - endClust <- cbind(struct$clusters, details) - - endClust[, c("mcYear") := NULL] - - detailWrite <- try(sapply(unique(endClust$area), function(ctry){ - #for each country prepare file - endClustctry <- endClust[area == ctry] - orderBeg <- unique(endClustctry$time) - endClustctry[,c("area") := NULL] + #prepare data for all countries + areaSpecialFile <- linkTable[Folder == "area" & Files == f & Mode == tolower(opts$mode)] + namekeep <- paste(areaSpecialFile$Name, areaSpecialFile$Stats) + namekeepprog <- paste(areaSpecialFile$Name, areaSpecialFile$progNam) + areas <- cbind(value$areas$sum, value$areas$std, value$areas$min, value$areas$max) + if(nrow(areas) > 0) + { + + areas <- areas[, .SD, .SDcols = which(names(areas)%in%opts$variables$areas)] + areas <- areas[, .SD, .SDcols = match(opts$variables$areas, names(areas))] + + nbvar <- ncol(areas) + areas <- cbind(struct$areas, areas) + ncolFix <- ncol(struct$areas) - 3 + areas[, c("mcYear", "time") := NULL] + allAreas <- unique(areas$area) - if(tolower(opts$mode) == "economy") + for(i in 1:length(opts$variables$areas)) + { + var <- opts$variables$areas[i] + dig <- areaSpecialFile[var == paste(Name,progNam )]$digits + if(length(dig)>0)areas[, c(var) := .(do.call(round, args =
list(get(var), digits = dig)))] + } + + + if(length(allAreas) > 0) { - nameBy <- c("production", "NP Cost", "NODU") - }else{ - nameBy <- c("production") + sapply(allAreas, function(areasel){ + #for each country prepare file + areastowrite <- areas[area == areasel] + areastowrite[,c("area") := NULL] + indexMin <- min(areas$timeId) + indexMax <- max(areas$timeId) + kepNam <- names(struct$areas)[!names(struct$areas)%in%c("area","mcYear","time")] + nameIndex <- ifelse(type == "weekly", "week", "index") + kepNam[which(kepNam == "timeId")] <- nameIndex + #write txt + .writeFileOut(dta = areastowrite, timestep = type, fileType = f, + ctry = areasel, opts = opts, folderType = "areas", nbvar = nbvar, + indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, + nomcair = areaSpecialFile$Name, unit = areaSpecialFile$Unit, + nomStruct = kepNam,Stats = areaSpecialFile$Stats) + + + }) } - # if("NP Cost"%in%names(endClustctry)){} - nomStruct <- names(endClustctry)[!names(endClustctry) %in% c("cluster", nameBy)] + } + }), silent = TRUE) + + .errorTest(areaWrite, verbose, "Area write") + + allfiles <- c("values") + linkWrite <- try(sapply(allfiles, function(f) + { + #prepare data for all link + linkSpecialFile <- linkTable[Folder == "link" & Files == f & Mode == tolower(opts$mode)] + namekeep <- paste(linkSpecialFile$Name, linkSpecialFile$Stats) + namekeepprog <- paste(linkSpecialFile$Name, linkSpecialFile$progNam) + links <- cbind(value$links$sum, value$links$std, value$links$min, value$links$max) + if(nrow(links) > 0) + { + - tmp_formula <- nomStruct - # tmp_formula <- gsub(" ", "_", tmp_formula) - tmp_formula <- paste0("`", tmp_formula, "`") + links <- links[, .SD, .SDcols = which(names(links)%in%opts$variables$links)] + links <- links[, .SD, .SDcols = match(opts$variables$links, names(links))] + + # + # areas <- areas[, .SD, .SDcols = which(names(areas)%in%opts$variables$links)] + # areas <- areas[, .SD, .SDcols = match(opts$variables$areas, names(areas))] + # + # - 
tmp_formula <- as.formula(paste0(paste0(tmp_formula, collapse = " + "), "~cluster")) + nbvar <- ncol(links) + links <- cbind(struct$links, links) + ncolFix <- ncol(struct$links)-3 + links[, c("mcYear", "time") := NULL] + allLink<- unique(links$link) - if(tolower(opts$mode) == "economy") + for(i in 1:length(opts$variables$links)) { - endClustctry[, c(nameBy) := list(round(`production`), - round(`NP Cost`), - round(`NODU`))] - }else{ - endClustctry[, c(nameBy) := list(round(`production`))] + var <- opts$variables$links[i] + dig <- linkSpecialFile[var == paste(Name,progNam )]$digits + if(length(dig)>0)links[, c(var) := .(do.call(round, args = list(get(var), digits = dig)))] } - endClustctry <- data.table::dcast(endClustctry, tmp_formula, - value.var = c(nameBy)) + sapply(allLink, function(linksel){ + #for each link prepare file + linkstowrite <- links[link == linksel] + linkstowrite[,c("link") := NULL] + indexMin <- min(links$timeId) + indexMax <- max(links$timeId) + kepNam <- names(struct$link)[!names(struct$link)%in%c("link","mcYear","time")] + nameIndex <- ifelse(type == "weekly", "week", "index") + kepNam[which(kepNam == "timeId")] <- nameIndex + #write txt + .writeFileOut(dta = linkstowrite, timestep = type, fileType = f, + ctry = linksel, opts = opts, folderType = "links", nbvar = nbvar, + indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, + nomcair = linkSpecialFile$Name, unit = linkSpecialFile$Unit, + nomStruct = kepNam,Stats = linkSpecialFile$Stats) + }) + } + }), silent = TRUE) + + .errorTest(linkWrite, verbose, "Link write") + + ##Details + details <- value$clusters$sum + + if(!is.null(struct$clusters$day)) + { + if(length(struct$clusters$day) > 0) + { + endClust <- cbind(struct$clusters, details) + + endClust[, c("mcYear") := NULL] - endClustctry <- endClustctry[match(orderBeg, endClustctry$time)] - endClustctry[,c("time") := NULL] - nomStruct <- nomStruct[-which(nomStruct == "time")] - nomcair <- names(endClustctry) - nomcair <-
nomcair[!nomcair%in%nomStruct] - nbvar <- length(nomcair) - unit <- rep("", length(nomcair)) - unit[grep("production",nomcair)] <- "MWh" - unit[grep("NP Cost",nomcair)] <- "NP Cost - Euro" - unit[grep("NODU",nomcair)] <- "NODU" - nomcair <- gsub("production","",nomcair) - nomcair <- gsub("NP Cost","",nomcair) - nomcair <- gsub("NODU","",nomcair) - Stats <- rep("EXP", length(unit)) - nameIndex <- ifelse(type == "weekly", "week", "index") - nomStruct[which(nomStruct == "timeId")] <- nameIndex - indexMin <- min(endClustctry$timeId) - indexMax <- max(endClustctry$timeId) - ncolFix <- length(nomStruct) - #write details txt - .writeFileOut(dta = endClustctry, timestep = type, fileType = "details", - ctry = ctry, opts = opts, folderType = "areas", nbvar = nbvar, - indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, - nomcair = nomcair, unit = unit, nomStruct = nomStruct,Stats = Stats) - }), silent = TRUE) - .errorTest(detailWrite, verbose, "Detail write") + detailWrite <- try(sapply(unique(endClust$area), function(ctry){ + #for each country prepare file + endClustctry <- endClust[area == ctry] + orderBeg <- unique(endClustctry$time) + endClustctry[,c("area") := NULL] + + if(tolower(opts$mode) == "economy") + { + nameBy <- c("production", "NP Cost", "NODU") + }else{ + nameBy <- c("production") + } + # if("NP Cost"%in%names(endClustctry)){} + nomStruct <- names(endClustctry)[!names(endClustctry) %in% c("cluster", nameBy)] + + tmp_formula <- nomStruct + # tmp_formula <- gsub(" ", "_", tmp_formula) + tmp_formula <- paste0("`", tmp_formula, "`") + + tmp_formula <- as.formula(paste0(paste0(tmp_formula, collapse = " + "), "~cluster")) + + if(tolower(opts$mode) == "economy") + { + endClustctry[, c(nameBy) := list(round(`production`), + round(`NP Cost`), + round(`NODU`))] + }else{ + endClustctry[, c(nameBy) := list(round(`production`))] + } + + endClustctry <- data.table::dcast(endClustctry, tmp_formula, + value.var = c(nameBy)) + + endClustctry <- 
endClustctry[match(orderBeg, endClustctry$time)] + endClustctry[,c("time") := NULL] + nomStruct <- nomStruct[-which(nomStruct == "time")] + nomcair <- names(endClustctry) + nomcair <- nomcair[!nomcair%in%nomStruct] + nbvar <- length(nomcair) + unit <- rep("", length(nomcair)) + unit[grep("production",nomcair)] <- "MWh" + unit[grep("NP Cost",nomcair)] <- "NP Cost - Euro" + unit[grep("NODU",nomcair)] <- "NODU" + nomcair <- gsub("production","",nomcair) + nomcair <- gsub("NP Cost","",nomcair) + nomcair <- gsub("NODU","",nomcair) + Stats <- rep("EXP", length(unit)) + nameIndex <- ifelse(type == "weekly", "week", "index") + nomStruct[which(nomStruct == "timeId")] <- nameIndex + indexMin <- min(endClustctry$timeId) + indexMax <- max(endClustctry$timeId) + ncolFix <- length(nomStruct) + #write details txt + .writeFileOut(dta = endClustctry, timestep = type, fileType = "details", + ctry = ctry, opts = opts, folderType = "areas", nbvar = nbvar, + indexMin = indexMin, indexMax = indexMax, ncolFix = ncolFix, + nomcair = nomcair, unit = unit, nomStruct = nomStruct,Stats = Stats) + }), silent = TRUE) + .errorTest(detailWrite, verbose, "Detail write") + } } } } - } - }) + }) + ) .addMessage(verbose, paste0("------- End Mc-all : ", type, " -------")) .formatOutput( lapply(value, function(X)(Reduce(cbind, X))), struct) @@ -2159,8 +2147,7 @@ aggregateResult_old <- function(opts, verbose = 1, #' @importFrom pbapply pblapply pboptions #' @importFrom doParallel registerDoParallel #' -#' -#' @rdname aggregatate_mc_all_old +#' @noRd #' parAggregateMCall_old <- function(opts, nbcl = 8, diff --git a/R/giveSize.R b/R/giveSize.R index 4b941253..1318511e 100644 --- a/R/giveSize.R +++ b/R/giveSize.R @@ -8,6 +8,8 @@ #' #' @param x \code{numeric} RAM limit in Go #' +#' @return `list` (returned by [options()]) +#' #' @examples #' \dontrun{ #' #Set maximum ram to used to 50 Go diff --git a/R/h5_antaresReadH5.R b/R/h5_antaresReadH5.R deleted file mode 100644 index 3624cab1..00000000 --- 
a/R/h5_antaresReadH5.R +++ /dev/null @@ -1,980 +0,0 @@ -#' Read data -#' -#' @param path {character} path of h5file to load -#' @param areas see \link[antaresRead]{readAntares} -#' @param links see \link[antaresRead]{readAntares} -#' @param clusters see \link[antaresRead]{readAntares} -#' @param districts see \link[antaresRead]{readAntares} -#' @param mcYears see \link[antaresRead]{readAntares} -#' @param timeStep see \link[antaresRead]{readAntares} -#' @param select see \link[antaresRead]{readAntares} -#' @param showProgress see \link[antaresRead]{readAntares} -#' @param simplify see \link[antaresRead]{readAntares} -#' @param misc see \link[antaresRead]{readAntares} -#' @param thermalAvailabilities see \link[antaresRead]{readAntares} -#' @param hydroStorage see \link[antaresRead]{readAntares} -#' @param hydroStorageMaxPower see \link[antaresRead]{readAntares} -#' @param reserve see \link[antaresRead]{readAntares} -#' @param linkCapacity see \link[antaresRead]{readAntares} -#' @param mustRun see \link[antaresRead]{readAntares} -#' @param thermalModulation see \link[antaresRead]{readAntares} -#' @param perf \code{boolean}, eval performance during developpement time, to remove -#' -#' @import parallel plyr -#' -#' @keywords internal -#' @export -# Need to be export for antaresViz -.h5ReadAntares <- function(path, areas = NULL, links = NULL, clusters = NULL, - districts = NULL, mcYears = NULL, - misc = FALSE, thermalAvailabilities = FALSE, - hydroStorage = FALSE, hydroStorageMaxPower = FALSE, reserve = FALSE, - linkCapacity = FALSE, mustRun = FALSE, thermalModulation = FALSE, - timeStep = "hourly", select = NULL, showProgress = TRUE, - simplify = TRUE, perf = FALSE){ - - .requireRhdf5_Antares() - - if(!is.list(select)) - { - if(is.null(select)) - { - if(!is.null(areas)) - { - if(areas[1] == "all"){ - select <- c("all", select) - }} - - if(!is.null(links)) - { - if(links[1] == "all"){ - select <- c("all", select) - }} - - if(!is.null(clusters)) - { - if(clusters[1] == 
"all"){ - select <- c("all", select) - }} - if(!is.null(districts)) - { - if(districts[1] == "all"){ - select <- c("all", select) - }} - }else{ - if("allAreas" %in% select){ - select <- c(select, pkgEnvAntareasH5$varAreas) - } - - if("allLinks" %in% select){ - select <- c(select, pkgEnvAntareasH5$varLinks) - } - - if("allDistricts" %in% select){ - select <- c(select, pkgEnvAntareasH5$varDistricts) - } - } - } - - - if(!file.exists(path)){ - stop(paste0("File ", path, " not exist.")) - } - - if(perf){ - Beg <- Sys.time() - } - - ctrlselectlist <- FALSE - if(!is.list(select)){ - ctrlselectlist <- TRUE - } - - - if(ctrlselectlist){ - if(misc){ - select <- c(select, "misc") - } - if(thermalAvailabilities){ - select <- c(select, "thermalAvailabilities") - } - if(hydroStorage){ - select <- c(select, "hydroStorage") - } - if(hydroStorageMaxPower){ - select <- c(select, "hydroStorageMaxPower") - } - if(reserve){ - select <- c(select, "reserve") - } - if(linkCapacity){ - select <- c(select, "linkCapacity") - } - if(mustRun){ - select <- c(select, "mustRun") - } - if(thermalModulation){ - select <- c(select, "thermalModulation") - } - } - - - if(is.null(select)){ - select <- "all" - } - reqInfos <- .giveInfoRequest(select = select, - areas = areas, - links = links, - clusters = clusters, - clustersRes = NULL, - districts = districts, - mcYears = mcYears) - - reqInfos$clustersRes <- NULL - - select <- reqInfos$select - if(ctrlselectlist){ - select$clusters <- c(pkgEnvAntareasH5$varClusters, select$areas) - } - - - unselect <- reqInfos$unselect - - - - allCompute <- pkgEnv$allCompute - computeAdd <- unlist(select)[unlist(select) %in% allCompute] - computeAdd <- unique(computeAdd) - if(length(computeAdd) > 0){ - for(i in computeAdd) - { - assign(i, TRUE) - } - } - - for(i in allCompute){ - if(get(i)){ - select <- .addColumns(select, i) - } - } - - select <- sapply(names(select), function(X){ - as.vector(unlist(sapply(select[[X]], function(Y){ - 
if(is.null(pkgEnvAntareasH5$varAliasCreated[[Y]][[X]])){ - Y - }else{ - pkgEnvAntareasH5$varAliasCreated[[Y]][[X]] - } - }))) - }, simplify = FALSE) - - ctrl <- FALSE - if(!is.null(select$areas)) - { - if(select$areas[1] == "all"){ - ctrl <- TRUE - } - } - if(is.null(select$areas) | ctrl) - { - select$areas <- c(pkgEnvAntareasH5$varAreas,select$areas) - } - - ctrl <- FALSE - if(!is.null(select$links)) - { - if(select$links[1] == "all"){ - ctrl <- TRUE - } - } - - if(is.null(select$links)| ctrl) - { - select$links <- c(pkgEnvAntareasH5$varLinks, select$links) - } - - ctrl <- FALSE - if(!is.null(select$districts)) - { - if(select$districts[1] == "all"){ - ctrl <- TRUE - } - } - - if(is.null(select$districts) | ctrl) - { - select$districts <- c(pkgEnvAntareasH5$varDistricts, select$districts ) - } - ctrl <- FALSE - if(!is.null(select$clusters)) - { - if(select$clusters[1] == "all"){ - ctrl <- TRUE - } - } - - if(is.null(select$clusters) | ctrl) - { - select$clusters <- c(pkgEnvAntareasH5$varClusters, select$clusters ) - } - - - for(i in names(select)){ - if(length(which(! select[[i]] %in% unselect[[i]])) > 0) - { - select[[i]] <- select[[i]][which(! 
select[[i]] %in% unselect[[i]])] - } - } - - for(i in 1:length(select)){ - if(length(select[[i]]) > 1){ - if(length(which(select[[i]] == "all")) > 0){ - select[[i]] <- select[[i]][-c(which(select[[i]] == "all"))] - } - } - } - - ##End give select - - areas <- reqInfos$areas - links <- reqInfos$links - clusters <- reqInfos$clusters - districts <- reqInfos$districts - mcYears <- reqInfos$mcYears - synthesis <- reqInfos$synthesis - - synthesis <- is.null(mcYears) - - GP <- timeStep - - ##Open connection to h5 file - fid <- rhdf5::H5Fopen(path) - - #Load attibutes - attrib <- .loadAttributes(fid, timeStep) - - if(is.null(mcYears)){ - mcType <- "mcAll" - mcYears <- "mcAll" - }else{ - mcType <- "mcInd" - } - - ##Load areas - listOut <- list() - areas <- .loadAreas(areas = areas, - fid = fid, - select = select$areas, - mcYears = mcYears, - GP = GP, - mcType = mcType, - synthesis = synthesis, - simplify = simplify, - attrib = attrib) - # if("virtualNodes" %in% names(attrib)){ - # attr(areas, "virtualNodes") <- attrib$virtualNodes - # } - # - if(!is.null(areas)){ - listOut$areas <- areas - rm(areas) - } - - links <- .loadLinks(links = links, - fid = fid, - select = select$links, - mcYears = mcYears, - GP = GP, - mcType = mcType, - synthesis = synthesis, - simplify = simplify, - attrib = attrib) - - - if(!is.null(links)){ - listOut$links <- links - rm(links) - } - - districts <- .loadDistricts(districts = districts, - fid = fid, - select = select$districts, - mcYears = mcYears, - GP = GP, - mcType = mcType, - synthesis = synthesis, - simplify = simplify, - attrib = attrib) - - if(!is.null(districts)){ - listOut$districts <- districts - rm(districts) - } - clusters <- .loadClusters(clusters = clusters, - fid = fid, - select = select$clusters, - mcYears = mcYears, - GP = GP, - mcType = mcType, - synthesis = synthesis, - simplify = simplify, - attrib = attrib) - - if(!is.null(clusters)){ - listOut$clusters <- clusters - rm(clusters) - } - - if(length(listOut) == 1){ - - 
if(perf){ - TotalTime <-Sys.time() - Beg - cat(paste0("Time for loading : ", round(TotalTime, 3), "\n")) - objectS <- utils::object.size(listOut)/1024^2 - cat(paste0("Size of object loaded : ", round(objectS, 1), "Mo\n")) - cat(paste0("Mo/s loaded : ",round(as.numeric(objectS)/ as.numeric(TotalTime),1), "\n")) - dtaloded <- sum(unlist(lapply(listOut, function(X)prod(dim(X))))) - cat(paste0("Data loaded/s : ", round(dtaloded/ as.numeric(TotalTime) / 1000000, 2), " Millions", "\n")) - } - - listOut[[1]] - }else{ - listOut <- .addClassAndAttributes(listOut, synthesis, timeStep, - attrib$opts, simplify) - if("virtualNodes" %in% names(attrib)){ - attr(listOut, "virtualNodes") <- attrib$virtualNodes - } - - if(perf){ - TotalTime <-Sys.time() - Beg - cat(paste0("Time for loading : ", round(TotalTime, 3), "\n")) - objectS <- utils::object.size(listOut)/1024^2 - cat(paste0("Size of object loaded : ", round(objectS, 1), "Mo\n")) - cat(paste0("Mo/s loaded : ",round(as.numeric(objectS)/ as.numeric(TotalTime),1), "\n")) - dtaloded <- sum(unlist(lapply(listOut, function(X)prod(dim(X))))) - cat(paste0("Data loaded/s : ", round(dtaloded/ as.numeric(TotalTime) / 1000000, 2), " Millions", "\n")) - } - - listOut - } -} - - - - -#' Transform array to data.table -#' -#' @param array \code{array}, array to transform -#' -#' @return data.table -#' -#' @noRd -.arrayToDataTable <- function(array) -{ - dim <- 2 - ecraseDim <- dim(array)[dim] - dimS <- 1:length(dim(array)) - dimNot <- dimS[-dim] - prodDim <- prod(dim(array)[dimNot]) - arrayL <- list() - for(i in 1:dim(array)[2]){ - arrayL[[i]] <- unlist(array[,i,,]) - } - setattr(arrayL, "names", paste("V", 1:ecraseDim, sep = "")) - setattr(arrayL, "row.names", .set_row_names(prodDim)) - setattr(arrayL, "class", c("data.table", "data.frame")) - alloc.col(arrayL) -} - -#' Load antares simulation data -#' -#' @param fid \code{H5IdComponent} id of h5 file open which \link{rhdf5::H5Fopen} -#' @param index \code{list} index of data to load -#' 
@param GP \code{character} name of group to load -#' -#' @noRd -.optimH5Read <- function(fid, index = NULL, GP){ - - .requireRhdf5_Antares() - - did <- rhdf5::H5Dopen(fid, GP) - if(is.null(index)){ - return(rhdf5::H5Dread(did)) - }else{ - - h5spaceFile <- rhdf5::H5Dget_space(did) - maxSize <- rev(rhdf5::H5Sget_simple_extent_dims(h5spaceFile)$size) - - len <- length(maxSize) - K <- sapply(len:1, function(X){ - if(is.null(index[[len-X + 1]])){ - seq_len(maxSize[X]) - }else{index[[len-X + 1]]}} - ) - size <- unlist(lapply(K,length)) - h5spaceMem = rhdf5::H5Screate_simple(size) - W <- rhdf5::H5Screate_simple(rhdf5::H5Sselect_index(h5spaceFile, as.list(K)))@ID - - rhdf5::H5Dread(did, h5spaceFile = h5spaceFile, - h5spaceMem = h5spaceMem) - } - -} - -#' Give request stucture -#' -#' @param type \code{character} type of request, must be area, link, cluster or district -#' @param selectedRow \code{character} selectoin on raw (country, link, cluster ....) -#' @param selectedCol \code{character} columns to select -#' @param fid \code{H5IdComponent} id of h5 file open which \link[rhdf5]{H5Fopen} -#' @param GP \code{character} name of data.frame to load -#' @param mcType \code{character}, must be mcInd or mcAll -#' @param mcYears \code{numeric or character} mcYears to laod -#' -#' @noRd -.makeStructure <- function(type = "area", selectedRow, - selectedCol, fid, GP, mcType, mcYears){ - if(is.null(selectedCol)){ - selectedCol <- "all" - } - - typeS <- paste0(type, "s") - struct <- .getstructure(fid, paste0(GP, "/", typeS, "/", mcType, "/structure")) - - compname <- NULL - if(type == "cluster"){ - splitClust <- strsplit(struct[[type]], "/") - clusterClean <- unlist(lapply(splitClust, function(X){X[1]})) - struct[[type]] <- clusterClean - compname <- unlist(lapply(splitClust, function(X){X[2]})) - - } - - if(selectedRow[1] == "all"){ - indexType <- NULL - Name <- struct[[type]] - }else{ - indexType <- which(struct[[type]] %in% selectedRow) - Name <- struct[[type]][indexType] - 
if(type == "cluster"){ - compname <- compname[indexType] - } - } - if(selectedCol[1] == "all"){ - indexVar <- NULL - varKeep <- struct$variable - }else{ - indexVar <- which(struct$variable %in% selectedCol) - indexVar <- unique(c(1, indexVar)) - varKeep <- struct$variable[indexVar] - } - if(mcYears[1] == "all"){ - indexMC <- NULL - mcyLoad <- struct$mcYear - }else{ - if(mcYears[1] == "mcAll"){ - indexMC <- NULL - mcyLoad <- struct$mcYear - }else{ - indexMC <- which(struct$mcYear %in% mcYears) - mcyLoad <- struct$mcYear[indexMC] - } - } - return(list(Name = Name, varKeep = varKeep, index = list(NULL, indexVar, indexType, indexMC), - mcyLoad = mcyLoad, compname = compname)) -} - -#' Load areas -#' -#' @param areas \code{character}, area(s) to load -#' @param fid \code{H5IdComponent} id of h5 file open which \link[rhdf5]{H5Fopen} -#' @param select \code{character} columns to select -#' @param mcYears \code{numeric or character} mcYears to load -#' @param GP \code{character} name of data.frame to load -#' @param mcType \code{character}, must be mcInd or mcAll -#' @param synthesis \code{boolean} -#' @param simplify \code{boolean} -#' @param attrib \code{list} -#' -#' @noRd -#' -#' @importFrom stats setNames -.loadAreas <- function(areas, - fid, - select, - mcYears, - GP, - mcType, - synthesis, - simplify, - attrib){ - - .requireRhdf5_Antares() - - if(!is.null(areas)){ - - if(rhdf5::H5Lexists(fid, paste0(GP, "/areas/", mcType, "/structure"))) - { - - struct <- .makeStructure(type = "area", - selectedRow = areas, - selectedCol = select, - fid = fid, - GP = GP, - mcType = mcType, - mcYears = mcYears) - - - if(all(unlist(lapply(struct$index, is.null)))){ - areas <- .optimH5Read(fid = fid, - GP = paste0(GP, "/areas/", mcType, "/data")) - }else{ - areas <- .optimH5Read(fid = fid, - index = struct$index, - GP = paste0(GP, "/areas/", mcType, "/data")) - - } - - - #Format array - areas <- .formatArray(data = areas, struct = struct, nameColumns = "area", mcType = mcType) - - #Add 
time - tim <- getAllDateInfoFromDate(fid, GP) - areas[, c(names(tim)) := lapply( - X = setNames(tim, names(tim)), - FUN = rep, length.out = .N - )] - - .addClassAndAttributes(areas, - synthesis, - attrib$timeStep, - attrib$opts, - simplify = simplify, type = "areas") - areas - }else{ - message("No data corresponding to your areas query.") - return(NULL) - } - }else{NULL}} - -#' Load links -#' -#' @param links \code{character}, link(s) to load -#' @param fid \code{H5IdComponent} id of h5 file open which \link[rhdf5]{H5Fopen} -#' @param select \code{character} columns to select -#' @param mcYears \code{numeric or character} mcYears to load -#' @param GP \code{character} name of data.frame to load -#' @param mcType \code{character}, must be mcInd or mcAll -#' @param synthesis \code{boolean} -#' @param simplify \code{boolean} -#' @param attrib \code{list} -#' -#' @noRd -#' -#' @importFrom stats setNames -.loadLinks <- function(links, - fid, - select, - mcYears, - GP, - mcType, - synthesis, - simplify, - attrib){ - - .requireRhdf5_Antares() - - ##Load links - if(!is.null(links)){ - - if(rhdf5::H5Lexists(fid, paste0(GP, "/links/", mcType, "/structure"))) - { - - - struct <- .makeStructure(type = "link", - selectedRow = links, - selectedCol = select, - fid = fid, - GP = GP, - mcType = mcType, - mcYears = mcYears) - - - if(all(unlist(lapply(struct$index, is.null)))){ - links <- .optimH5Read(fid = fid, - GP = paste0(GP, "/links/", mcType, "/data")) - }else{ - links <- .optimH5Read(fid = fid, - index = struct$index, - GP = paste0(GP, "/links/", mcType, "/data")) - - } - - #Format array - links <- .formatArray(data = links, struct = struct, nameColumns = "link", mcType = mcType) - - #Add time - tim <- getAllDateInfoFromDate(fid, GP) - links[, c(names(tim)) := lapply( - X = setNames(tim, names(tim)), - FUN = rep, length.out = .N - )] - - .addClassAndAttributes(links, - synthesis, - attrib$timeStep, - attrib$opts, - simplify = simplify, type = "links") - links - }else{ - 
message("No data corresponding to your links query.") - return(NULL) - } - }else{NULL} -} - - - -#' Load districts -#' -#' @param districts \code{character}, district(s) to load -#' @param fid \code{H5IdComponent} id of h5 file open which \link[rhdf5]{H5Fopen} -#' @param select \code{character} columns to select -#' @param mcYears \code{numeric or character} mcYears to load -#' @param GP \code{character} name of data.frame to load -#' @param mcType \code{character}, must be mcInd or mcAll -#' @param synthesis \code{boolean} -#' @param simplify \code{boolean} -#' @param attrib \code{list} -#' -#' @noRd -#' -#' @importFrom stats setNames -.loadDistricts <- function(districts, - fid, - select, - mcYears, - GP, - mcType, - synthesis, - simplify, - attrib){ - - .requireRhdf5_Antares() - - if(!is.null(districts)){ - - if(rhdf5::H5Lexists(fid, paste0(GP, "/districts/", mcType, "/structure"))) - { - - struct <- .makeStructure(type = "district", - selectedRow = districts, - selectedCol = select, - fid = fid, - GP = GP, - mcType = mcType, - mcYears = mcYears) - - - if(all(unlist(lapply(struct$index, is.null)))){ - districts <- .optimH5Read(fid = fid, - GP = paste0(GP, "/districts/", mcType, "/data")) - }else{ - districts <- .optimH5Read(fid = fid, - index = struct$index, - GP = paste0(GP, "/districts/", mcType, "/data")) - - } - - - districts <- .formatArray(data = districts, struct = struct, nameColumns = "district", mcType = mcType) - - tim <- getAllDateInfoFromDate(fid, GP) - - #Add time - districts[, c(names(tim)) := lapply( - X = setNames(tim, names(tim)), - FUN = rep, length.out = .N - )] - - .addClassAndAttributes(districts, - synthesis, - attrib$timeStep, - attrib$opts, - simplify = simplify, type = "districts") - districts - }else{ - message("No data corresponding to your districts query.") - return(NULL) - }}else{NULL} -} - - - - -#' Load clusters -#' -#' @param clusters \code{character}, cluster(s) to load -#' @param fid \code{H5IdComponent} id of h5 file open 
with \link[rhdf5]{H5Fopen} -#' @param select \code{character} columns to select -#' @param mcYears \code{numeric or character} mcYears to load -#' @param GP \code{character} name of data.frame to load -#' @param mcType \code{character}, must be mcInd or mcAll -#' @param synthesis \code{boolean} -#' @param simplify \code{boolean} -#' @param attrib \code{list} -#' -#' @noRd -#' -#' @importFrom stats setNames -.loadClusters <- function(clusters, - fid, - select, - mcYears, - GP, - mcType, - synthesis, - simplify, - attrib){ - - .requireRhdf5_Antares() - - if(!is.null(clusters)){ - - if(rhdf5::H5Lexists(fid, paste0(GP, "/clusters/", mcType, "/structure"))) - { - - - struct <- .makeStructure(type = "cluster", - selectedRow = clusters, - selectedCol = select, - fid = fid, - GP = GP, - mcType = mcType, - mcYears = mcYears) - - if(all(unlist(lapply(struct$index, is.null)))){ - clusters <- .optimH5Read(fid = fid, - GP = paste0(GP, "/clusters/", mcType, "/data")) - }else{ - clusters <- .optimH5Read(fid = fid, - index = struct$index, - GP = paste0(GP, "/clusters/", mcType, "/data")) - - } - - - dimclusters <- dim(clusters) - clusters <- .formatArray(data = clusters, struct = struct, nameColumns = "area", mcType = mcType) - - compname <- as.factor(struct$compname) - clusters[, "cluster" := rep(rep(compname, each = dimclusters[1]), dimclusters[4])] - tim <- getAllDateInfoFromDate(fid, GP) - - #Add time - clusters[, c(names(tim)) := lapply( - X = setNames(tim, names(tim)), - FUN = rep, length.out = .N - )] - - .addClassAndAttributes(clusters, - synthesis, - attrib$timeStep, - attrib$opts, - simplify = simplify, type = "clusters") - clusters - }else{ - message("No data corresponding to your clusters query.") - return(NULL) - }}else{NULL} -} - -#' Add structure information to data -#' -#' @param data \code{data.table} loaded data -#' @param struct \code{list} -#' @param nameColumns \code{character} column names -#' @param mcType \code{character}, must be mcInd or mcAll -#' -#' 
@noRd -.formatArray <- function(data, struct, nameColumns, mcType){ - dimData <- dim(data) - data <- .arrayToDataTable(data) - nameS <- struct$varKeep - names(data) <- nameS - dataName <- as.factor(struct$Name) - data[, c(nameColumns[1]):= rep(rep(dataName, each = dimData[1]), dimData[4])] - if(mcType == "mcInd") - { - data[, "mcYear" := rep(struct$mcyLoad, each = dimData[1] * dimData[3])] - } - - integerVariableS <- integerVariable[integerVariable%in%names(data)] - if("timeId" %in% names(data)){ - integerVariableS <- c("timeId", integerVariableS) - } - - if(length(integerVariableS)){ - ordervar <- names(data)[ match(integerVariableS, names(data))] - minmax <- data[,lapply(.SD, max), .SDcols = ordervar]>2*10^9 - ordervar <- colnames(minmax)[!minmax] - if(length(ordervar)>0) - { - data[,c(ordervar) := lapply(.SD, as.integer), .SDcols = ordervar] - } - } - data -} - -#' @param select select column(s) -#' @param var var to add -#' -#' @noRd -.addColumns <- function(select, var){ - if(is.null(select)){ - return(var) - } - if(is.list(select)){ - return(lapply(select, function(X){c(X, var)})) - } - c(var, select) -} - - -.loadAttributes <- function(fid, timeStep){ - - .requireRhdf5_Antares() - - if(rhdf5::H5Lexists(fid, paste0(timeStep, "/attrib"))) - { - - did <- rhdf5::H5Dopen(fid, paste0(timeStep, "/attrib")) - attrib <- unserialize(charToRaw(rhdf5::H5Dread(did))) - rhdf5::H5Dclose(did) - - if(!is.null(attrib$opts$linksDef)){ - attrib$opts$linksDef <- data.table(attrib$opts$linksDef) - } - if(!is.null(attrib$opts$districtsDef)){ - attrib$opts$districtsDef <- data.table(attrib$opts$districtsDef) - } - }else{ - attrib <- NULL - } - attrib -} - - -.getstructure <- function(fid, strgp){ - .requireRhdf5_Antares() - gid <- rhdf5::H5Gopen(fid, strgp) - data <- rhdf5::h5dump(gid) - rhdf5::H5Gclose(gid) - if(length(which(data$reCalcVar!="")) > 0) - { - data$reCalcVar <- data$reCalcVar[which(data$reCalcVar!="")] - data$variable <- c(data$variable, data$reCalcVar) - 
data$reCalcVar <- NULL - } - data -} - - -#' -#' #' Use to transform inputs arguments to be passable to reading function -#' #' -#' #' -#' #' -#' #' @param select Character vector containing the name of the columns to import. See \link{readAntares} for further information. -#' #' @param areas Vector containing the names of the areas to import. See \link{readAntares} for further information. -#' #' @param links Vector containing the names of the links to import. See \link{readAntares} for further information. -#' #' @param clusters Vector containing the names of the clusters to import. See \link{readAntares} for further information. -#' #' @param districts Vector containing the names of the districts to import. See \link{readAntares} for further information. -#' #' @param mcYears Index of the Monte-Carlo years to import. See \link{readAntares} for further information. -#' #' -#' #' @return \code{list} -#' #' \itemize{ -#' #' \item select -#' #' \item areas -#' #' \item links -#' #' \item clusters -#' #' \item districts -#' #' \item mcYears -#' #' \item synthesis -#' #' \item computeAdd -#' #' } -#' #' -#' #' @noRd -#' .giveInfoRequest <- function(select, -#' areas, -#' links, -#' clusters, -#' districts, -#' mcYears){ -#' -#' if (!is.list(select)) select <- list(areas = select, links = select, districts = select) -#' ##Get unselect columns (by - operator) -#' unselect <- lapply(select, function(X){ -#' minusColumns <- grep("^-", X) -#' if(length(minusColumns)>0) -#' { -#' uns <- X[minusColumns] -#' gsub("^-", "", uns) -#' }else{ -#' NULL -#' } -#' }) -#' -#' ##Remove unselect columns -#' select <- lapply(select, function(X){ -#' minusColumns <- grep("^-", X) -#' if(length(minusColumns) > 0){ -#' X[-c(minusColumns)] -#' }else{ -#' X -#' } -#' }) -#' -#' -#' # Aliases for groups of variables -#' select <- llply(select, function(x) { -#' for (alias in names(pkgEnv$varAliases)) { -#' if (tolower(alias) %in% tolower(x)) x <- append(x, pkgEnv$varAliases[[alias]]$select) 
-#' } -#' x -#' }) -#' -#' allCompute <- pkgEnv$allCompute -#' computeAdd <- allCompute[allCompute%in%unlist(select)] -#' -#' if ("areas" %in% unlist(select) & is.null(areas)) areas <- "all" -#' if ("links" %in% unlist(select) & is.null(links)) { -#' if (!is.null(areas)) links <- getLinks(getAreas(areas, regexpSelect = FALSE)) -#' else links <- "all" -#' } -#' if ("clusters" %in% unlist(select) & is.null(clusters)) { -#' if (!is.null(areas)) clusters <- areas -#' else clusters <- "all" -#' } -#' if ("mcYears" %in% unlist(select) & is.null(mcYears)) mcYears <- "all" -#' -#' # If all arguments are NULL, import all areas -#' if (is.null(areas) & is.null(links) & is.null(clusters) & is.null(districts)) { -#' areas <- "all" -#' } -#' -#' # Check arguments validity. The function .checkArgs is defined below -#' synthesis <- is.null(mcYears) -#' -#' return(list(select = select, -#' areas = areas, -#' links = links, -#' clusters = clusters, -#' districts = districts, -#' mcYears = mcYears, -#' synthesis = synthesis, -#' computeAdd = computeAdd, -#' unselect = unselect)) -#' } - diff --git a/R/h5_readInputs.R b/R/h5_readInputs.R deleted file mode 100644 index 4d0eed9b..00000000 --- a/R/h5_readInputs.R +++ /dev/null @@ -1,54 +0,0 @@ -#' Import binding constraints -#' -#' @description -#' This function imports the binding constraints of an Antares project from an h5 file; see also \link[antaresRead]{readBindingConstraints}. -#' -#' @param opts \code{list} of simulation parameters returned by the function \link[antaresRead]{setSimulationPath}. -#' -#' @noRd -h5ReadBindingConstraints <- function(opts){ - .requireRhdf5_Antares() - fid <- rhdf5::H5Fopen(opts$h5path) - timestep <- .getTimeStep(fid)[1] - out <- unserialize(charToRaw(rhdf5::h5read(fid, paste0(timestep , "/inputs/buildingcte")))) - rhdf5::h5closeAll() - out -} - -#' Import areas layout -#' -#' @description -#' This function imports the position of the areas from an h5 file.
It may be useful for plotting the -#' network see also \link[antaresRead]{readLayout}. -#' -#' Be aware that the layout is read in the input files so they may have changed since a simulation has been run. -#' @param opts \code{list} of simulation parameters returned by the function \link[antaresRead]{setSimulationPath}. -#' -#' @noRd -h5ReadLayout <- function(opts){ - .requireRhdf5_Antares() - fid <- rhdf5::H5Fopen(opts$h5path) - timestep <- .getTimeStep(fid)[1] - out <- unserialize(charToRaw(rhdf5::h5read(fid, paste0(timestep , "/inputs/layout")))) - rhdf5::h5closeAll() - out -} - -#' Import cluster description -#' -#' @description -#' This function imports the characteristics of each cluster from an h5 file see also \link[antaresRead]{readClusterDesc}. -#' -#' Be aware that clusters descriptions are read in the input files so they may have changed since a simulation has been run. -#' @param opts \code{list} of simulation parameters returned by the function \link[antaresRead]{setSimulationPath}. 
-#' -#' @noRd -#' -h5ReadClusterDesc <- function(opts){ - .requireRhdf5_Antares() - fid <- rhdf5::H5Fopen(opts$h5path) - timestep <- .getTimeStep(fid)[1] - out <- unserialize(charToRaw(rhdf5::h5read(fid, paste0(timestep , "/inputs/cldesc")))) - rhdf5::h5closeAll() - out -} diff --git a/R/h5_setSimulationPathH5.R b/R/h5_setSimulationPathH5.R deleted file mode 100644 index a83b3ce1..00000000 --- a/R/h5_setSimulationPathH5.R +++ /dev/null @@ -1,77 +0,0 @@ -#' Set simulation path for h5 file -#' -#' @param path \code{character} path of a .h5 file or a directory with .h5 file(s) -#' @param simulation \code{character} simulation number or name -#' -#' @noRd -#' -setSimulationPathH5 <- function(path, simulation = NULL){ - if(dir.exists(path)){ - allFiles <- list.files(path) - avaliableFile <- allFiles[grep(".h5$", allFiles)] - - if(length(avaliableFile) == 0){ - stop("No .h5 file available in your directory") - } - if (!is.null(simulation)) { - if(simulation == 0) simulation <- NULL - } - if (is.null(simulation)) { - if (length(avaliableFile) == 1) { # Case 2 - simulation <- 1 - } else { # Case 3 - cat("Please, choose a simulation\n") - for (i in 1:length(avaliableFile)) { - cat(sprintf(" %s - %s\n", i, avaliableFile[i])) - } - simulation <- utils::type.convert(scan(what = character(), nmax = 1), as.is = TRUE) - } - } - - if(simulation == -1){ - simulation <- length(avaliableFile) - } - - if(is.character(simulation)){ - if(length(which(simulation == avaliableFile)) == 0){ - stop("Simulation not in your study") - } - } - - if(is.numeric(simulation)){ - simulation <- avaliableFile[simulation] - } - - path <- paste0(path, "/", simulation) - } else if(!file.exists(path) & !grepl(".h5$", path)){ - stop("Invalid path argument. File not found. Must be a .h5 file or a directory with .h5 file(s)") - } else if(file.exists(path) & !grepl(".h5$", path)){ - stop("Invalid path argument. 
Must be a .h5 file or a directory with .h5 file(s)") - } - - attributes <- .getOptionsH5(path) - options(antares=attributes) - attributes -} - -#' Get H5 options -#' -#' @param path \code{character} path of h5 file -#' -#' @keywords internal -#' @export -# Needs to be exported for antaresViz -.getOptionsH5 <- function(path){ - - .requireRhdf5_Antares() - fid <- rhdf5::H5Fopen(path) - attributes <- .loadAttributes(fid, "hourly") - attributes <- attributes$opts - attributes$h5 <- TRUE - attributes$h5path <- normalizePath(path) - attributes$studyPath <- NULL - attributes$simPath <- NULL - attributes$inputPath <- NULL - attributes$simDataPath <- NULL - attributes -} diff --git a/R/h5_timeManagement.R b/R/h5_timeManagement.R deleted file mode 100644 index cecb39ef..00000000 --- a/R/h5_timeManagement.R +++ /dev/null @@ -1,133 +0,0 @@ -#' Get timeId from antares study -#' -#' @param data \code{antaresDataList} see \link{readAntares} -#' @param timeStep \code{character} timeStep -#' -#' @noRd -getTime <- function(data, timeStep){ - time <- unique(data[[1]]$time) - current_locale <- Sys.getlocale(category = "LC_TIME") - Sys.setlocale("LC_TIME", "C") - if(timeStep %in% c("weekly", "monthly")){ - dt_date <- data.table(time = as.character(time)) - - }else if(timeStep == "annual"){ - dt_date <- data.table(time) - - }else{ - dt_date <- data.table(IDateTime(time, tz = "UTC")) - - } - Sys.setlocale("LC_TIME", current_locale) - dt_date -} - -#' Read time and generate columns that can be calculated from it -#' -#' @param fid \code{H5IdComponent} id of h5 file opened with \link[rhdf5]{H5Fopen} -#' @param group \code{group} group where time is stored -#' -#' @noRd -#' -getAllDateInfoFromDate <- function(fid, group){ - # assign classes - - .requireRhdf5_Antares() - - groupT <- paste0(group, "/time") - did <- rhdf5::H5Dopen(fid, groupT) - datetime_data <- data.table(rhdf5::H5Dread(did)) - - - rhdf5::H5Dclose(did) - if(group %in% c("weekly", "annual")){ - return(datetime_data) - }
- - current_locale <- Sys.getlocale(category = "LC_TIME") - # switch to the English locale for time handling (month extraction) - Sys.setlocale("LC_TIME", "C") - if(group == "monthly"){ - timCop <- datetime_data$time - timCop <- paste0(timCop, "-01") - timCop <- as.Date(timCop) - class(timCop) <- c("IDate", "Date") - datetime_data$month <- as.factor(toupper(format(timCop, format = "%b"))) - return(datetime_data) - } - idate <- NULL - itime <- NULL - class(datetime_data$idate) <- c("IDate", "Date") - class(datetime_data$itime) <- c("ITime") - # the machine's current locale was saved above - uniqueDate <- unique(datetime_data[,.SD, .SDcols = "idate"]) - uniqueTime <- unique(datetime_data[,.SD, .SDcols = "itime"]) - - - # Compute day & month - uniqueDate[,c("day", "month") := list( - mday(idate), - as.factor(toupper(format(idate, format = "%b"))))] - - - - # computations - datetime_data[, c("time") := list( - as.POSIXct(idate, time = itime, tz = "UTC") - )] - - if(group == "daily"){ - datetime_data$time <- as.Date(datetime_data$time) - } - - mthList <- c("APR", - "AUG", - "DEC", - "FEB", - "JAN", - "JUL", - "JUN", - "MAR", - "MAY", - "NOV", - "OCT", - "SEP") - - toAdd <- mthList[!mthList %in% levels(uniqueDate$month)] - if(length(toAdd)>0) - { - levels(uniqueDate$month) <- c(levels(uniqueDate$month), toAdd) - } - - if(group == "hourly") - { - uniqueTime[, c("hour") := as.factor(substring(as.character(itime), 1, 5))] - - } - uniqueDate$month <- factor(uniqueDate$month , levels(uniqueDate$month)[order(levels(uniqueDate$month))]) - - #Merge - datetime_data <- merge(datetime_data, uniqueDate) - datetime_data <- merge(datetime_data, uniqueTime, by = "itime") - datetime_data[, "idate" := NULL] - datetime_data[, "itime" := NULL] - setkey(datetime_data, "time") - Sys.setlocale("LC_TIME", current_locale) - datetime_data -} - - -#' Write time in h5 file -#' -#' @param data \code{antaresDataList} see \link{readAntares} -#' @param path \code{character} path of h5 file -#' @param group 
\code{group} group where time are stored -#' -#' @noRd -writeTime <- function(data, path, group){ - .requireRhdf5_Antares() - time <- getTime(data, group) - group <- paste0(group, "/time") - rhdf5::h5write(time, path, group) - rhdf5::h5closeAll() -} diff --git a/R/h5_transformData.R b/R/h5_transformData.R deleted file mode 100644 index 53cabc86..00000000 --- a/R/h5_transformData.R +++ /dev/null @@ -1,36 +0,0 @@ -#' Transform a \code{antaresDataList} object to be writable by \link{writeAntaresH5} -#' -#' @param data \code{antaresDataList} -#' @param areasKey \code{character} organization key for areas, define h5 group and subgroup -#' @param linksKey \code{character} organization key for links, define h5 group and subgroup -#' @param districtKey \code{character} organization key for districts, define h5 group and subgroup -#' @param clustersKey \code{character} organization key for clusters, define h5 group and subgroup -#' -#' @import data.table -#' -#' @noRd -#' -transformH5 <- function(data, - areasKey = c("area"), - linksKey = c("link"), - districtKey = c("district"), - clustersKey = c("area", "cluster")){ - - if("areas"%in%names(data)) - { - data$areas <- data$areas[, list(list(.SD)), by = areasKey] - } - if("links"%in%names(data)) - { - data$links <- data$links[, list(list(.SD)), by = linksKey] - } - if("districts"%in%names(data)) - { - data$districts <- data$districts[, list(list(.SD)), by = districtKey] - } - if("clusters"%in%names(data)) - { - data$clusters <- data$clusters[, list(list(.SD)), by = clustersKey] - } - data -} diff --git a/R/h5_writeData.R b/R/h5_writeData.R deleted file mode 100644 index d48137e4..00000000 --- a/R/h5_writeData.R +++ /dev/null @@ -1,157 +0,0 @@ -#' Write antares Study to a .h5 file -#' -#' @param data \code{antaresDataList} -#' @param path \code{character} path of h5 file -#' @param rootGroup \code{character} group will contain all h5 organization -#' @param writeStructure \code{boolean}, write group and subgroup (only for 
first MCyear) -#' @param mcAll \code{boolean}, write mcAll -#' @param compress \code{numeric}, compression level -#' -#' -#' @noRd -#' -writeAntaresData <- function(data, - path, - rootGroup = NULL, - writeStructure = TRUE, - mcAll = FALSE, - compress = 0){ - - if(!is.null(data$areas)){ - #Write areas - writeDataType(data = data, path = path, type = "areas", rootGroup = rootGroup, writeStructure = writeStructure, - mcAll = mcAll, compress = compress) - } - - if(!is.null(data$links)){ - #Write links - writeDataType(data = data, path = path, type = "links", rootGroup = rootGroup, writeStructure = writeStructure, - mcAll = mcAll, compress = compress) - } - if(!is.null(data$districts)){ - #Write districts - writeDataType(data = data, path = path, type = "districts", rootGroup = rootGroup, writeStructure = writeStructure, - mcAll = mcAll, compress = compress) - } - - if(!is.null(data$clusters)){ - #Write clusters - writeDataType(data = data, path = path, type = "clusters", rootGroup = rootGroup, writeStructure = writeStructure, - mcAll = mcAll, compress = compress) - } - -} - -#' Write data by type -#' -#' @param data \code{antaresDataList} -#' @param path \code{character} path of h5 file -#' @param type \code{character} type of data to write, must be areas, links, districts or clusters -#' @param rootGroup \code{character} group that will contain the whole h5 organization -#' @param writeStructure \code{boolean}, write group and subgroup (only for first MCyear) -#' @param mcAll \code{boolean}, write mcAll -#' @param compress \code{numeric}, compression level -#' -#' @noRd -#' -writeDataType <- function(data, - path, - type, - rootGroup = NULL, - writeStructure = TRUE, - mcAll = FALSE, - compress = 0){ - - - .requireRhdf5_Antares() - - if(!mcAll) - { - mcYears <- attr(data, "opts")$mcYears - }else{ - mcYears <- "all" - } - - data <- data[[type]] - if(type == "clusters"){ - data$cluster <- paste0(data$area, "/", data$cluster) - data[,"area" := NULL] - } - - - Group <- 
paste0(rootGroup, "/", type) - #Create group by type of data - if(writeStructure & !mcAll){ - rhdf5::h5createGroup(path, Group) - } - - #Control if we write mcAll or mcInd - if(mcAll){ - Group <- paste0(Group, "/", "mcAll") - }else{ - Group <- paste0(Group, "/", "mcInd") - - } - - #Create group for mc-ind or mc-all - if(writeStructure){ - rhdf5::h5createGroup(path, Group) - } - # print(data) - #Give structure for data - dimPreBuild <- names(data)[!names(data)%in%c("mcYear", "V1")] - dimStructure <- list() - dimStructure$variable <- names(data$V1[[1]]) - dimStructure <- c(dimStructure, sapply(dimPreBuild, function(X){ - as.character(unique(unlist(data[, .SD, .SDcols = X]))) - }, simplify = FALSE)) - dimStructure$mcYear <- mcYears - - #Give dim length - nbDim <- length(dimStructure) + 1 - nbTimeId <- nrow(data$V1[[1]]) - dimData <- unlist(c(nbTimeId, lapply(dimStructure, length))) - - - #Create array - groupData <- paste0(Group, "/data") - structData <- paste0(Group, "/structure") - if(writeStructure){ - rhdf5::h5createDataset(path, groupData, dims = dimData, chunk = c(dimData[1], 1, 1, 1), - level = compress, maxdims = c(dimData[1], - dimData[2] + 300, - dimData[3], - dimData[4])) - fid <- rhdf5::H5Fopen(path) - # dimStructure$reCalcVar <- rep("NoVariable", 100) - rhdf5::h5writeDataset(dimStructure, fid, structData, level = compress) - rhdf5::H5Fclose(fid) - structNew <- paste0(structData, "/reCalcVar") - rhdf5::h5createDataset(path, structNew, storage.mode = "character", level = compress , dims = 300, - size = 200) - - - } - - #Convert data to an array - arrayDatatowrite <- array(dim = dimData[1:(length(dimData)-1)]) - for(i in 1:nrow(data)){ - arrayDatatowrite[, , i] <- as.matrix(data$V1[[i]]) - } - - #Control index for write - index <- lapply(1:length(dim(arrayDatatowrite)), function(X)NULL) - - if(!mcAll) - { - index$LastDim <- which(data$mcYear[1] == mcYears) - }else{ - index$LastDim <- 1 - } - - #Write data - fid <- rhdf5::H5Fopen(path) - 
rhdf5::h5writeDataset(obj = arrayDatatowrite, fid, groupData, index = index) - rhdf5::H5Fclose(fid) - NULL -} diff --git a/R/h5_writeH5ByYear.R b/R/h5_writeH5ByYear.R deleted file mode 100644 index 4998caa7..00000000 --- a/R/h5_writeH5ByYear.R +++ /dev/null @@ -1,492 +0,0 @@ -#' Convert antares output to h5 file -#' -#' @param path \code{character} folder where h5 file will be write (default NULL) -#' @param timeSteps \code{character} timeSteps -#' @param opts \code{list} of simulation parameters returned by the function \link{setSimulationPath}. Default to \code{antaresRead::simOptions()} -#' @param writeMcAll \code{boolean} write mc-all -#' @param compress \code{numeric} compress level -#' @param misc \code{boolean} see \link[antaresRead]{readAntares} -#' @param thermalAvailabilities \code{boolean} see \link[antaresRead]{readAntares} -#' @param hydroStorage \code{boolean} see \link[antaresRead]{readAntares} -#' @param hydroStorageMaxPower \code{boolean} see \link[antaresRead]{readAntares} -#' @param reserve \code{boolean} see \link[antaresRead]{readAntares} -#' @param linkCapacity \code{boolean} see \link[antaresRead]{readAntares} -#' @param mustRun \code{boolean} see \link[antaresRead]{readAntares} -#' @param thermalModulation \code{boolean} see \link[antaresRead]{readAntares} -#' @param allData \code{boolean} add all data with a single call (writeMcAll, misc, thermalAvailabilities, hydroStorage, hydroStorageMaxPower -#' reserve, linkCapacity, mustRun, thermalModulation). -#' @param writeAllSimulations \code{boolean}, write all simulations of your antares study. 
-#' @param nbCores \code{numeric}, number of cores to use; only used if writeAllSimulations is TRUE -#' @param removeVirtualAreas \code{boolean}, remove virtual areas, see \link[antaresRead]{removeVirtualAreas} -#' @param storageFlexibility \code{character or list}, see \link[antaresRead]{removeVirtualAreas} -#' @param production \code{character or list}, see \link[antaresRead]{removeVirtualAreas} -#' @param reassignCosts \code{boolean or list}, see \link[antaresRead]{removeVirtualAreas} -#' @param newCols \code{boolean or list}, see \link[antaresRead]{removeVirtualAreas} -#' @param overwrite \code{boolean}, overwrite old file -#' @param supressMessages \code{boolean}, suppress messages from \link[antaresRead]{readAntares} and \link[antaresRead]{removeVirtualAreas} -#' -#' @examples -#' -#' \dontrun{ -#' # Write simulations one by one -#' setSimulationPath("C:/Users/MyUser/Mystudy", 1) -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY") -#' -#' # Write all simulations -#' setSimulationPath("C:/Users/MyUser/Mystudy") -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY", writeAllSimulations = TRUE) -#' -#' # Choose timestep to write -#' setSimulationPath("C:/Users/MyUser/Mystudy", 1) -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly") -#' -#' # Write with additional information -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly", -#' misc = TRUE, thermalAvailabilities = TRUE, -#' hydroStorage = TRUE, hydroStorageMaxPower = TRUE, reserve = TRUE, -#' linkCapacity = TRUE, mustRun = TRUE, thermalModulation = TRUE) -#' -#' # Write all data with a shortcut -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY", allData = TRUE) -#' -#' # Remove virtual areas -#' -#' writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly", overwrite = TRUE, -#' writeMcAll = FALSE, removeVirtualAreas = TRUE, -#' storageFlexibility = "psp in-2", -#' production = NULL, reassignCosts = FALSE, newCols = TRUE) -#' -#' # Remove virtual areas with more than one call -#' writeAntaresH5( 
-#' path="PATH_TO_YOUR_STUDY", -#' timeSteps = "hourly", -#' overwrite = TRUE, -#' writeMcAll = FALSE, -#' removeVirtualAreas = TRUE, -#' storageFlexibility = list("psp out", "psp in-2"), -#' production = list(NULL, NULL), -#' reassignCosts = list(TRUE, FALSE), -#' newCols = list(FALSE, TRUE) -#' ) -#' -#' -#' } -#' @export -writeAntaresH5 <- function(path = NULL, timeSteps = c("hourly", "daily", "weekly", "monthly", "annual"), - opts = simOptions(), - writeMcAll = TRUE, - compress = 1, - misc = FALSE, - thermalAvailabilities = FALSE, - hydroStorage = FALSE, - hydroStorageMaxPower = FALSE, - reserve = FALSE, - linkCapacity = FALSE, - mustRun = FALSE, - thermalModulation = FALSE, - allData = FALSE, - writeAllSimulations = FALSE, - nbCores = 4, - removeVirtualAreas = FALSE, - storageFlexibility = NULL, - production = NULL, - reassignCosts = FALSE, - newCols = TRUE, - overwrite = FALSE, supressMessages = FALSE){ - - if(!dir.exists(path)){ - stop(paste0("Folder ", path, " not found.")) - } - - if(allData){ - writeMcAll <- TRUE - misc <- TRUE - thermalAvailabilities <- TRUE - hydroStorage <- TRUE - hydroStorageMaxPower <- TRUE - reserve <- TRUE - linkCapacity <- TRUE - mustRun <- TRUE - thermalModulation <- TRUE - } - - .requireRhdf5_Antares() - - rhdf5::h5closeAll() - - if(!writeAllSimulations){ - simName <- unlist(strsplit(opts$simPath, "/")) - simName <- simName[length(simName)] - path <- paste0(path, "/", simName, ".h5") - - if(overwrite & file.exists(path)){ - file.remove(path) - } - - if(file.exists(path)){ - stop(paste0("File ", path, " already exist you must use overwrite argument if you want to overwrite")) - } - - - .writeAntaresH5Fun(path = path, - timeSteps = timeSteps, - opts = opts, - writeMcAll = writeMcAll, - compress = compress, - misc = misc, - thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, - hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, - linkCapacity = linkCapacity, - mustRun = mustRun, - 
thermalModulation = thermalModulation, - removeVirtualAreas = removeVirtualAreas, - storageFlexibility = storageFlexibility, - production = production, - reassignCosts = reassignCosts, - newCols = newCols, - supressMessages = supressMessages) - }else{ - studieSToWrite <- list.dirs(paste0(opts$studyPath, "/output"), recursive = FALSE, full.names = FALSE) - studieSToWrite <- setdiff(studieSToWrite, "maps") - if(length(studieSToWrite) > 0){ - studyPath <- opts$studyPath - if(nbCores>1) - { - if(!requireNamespace("parallel")) stop("Error loading 'parallel' package.") - - cl <- parallel::makeCluster(nbCores) - parallel::clusterEvalQ(cl, { - library(antaresRead) - }) - parallel::clusterExport(cl, c("path","opts","studyPath", - "timeSteps", - "writeMcAll", - "compress", - "misc", - "thermalAvailabilities", - "hydroStorage", - "hydroStorageMaxPower", - "reserve", - "linkCapacity", - "mustRun", - "thermalModulation", - "removeVirtualAreas", - "storageFlexibility", - "production", - "reassignCosts", - "newCols", - "overwrite", - ".writeAntaresH5Fun", - "supressMessages" - ), envir = environment()) - - parallel::parSapplyLB(cl, studieSToWrite, function(X){ - if(supressMessages) - { - opts <- suppressWarnings(suppressMessages(setSimulationPath(studyPath, X))) - }else{ - opts <- setSimulationPath(studyPath, X) - } - - if(!is.null(path)){ - pathStud <- paste0(path, "/", X, ".h5") - } - - if(overwrite & file.exists(pathStud)){ - file.remove(pathStud) - } - - if(file.exists(pathStud)){ - stop(paste0("File ", pathStud, " already exist you must use overwrite argument if you want to overwrite")) - } - - - .writeAntaresH5Fun(path = pathStud, - timeSteps = timeSteps, - opts = opts, - writeMcAll = writeMcAll, - compress = compress, - misc = misc, - thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, - hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, - linkCapacity = linkCapacity, - mustRun = mustRun, - thermalModulation = thermalModulation, - 
removeVirtualAreas = removeVirtualAreas, - storageFlexibility = storageFlexibility, - production = production, - reassignCosts = reassignCosts, - newCols = newCols, - supressMessages = supressMessages) - - - }) - parallel::stopCluster(cl) - - }else{ - sapply(studieSToWrite, function(X){ - if(supressMessages) - { - opts <- suppressWarnings(suppressMessages(setSimulationPath(studyPath, X))) - }else{ - opts <- setSimulationPath(studyPath, X) - } - - if(!is.null(path)){ - pathStud <- paste0(path, "/", X, ".h5") - } - - if(overwrite & file.exists(pathStud)){ - file.remove(pathStud) - } - if(file.exists(pathStud)){ - stop(paste0("File ", pathStud, " already exist you must use overwrite argument if you want to overwrite")) - } - - .writeAntaresH5Fun(path = pathStud, - timeSteps = timeSteps, - opts = opts, - writeMcAll = writeMcAll, - compress = compress, - misc = misc, - thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, - hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, - linkCapacity = linkCapacity, - mustRun = mustRun, - thermalModulation = thermalModulation, - removeVirtualAreas = removeVirtualAreas, - storageFlexibility = storageFlexibility, - production = production, - reassignCosts = reassignCosts, - newCols = newCols, - supressMessages = supressMessages) - - - }) - - } - } else { - message("No study.") - } - } - - - -} - -#' Convert antares output to h5 file -#' -#' @keywords internal -#' @export -.writeAntaresH5Fun <- function(path, - timeSteps, - opts, - writeMcAll, - compress, - misc, - thermalAvailabilities, - hydroStorage, - hydroStorageMaxPower, - reserve, - linkCapacity, - mustRun, - thermalModulation, - removeVirtualAreas, - storageFlexibility, - production, - reassignCosts, - newCols, - supressMessages){ - - .requireRhdf5_Antares() - - - if(is.null(path)){ - studPath <- unlist(strsplit(opts$simPath, "/")) - studName <- studPath[length(studPath)] - path <- paste0(studName, ".h5") - } - - #Close connection if 
exist - rhdf5::h5closeAll() - - #Create h5 file - rhdf5::h5createFile(path) - - #loop on timeStep - sapply(timeSteps, function(timeStep){ - - #Add mcAll - allMcYears <- opts$mcYears - if(writeMcAll){ - allMcYears <- c(allMcYears, -1) - } - - #Loop on MCyear - sapply(allMcYears, function(mcY) - { - - messageS <- ifelse(allMcYears[1] == mcY & timeSteps[1] == timeStep && !supressMessages, TRUE, FALSE) - if(allMcYears[1] == mcY){ - writeStructure = TRUE - }else{ - writeStructure = FALSE - } - mcAll <- FALSE - if(mcY == -1){ - mcY <- NULL - writeStructure <- TRUE - mcAll <- TRUE - } - - #Read data - if(messageS){ - res <- readAntares(areas = "all" , - links = "all", - clusters = "all", - districts = "all", - mcYears = mcY, - timeStep = timeStep, opts = opts, showProgress = FALSE, - misc = misc, thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, linkCapacity = linkCapacity, mustRun = mustRun, - thermalModulation = thermalModulation) - - - }else{ - res <- suppressWarnings(suppressMessages(readAntares(areas = "all" , - links = "all", - clusters = "all", - districts = "all", - mcYears = mcY, - timeStep = timeStep, opts = opts, showProgress = FALSE, - misc = misc, thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, linkCapacity = linkCapacity, mustRun = mustRun, - thermalModulation = thermalModulation))) - - } - - if(removeVirtualAreas){ - if(!(is.list(storageFlexibility))) - { - if(messageS){ - res <- removeVirtualAreas(res, - storageFlexibility = storageFlexibility, - production = production, - reassignCosts = reassignCosts, - newCols = newCols) - }else{ - res <- suppressWarnings(suppressMessages(removeVirtualAreas(res, - storageFlexibility = storageFlexibility, - production = production, - reassignCosts = reassignCosts, - newCols = newCols))) - } - }else{ - - for(i in 
1:length(storageFlexibility)){ - res <- suppressWarnings( - suppressMessages( - removeVirtualAreas(res, - storageFlexibility = storageFlexibility[[i]], - production = production[[i]], - reassignCosts = reassignCosts[[i]], - newCols = newCols[[i]]))) - } - - } - } - - if(writeStructure & !mcAll){ - - - # Create group - rhdf5::h5closeAll() - rhdf5::h5createGroup(path, timeStep) - rhdf5::h5closeAll() - #Write time - writeTime(res, path, timeStep) - rhdf5::h5closeAll() - #Write attributes - - attrib <- attributes(res) - s <- serialize(attrib, NULL, ascii = TRUE) - rhdf5::h5write(rawToChar(s), path, paste0(timeStep, "/attrib")) - - # .writeAttributes(res = res, path = path, timeStep = timeStep) - - - - ###Write inputs - rhdf5::h5createGroup(path, paste0(timeStep, "/inputs")) - if(messageS){ - layout <- readLayout() - }else{ - layout <- suppressWarnings(suppressMessages(readLayout())) - } - s <- serialize(layout, NULL, ascii = TRUE) - rhdf5::h5write(rawToChar(s), path, paste0(timeStep, "/inputs/layout")) - - if(messageS){ - cldesc <- readClusterDesc() - }else{ - cldesc <- suppressWarnings(suppressMessages(readClusterDesc())) - } - s <- serialize(cldesc, NULL, ascii = TRUE) - rhdf5::h5write(rawToChar(s), path, paste0(timeStep, "/inputs/cldesc")) - if(messageS){ - bc <- readBindingConstraints() - }else{ - bc <- suppressWarnings(suppressMessages(readBindingConstraints())) - } - s <- serialize(bc, NULL, ascii = TRUE) - rhdf5::h5write(rawToChar(s), path, paste0(timeStep, "/inputs/buildingcte")) - - } - - #Remove useless data - ctrl <- sapply(1:length(res), function(i){ - if("day" %in% names(res[[i]])){ - res[[i]][, "day" := NULL] - } - if("month" %in% names(res[[i]])){ - res[[i]][, "month" := NULL] - } - if("hour" %in% names(res[[i]])){ - res[[i]][, "hour" := NULL] - } - if("time" %in% names(res[[i]])){ - res[[i]][, "time" := NULL] - } - invisible() - }) - gc() - - - if(is.null(mcY)){ - - lapply(res, function(X){ - X[, "mcYear" := "mcAll"] - - }) - } - #Transform for write - 
res <- transformH5(res,areasKey = c("area", "mcYear"), - linksKey = c("link", "mcYear"), - districtKey = c("district", "mcYear"), - clustersKey = c("area", "cluster", "mcYear")) - #Write data - writeAntaresData(res, path, timeStep, writeStructure, mcAll, compress) - }) - }) - rhdf5::h5closeAll() - message(paste0("File .h5 writed : ", path, "\n")) - invisible() -} diff --git a/R/h5utils.R b/R/h5utils.R deleted file mode 100644 index e7a71a70..00000000 --- a/R/h5utils.R +++ /dev/null @@ -1,26 +0,0 @@ -#Copyright © 2016 RTE Réseau de transport d’électricité - -#' Test if opts is h5 -#' -#' @description Test if the value returned by setSimulationPath() is referring to an h5 file -#' -#' @param opts , opts -#' @export -isH5Opts <- function(opts){ - v <- FALSE - if(!is.null(opts$h5)){ - if(opts$h5){ - v <- TRUE - } - } - v -} - - -.getTimeStep <- function(fid){ - .requireRhdf5_Antares() - timeSteps <- sapply(c("hourly", "daily", "weekly", "monthly", "annual"), function(X){ - rhdf5::H5Lexists(fid, X) - }) - names(timeSteps[which(timeSteps == TRUE)]) -} diff --git a/R/hvdcModification.R b/R/hvdcModification.R index 4fbe0a47..4bc82d98 100644 --- a/R/hvdcModification.R +++ b/R/hvdcModification.R @@ -8,6 +8,9 @@ #' @param removeHvdcAreas \code{boolean} remove HVDC areas. #' @param reafectLinks \code{boolean} . #' +#' @return Object of class "antaresDataList" is returned. 
+#' It is a list of data.tables, each element representing one type of element (areas, links, clusters) +#' #' @examples #' \dontrun{ #' diff --git a/R/importInput.R b/R/importInput.R index 9dea2ad0..f0360646 100644 --- a/R/importInput.R +++ b/R/importInput.R @@ -369,8 +369,13 @@ data$area <- area data$cluster <- cl - data <- data[opts$timeIdMin:opts$timeIdMax] - data$timeId <- opts$timeIdMin:opts$timeIdMax + + # index blocks + a <- opts$parameters$general$simulation.start + b <- opts$parameters$general$simulation.end + + data <- data[a:b] + data$timeId <- a:b changeTimeStep(data, timeStep, "daily", fun = "mean") }) diff --git a/R/importOutput.R b/R/importOutput.R index 2cb25a80..95036833 100644 --- a/R/importOutput.R +++ b/R/importOutput.R @@ -47,6 +47,7 @@ #' - "areas", "values" => areas #' - "areas", "details" => clusters #' - "areas", "details-res" => renewables clusters +#' - "areas", "details-STstorage" => short-term clusters #' - "links", "values" => links #' #' @return @@ -281,30 +282,37 @@ #' .get_value_columns_details_file #' -#' Private function used to get the column names for the details-timeStep.txt or details-res-timeStep.txt. -#' Used in .importOutputForClusters() and .importOutputForResClusters() +#' Private function used to get the column names for the details-timeStep.txt, details-res-timeStep.txt, or details-STstorage-timeStep.txt. +#' Used in .importOutputForClusters(), .importOutputForResClusters(), and .importOutputForSTClusters() #' From the opts, we detect which outputs the user decides to take #' #' @return #' a vector +#' +#' @importFrom assertthat assert_that #' #' @noRd #' .get_value_columns_details_file <- function(opts, type) { - if(type == "details") { - # Order is important. There is a correspondance between elements. 
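The importInput.R hunk above replaces the `timeIdMin:timeIdMax` window with the `simulation.start`/`simulation.end` bounds read from the study's general parameters. A runnable sketch of that daily-window subsetting, with a stubbed `opts` list standing in for the one `setSimulationPath()` would return (values are invented):

```r
library(data.table)

# Stubbed study options; in the package these come from setSimulationPath()
opts <- list(parameters = list(general = list(simulation.start = 8,
                                              simulation.end   = 14)))

# A year of daily cluster data (dummy values)
data <- data.table(value = rnorm(365))

# Keep only the simulated sub-period and re-index timeId accordingly,
# mirroring the "index blocks" change in the diff
a <- opts$parameters$general$simulation.start
b <- opts$parameters$general$simulation.end
data <- data[a:b]
data$timeId <- a:b

nrow(data)          # 7 rows: days 8 through 14
range(data$timeId)  # 8 14
```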
- all_thematic_variables <- c("DTG by plant", "NP Cost by plant", "NODU by plant") - colNames <- c("production", "NP Cost", "NODU") - if (opts$antaresVersion >= 830){ - all_thematic_variables <- c(all_thematic_variables, "Profit by plant") - colNames <- c(colNames, "profit") - } - } else if(type == "details-res") { - # Order is important. There is a correspondance between elements. - all_thematic_variables <- c("RES generation by plant") - colNames <- c("production") - } + assert_that(type %in% c("details","details-res","details-STstorage")) + + simulation_variables_names_by_support <- read.table(system.file( + "format_output","simulation_variables_names_by_support.csv",package="antaresRead" + ),sep=";",fileEncoding="UTF-8",header = TRUE) + + filtered_variables_names <- subset(simulation_variables_names_by_support,DETAILS_FILES_TYPE==type) + if (type=="details" && opts$antaresVersion < 830) + filtered_variables_names <- subset(filtered_variables_names,ANTARES_DISPLAYED_NAME!="Profit by plant") + + # Order is important. There is a correspondance between elements + ordered_filtered_variables_names <- filtered_variables_names[ + order(filtered_variables_names$ORDINAL_POSITION_BY_TOPIC), + ] + + all_thematic_variables <- ordered_filtered_variables_names$ANTARES_DISPLAYED_NAME + colNames <- ordered_filtered_variables_names$RPACKAGE_DISPLAYED_NAME + # With thematic-trimming enabled if (opts$parameters$general$`thematic-trimming`) { if ("variables selection" %in% names(opts$parameters)) { @@ -346,36 +354,8 @@ .importOutputForClusters <- function(areas, timeStep, select = NULL, mcYears = NULL, showProgress, opts, mustRun = FALSE, parallel) { - # In output files, there is one file per area with the follwing form: - # cluster1-var1 | cluster2-var1 | cluster1-var2 | cluster2-var2 - # the following function reshapes the result to have variable cluster in column. 
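The refactor above trades the hard-coded column vectors for a lookup in `inst/format_output/simulation_variables_names_by_support.csv`. A runnable sketch of that lookup logic, using a small in-memory stand-in for the CSV (the rows are taken from the old hard-coded vectors visible in the diff):

```r
# Toy stand-in for simulation_variables_names_by_support.csv
ref <- data.frame(
  DETAILS_FILES_TYPE        = c("details", "details", "details",
                                "details", "details-res"),
  ANTARES_DISPLAYED_NAME    = c("DTG by plant", "NP Cost by plant",
                                "NODU by plant", "Profit by plant",
                                "RES generation by plant"),
  RPACKAGE_DISPLAYED_NAME   = c("production", "NP Cost", "NODU",
                                "profit", "production"),
  ORDINAL_POSITION_BY_TOPIC = c(1, 2, 3, 4, 1)
)

get_cols <- function(type, antaresVersion) {
  stopifnot(type %in% c("details", "details-res", "details-STstorage"))
  out <- subset(ref, DETAILS_FILES_TYPE == type)
  # "Profit by plant" only exists from Antares v8.3 onwards
  if (type == "details" && antaresVersion < 830)
    out <- subset(out, ANTARES_DISPLAYED_NAME != "Profit by plant")
  # Order matters: positions must match the column blocks in the output file
  out <- out[order(out$ORDINAL_POSITION_BY_TOPIC), ]
  out$RPACKAGE_DISPLAYED_NAME
}

get_cols("details", 820)  # "production" "NP Cost" "NODU"
get_cols("details", 870)  # "production" "NP Cost" "NODU" "profit"
```

In the package the table is loaded with `read.table(..., sep = ";", fileEncoding = "UTF-8", header = TRUE)`, as the diff shows; the in-memory data.frame here just keeps the sketch self-contained.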
- # To improve greatly the performance we use our knowledge of the position of - # the columns instead of using more general functions like dcast. - reshapeFun <- function(x) { - - # Get cluster names - n <- names(x) - idx <- ! n %in% pkgEnv$idVars - clusterNames <- tolower(unique(n[idx])) - - # Id vars names - idVarsId <- which(!idx) - idVarsNames <- n[idVarsId] - - # Column names of the output table - colNames <- .get_value_columns_details_file(opts, "details") - - # Loop over clusters - nclusters <- length(clusterNames) - - res <- llply(1:nclusters, function(i) { - dt <- x[, c(nclusters * 0:(length(colNames) - 1) + i, idVarsId), with = FALSE] - setnames(dt, c(colNames, idVarsNames)) - dt[, cluster := as.factor(clusterNames[i])] - dt - }) - - rbindlist(res) + reshapeFun <- function(x){ + .reshape_details_file(x,file_type="details",opts=opts) } if (!mustRun) { @@ -488,6 +468,47 @@ } +#' .reshape_details_file +#' +#' In output files, there is one file per area with the follwing form: +#' cluster1-var1 | cluster2-var1 | cluster1-var2 | cluster2-var2 +#' the following function reshapes the result to have variable cluster in column. +#' To improve greatly the performance we use our knowledge of the position of +#' the columns instead of using more general functions like dcast. +#' +#' @return +#' a data.table +#' +#' @noRd +#' +.reshape_details_file <- function(x,file_type,opts) { + + # Get cluster names + n <- names(x) + idx <- ! 
n %in% pkgEnv$idVars + clusterNames <- tolower(unique(n[idx])) + + # Id vars names + idVarsId <- which(!idx) + idVarsNames <- n[idVarsId] + + # Column names of the output table + colNames <- .get_value_columns_details_file(opts=opts,type=file_type) + + # Loop over clusters + nclusters <- length(clusterNames) + + res <- llply(1:nclusters, function(i) { + dt <- x[, c(nclusters * 0:(length(colNames) - 1) + i, idVarsId), with = FALSE] + setnames(dt, c(colNames, idVarsNames)) + dt[, cluster := as.factor(clusterNames[i])] + dt + }) + + rbindlist(res) +} + + #' .importOutputForResClusters #' #' Private function used to import the output for the renewable clusters of one area @@ -500,38 +521,11 @@ .importOutputForResClusters <- function(areas, timeStep, select = NULL, mcYears = NULL, showProgress, opts, parallel) { - # In output files, there is one file per area with the follwing form: - # cluster1-var1 | cluster2-var1 | cluster1-var2 | cluster2-var2 - # the following function reshapes the result to have variable cluster in column. - # To improve greatly the performance we use our knowledge of the position of - # the columns instead of using more general functions like dcast. + reshapeFun <- function(x) { - - # Get cluster names - n <- names(x) - idx <- ! 
n %in% pkgEnv$idVars - clusterNames <- tolower(unique(n[idx])) - - # Id vars names - idVarsId <- which(!idx) - idVarsNames <- n[idVarsId] - - # Column names of the output table - colNames <- .get_value_columns_details_file(opts, "details-res") - - # Loop over clusters - nclusters <- length(clusterNames) - - res <- llply(1:nclusters, function(i) { - dt <- x[, c(nclusters * 0:(length(colNames) - 1) + i, idVarsId), with = FALSE] - setnames(dt, c(colNames, idVarsNames)) - dt[, cluster := as.factor(clusterNames[i])] - dt - }) - - rbindlist(res) + .reshape_details_file(x,file_type="details-res",opts=opts) } - + suppressWarnings( .importOutput("areas", "details-res", "area", areas, timeStep, NULL, mcYears, showProgress, opts, reshapeFun, sameNames = FALSE, @@ -540,6 +534,29 @@ } +#' .importOutputForSTClusters +#' +#' Private function used to import the output for the short-term clusters of one area +#' +#' @return +#' a data.table +#' +#' @noRd +#' +.importOutputForSTClusters <- function(areas, timeStep, select = NULL, mcYears = NULL, + showProgress, opts, parallel) { + + reshapeFun <- function(x) { + .reshape_details_file(x,file_type="details-STstorage",opts=opts) + } + + suppressWarnings( + .importOutput("areas", "details-STstorage", "area", areas, timeStep, NULL, + mcYears, showProgress, opts, reshapeFun, sameNames = FALSE, + objectDisplayName = "clustersST", parallel = parallel) + ) +} + #' .importOutputForBindingConstraints #' #' Private function used to import the output for binding constraints. diff --git a/R/ponderateMcAggregation.R b/R/ponderateMcAggregation.R index 43657eba..61dc64c2 100644 --- a/R/ponderateMcAggregation.R +++ b/R/ponderateMcAggregation.R @@ -4,6 +4,9 @@ #' @param fun \code{function} function to use #' @param ... \code{args} others args pass to fun #' +#' @return +#' Object of class "antaresDataTable". 
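The shared `.reshape_details_file()` helper now backs all three detail types ("details", "details-res", "details-STstorage"). It works by column position rather than a general `dcast()`: for `nclusters` clusters, cluster *i*'s columns sit at offsets `i, i + nclusters, i + 2*nclusters, …`. A self-contained sketch of the same mechanism (toy "base"/"peak" cluster names are invented, and base `lapply()` is used in place of `plyr::llply()`):

```r
library(data.table)

# Toy details file: one column block per variable, one column per cluster
# inside each block, i.e. base-var1 | peak-var1 | base-var2 | peak-var2
x <- data.table(V1 = c(10, 20), V2 = c(1, 2),
                V3 = c(0.5, 0.6), V4 = c(0.7, 0.8), timeId = 1:2)
setnames(x, c("base", "peak", "base", "peak", "timeId"))

reshape_details <- function(x, colNames, idVars = "timeId") {
  n <- names(x)
  idx <- !n %in% idVars
  clusterNames <- tolower(unique(n[idx]))   # header repeats cluster names
  idVarsId <- which(!idx)
  nclusters <- length(clusterNames)
  res <- lapply(seq_len(nclusters), function(i) {
    # Pick cluster i's column in every variable block, plus the id columns
    dt <- x[, c(nclusters * 0:(length(colNames) - 1) + i, idVarsId),
            with = FALSE]
    setnames(dt, c(colNames, n[idVarsId]))
    dt[, cluster := factor(clusterNames[i])]
    dt
  })
  rbindlist(res)   # long format: one row per (cluster, timeId)
}

reshape_details(x, colNames = c("production", "NP Cost"))
```

The positional selection avoids name matching entirely, which is why the diff keeps this path instead of `dcast` for performance.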
+#' #' @examples #' \dontrun{ #' data <- readAntares(areas = 'all', mcYears = 'all') diff --git a/R/readAntares.R b/R/readAntares.R index fdeb20e2..dfc1ac62 100644 --- a/R/readAntares.R +++ b/R/readAntares.R @@ -14,6 +14,8 @@ #' Read the data of an Antares simulation #' #' @description +#' `r antaresRead:::badge_api_ok()` +#' #' \code{readAntares} is a swiss-army-knife function used to read almost every #' possible time series of an antares Project at any desired time resolution #' (hourly, daily, weekly, monthly or annual). @@ -85,6 +87,11 @@ #' import results at renewable cluster level. If \code{NULL} no cluster is imported. The #' special value \code{"all"} tells the function to import renewable clusters from all #' areas. +#' @param clustersST +#' Vector containing the name of the areas for which you want to +#' import results at short-term cluster level. If \code{NULL} no cluster is imported. The +#' special value \code{"all"} tells the function to import short-term clusters from all +#' areas. #' @param bindingConstraints #' Should binding constraints be imported (v8.4+)? 
#' @param districts @@ -210,8 +217,8 @@ #' @export #' readAntares <- function(areas = NULL, links = NULL, clusters = NULL, - districts = NULL, clustersRes = NULL, bindingConstraints = FALSE, - misc = FALSE, thermalAvailabilities = FALSE, + districts = NULL, clustersRes = NULL, clustersST = NULL, + bindingConstraints = FALSE, misc = FALSE, thermalAvailabilities = FALSE, hydroStorage = FALSE, hydroStorageMaxPower = FALSE, reserve = FALSE, linkCapacity = FALSE, mustRun = FALSE, thermalModulation = FALSE, @@ -221,7 +228,7 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, mcWeights = NULL, opts = simOptions(), parallel = FALSE, simplify = TRUE, showProgress = TRUE) { - + if((!is.null(opts$parameters$`other preferences`$`renewable-generation-modelling`) && !opts$parameters$`other preferences`$`renewable-generation-modelling` %in% "clusters") || is.null(opts$parameters$`other preferences`$`renewable-generation-modelling`)){ @@ -276,31 +283,31 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, } } - if(isH5Opts(opts)){ - - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(.h5ReadAntares(path = opts$h5path, - areas = areas, - links = links, - clusters = clusters, - districts = districts, - misc = misc, - thermalAvailabilities = thermalAvailabilities, - hydroStorage = hydroStorage, - hydroStorageMaxPower = hydroStorageMaxPower, - reserve = reserve, - linkCapacity = linkCapacity, - mustRun = mustRun, - thermalModulation = thermalModulation, - select = select, - mcYears = mcYears, - timeStep = timeStep[1], - showProgress = showProgress, - simplify = simplify)) - } else { - stop(rhdf5_message) - } - } + # if(isH5Opts(opts)){ + # + # if(.requireRhdf5_Antares(stopP = FALSE)){ + # return(.h5ReadAntares(path = opts$h5path, + # areas = areas, + # links = links, + # clusters = clusters, + # districts = districts, + # misc = misc, + # thermalAvailabilities = thermalAvailabilities, + # hydroStorage = hydroStorage, + # hydroStorageMaxPower = 
hydroStorageMaxPower, + # reserve = reserve, + # linkCapacity = linkCapacity, + # mustRun = mustRun, + # thermalModulation = thermalModulation, + # select = select, + # mcYears = mcYears, + # timeStep = timeStep[1], + # showProgress = showProgress, + # simplify = simplify)) + # } else { + # stop(rhdf5_message) + # } + # } if (opts$mode == "Input") stop("Cannot use 'readAntares' in 'Input' mode.") @@ -309,6 +316,7 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, links = links, clusters = clusters, clustersRes = clustersRes, + clustersST = clustersST, districts = districts, mcYears = mcYears) @@ -317,6 +325,7 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, links <- reqInfos$links clusters <- reqInfos$clusters clustersRes <- reqInfos$clustersRes + clustersST <- reqInfos$clustersST districts <- reqInfos$districts mcYears <- reqInfos$mcYears synthesis <- reqInfos$synthesis @@ -328,7 +337,7 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, return(aggregateResult(opts = opts, verbose = showProgress, filtering = TRUE, - selected = list(areas = areas, links = links, clusters = clusters, clustersRes = clustersRes), + selected = list(areas = areas, links = links, clusters = clusters, clustersRes = clustersRes, clustersST = clustersST), timestep = timeStep, writeOutput = FALSE, mcWeights = mcWeights, mcYears = mcYears)) @@ -337,12 +346,27 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, if(length(reqInfos$computeAdd)>0) { for (v in reqInfos$computeAdd) { - assign(v, TRUE) + if(v%in%"misc") + misc <- TRUE + if(v%in%"thermalAvailabilities") + thermalAvailabilities <- TRUE + if(v%in%"hydroStorage") + hydroStorage <- TRUE + if(v%in%"hydroStorageMaxPower") + hydroStorageMaxPower <- TRUE + if(v%in%"reserve") + reserve <- TRUE + if(v%in%"linkCapacity") + linkCapacity <- TRUE + if(v%in%"mustRun") + mustRun <- TRUE + if(v%in%"thermalModulation") + thermalModulation <- TRUE } } # If all arguments 
are NULL, import all areas - if (is.null(areas) & is.null(links) & is.null(clusters) & is.null(clustersRes) & is.null(districts)) { + if (is.null(areas) & is.null(links) & is.null(clusters) & is.null(clustersRes) & is.null(clustersST) & is.null(districts)) { areas <- "all" } @@ -353,6 +377,7 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, links <- .checkArg(links, opts$linkList, "Links %s do not exist in the simulation.") clusters <- .checkArg(clusters, opts$areasWithClusters, "Areas %s do not exist in the simulation or do not have any thermal cluster.") clustersRes <- .checkArg(clustersRes, opts$areasWithResClusters, "Areas %s do not exist in the simulation or do not have any renewable cluster.") + clustersST <- .checkArg(clustersST, opts$areasWithSTClusters, "Areas %s do not exist in the simulation or do not have any short-term cluster.") districts <- .checkArg(districts, opts$districtList, "Districts %s do not exist in the simulation.") mcYears <- .checkArg(mcYears, opts$mcYears, "Monte-Carlo years %s have not been exported.", allowDup = TRUE) @@ -476,6 +501,12 @@ readAntares <- function(areas = NULL, links = NULL, clusters = NULL, opts, parallel = parallel) if(!is.null(res$clustersRes) && nrow(res$clustersRes) == 0) res$clustersRes <- NULL + # Import short-term clusters + res$clustersST <- .importOutputForSTClusters(clustersST, timeStep, NULL, + mcYears, showProgress, + opts, parallel = parallel) + if(!is.null(res$clustersST) && nrow(res$clustersST) == 0) res$clustersST <- NULL + # Import thermal clusters and eventually must run if (!mustRun) { res$clusters <- .importOutputForClusters(clusters, timeStep, NULL, mcYears, @@ -820,6 +851,7 @@ readAntaresAreas <- function(areas, links = TRUE, clusters = TRUE, clustersRes = links, clusters, clustersRes, + clustersST, districts, mcYears){ @@ -873,6 +905,10 @@ readAntaresAreas <- function(areas, links = TRUE, clusters = TRUE, clustersRes = if (!is.null(areas)) clustersRes <- areas else 
clustersRes <- "all" } + if ("clustersST" %in% unlist(select) & is.null(clustersST)) { + if (!is.null(areas)) clustersST <- areas + else clustersST <- "all" + } if ("mcYears" %in% unlist(select) & is.null(mcYears)) mcYears <- "all" # If all arguments are NULL, import all areas @@ -888,6 +924,7 @@ readAntaresAreas <- function(areas, links = TRUE, clusters = TRUE, clustersRes = links = links, clusters = clusters, clustersRes = clustersRes, + clustersST = clustersST, districts = districts, mcYears = mcYears, synthesis = synthesis, diff --git a/R/readBindingConstraints.R b/R/readBindingConstraints.R index 4cd4d63c..ebe65506 100644 --- a/R/readBindingConstraints.R +++ b/R/readBindingConstraints.R @@ -64,13 +64,6 @@ #' #' @export readBindingConstraints <- function(opts = simOptions()) { - if(isH5Opts(opts)){ - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(h5ReadBindingConstraints(opts)) - } else { - stop(rhdf5_message) - } - } ## # API BLOC diff --git a/R/readClusterDesc.R b/R/readClusterDesc.R index dad0515e..5f9ea0c1 100644 --- a/R/readClusterDesc.R +++ b/R/readClusterDesc.R @@ -88,22 +88,10 @@ readClusterSTDesc <- function(opts = simOptions()) { .readClusterDesc(opts = opts, dir = "st-storage/clusters") } - +#' @importFrom stats setNames .readClusterDesc <- function(opts = simOptions(), dir = "thermal/clusters") { - if(isH5Opts(opts)){ - if(dir %in% "thermal/clusters"){ - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(h5ReadClusterDesc(opts)) - } else { - stop(rhdf5_message, call. = FALSE) - } - } else { - stop("Read cluster Description from '", dir, "' not available using .h5", call. = FALSE) - } - } - path <- file.path(opts$inputPath, dir) columns <- .generate_columns_by_type(dir = dir) diff --git a/R/readInputClusters.R b/R/readInputClusters.R index 61dc812d..185f0fc3 100644 --- a/R/readInputClusters.R +++ b/R/readInputClusters.R @@ -7,7 +7,9 @@ #' project. 
But contrary to \code{\link{readAntares}}, it only reads time series #' stored in the input folder, so it can work in "input" mode. #' +#' @param areas vector of areas names for which thermal time series must be read. #' @param clusters vector of clusters names for which thermal time series must be read. +#' @param thermalAvailabilities if TRUE, return thermalAvailabilities data #' @param thermalModulation if TRUE, return thermalModulation data #' @param thermalData if TRUE, return thermalData from prepro #' @inheritParams readAntares @@ -27,13 +29,24 @@ #' \code{\link{getAreas}}, \code{\link{getLinks}} #' #' @export -readInputThermal <- function(clusters = NULL, thermalModulation = FALSE, thermalData = FALSE, +readInputThermal <- function(areas = "all", + clusters, + thermalAvailabilities = TRUE, + thermalModulation = FALSE, + thermalData = FALSE, opts = simOptions(), timeStep = c("hourly", "daily", "weekly", "monthly", "annual"), - simplify = TRUE, parallel = FALSE, + simplify = TRUE, + parallel = FALSE, showProgress = TRUE) { + if(!any(thermalAvailabilities, thermalModulation, thermalData)){ + stop("At least one type of data should be selected") + } + timeStep <- match.arg(timeStep) + areas <- tolower(unique(areas)) + clusters <- tolower(unique(clusters)) # Can the importation be parallelized ? if (parallel) { @@ -41,46 +54,73 @@ readInputThermal <- function(clusters = NULL, thermalModulation = FALSE, thermal if (!foreach::getDoParRegistered()) stop("Parallelized importation impossible. 
Please register a parallel backend, for instance with function 'registerDoParallel'") } - allAreasClusters <- readClusterDesc(opts = opts)[area %in% opts$areasWithClusters, c("area", "cluster")] - allClusters <- unique(allAreasClusters$cluster) - # Manage special value "all" - if(identical(clusters, "all")) clusters <- allClusters - - if (length(setdiff(tolower(clusters), tolower(allClusters))) > 0){ - cat(c("the following clusters are not available : ",setdiff(tolower(clusters), tolower(allClusters)))) - stop("Some clusters are not available in the areas specified") + allAreasClusters <- readClusterDesc(opts = opts)[, c("area", "cluster")] + + #To compare with area and cluster selected + allAreasClusters$lower_area <- tolower(allAreasClusters$area) + allAreasClusters$lower_cluster <- tolower(allAreasClusters$cluster) + + if (identical(areas, "all")) { + areas <- allAreasClusters$area + }else{ + # Check for unavailable areas + diff_areas <- setdiff(areas, allAreasClusters$lower_area) + if (length(diff_areas) > 0) { + stop(paste0("the following areas are not available:", diff_areas)) + } + } + # All areas selected with corresponding clusters + allAreasClusters_filtered_area <- allAreasClusters[area %in% areas] + + if (identical(clusters, "all")) { + clusters <- allAreasClusters_filtered_area$cluster + }else{ + # Check for unavailable clusters + diff_clusters <- setdiff(clusters, allAreasClusters_filtered_area$lower_cluster) + if (length(diff_clusters) > 0) { + stop(paste0("the following clusters are not available:", diff_clusters)) + } } + # Couple areas/clusters of interest. 
+ allAreasClusters_filtered <- allAreasClusters_filtered_area[cluster %in% clusters] + + # To loop + clusters <- unique(allAreasClusters_filtered$cluster) - ind_cluster <- which(tolower(allClusters) %in% tolower(clusters)) - clusters <- unique(allClusters[ind_cluster]) res <- list() # Object the function will return - thermalTS <- as.data.table(ldply(clusters, function(cl) { + # ThermalAvailabilities processing (/series) + if (thermalAvailabilities){ + thermalTS <- as.data.table(ldply(clusters, function(cl) { + areas <- allAreasClusters_filtered[cluster == cl]$area + resCl <- ldply(areas, function(x){ + filePattern <- sprintf("%s/%s/%%s/series.txt", "thermal/series", x) + mid <- .importInputTS(cl, timeStep, opts, filePattern, "ThermalAvailabilities", + inputTimeStep = "hourly", type = "matrix") + + if (is.null(mid)){ + nb_rows_ts <- opts$timeIdMax + timeId_value <- seq(1,nb_rows_ts) + tsId_value <- replicate(nb_rows_ts,1) + ThermalAvailabilities_value <- replicate(nb_rows_ts,0) + mid <- data.table("timeId" = timeId_value, "tsId" = tsId_value, "ThermalAvailabilities" = ThermalAvailabilities_value) + } + mid$area <- x + mid$cluster <- cl + mid + }) + resCl <- dcast(as.data.table(resCl), area + cluster + timeId ~ tsId, value.var = "ThermalAvailabilities") + })) - area <- unique(allAreasClusters[cluster == cl]$area) - if (length(area) > 1) warning(cl," is in more than one area") - resCl <- ldply(area, function(x){ - filePattern <- sprintf("%s/%s/%%s/series.txt", "thermal/series", x) - mid <- .importInputTS(cl, timeStep, opts, filePattern, "ThermalAvailabilities", - inputTimeStep = "hourly", type = "matrix") - if (is.null(mid)) return (data.table()) - mid$area <- x - mid$cluster <- cl - mid - }) + tsCols <- setdiff(colnames(thermalTS), c("area", "cluster", "timeId")) + setnames(thermalTS, tsCols, paste0("ts",tsCols)) + setcolorder(thermalTS, c("area", "cluster", "timeId", setdiff(names(thermalTS), c("area", "cluster", "timeId")))) - resCl <- dcast(as.data.table(resCl), 
area + cluster + timeId ~ tsId, value.var = "ThermalAvailabilities") - })) - - tsCols <- setdiff(colnames(thermalTS), c("area", "cluster", "timeId")) - setnames(thermalTS, tsCols, paste0("ts",tsCols)) - setcolorder(thermalTS, c("area", "cluster", "timeId", setdiff(names(thermalTS), c("area", "cluster", "timeId")))) - - if (nrow(thermalTS) > 0) res$thermalAvailabilities <- thermalTS + if (nrow(thermalTS) > 0) res$thermalAvailabilities <- thermalTS + } - # thermalModulation processing + # thermalModulation processing (/prepro/.../.../modulation.txt) if (thermalModulation){ - areas <- unique(allAreasClusters[cluster %in% clusters]$area) thermalMod <- as.data.table(ldply(areas, .importThermalModulation, opts = opts, timeStep = timeStep)) thermalMod <- thermalMod[cluster %in% clusters] setcolorder(thermalMod, c("area", "cluster", "timeId", setdiff(names(thermalMod), c("area", "cluster", "timeId")))) @@ -88,21 +128,21 @@ readInputThermal <- function(clusters = NULL, thermalModulation = FALSE, thermal if (nrow(thermalMod) > 0) res$thermalModulation <- thermalMod } - # thermalData processing + # thermalData processing (/prepro/.../.../data.txt) if (thermalData){ - areas <- unique(allAreasClusters[cluster %in% clusters]$area) thermalDat <- as.data.table(ldply(areas, .importThermalData, opts = opts, timeStep = timeStep)) thermalDat <- thermalDat[cluster %in% clusters] setcolorder(thermalDat, c("area", "cluster", "timeId", setdiff(names(thermalDat), c("area", "cluster", "timeId")))) if (nrow(thermalDat) > 0) res$thermalData <- thermalDat } - + if (length(res) == 0) stop("At least one argument of readInputTS has to be defined.") # Class and attributes res <- .addClassAndAttributes(res, NULL, timeStep, opts, simplify) addDateTimeColumns(res) + } @@ -115,6 +155,7 @@ readInputThermal <- function(clusters = NULL, thermalModulation = FALSE, thermal #' project. 
But contrary to \code{\link{readAntares}}, it only reads time series #' stored in the input folder, so it can work in "input" mode. #' +#' @param areas vector of RES areas names for which renewable time series must be read. #' @param clusters vector of RES clusters names for which renewable time series must be read. #' @inheritParams readAntares #' @@ -126,12 +167,17 @@ readInputThermal <- function(clusters = NULL, thermalModulation = FALSE, thermal #' \code{\link{getAreas}}, \code{\link{getLinks}} #' #' @export -readInputRES <- function(clusters = NULL, opts = simOptions(), +readInputRES <- function(areas = "all", + clusters, + opts = simOptions(), timeStep = c("hourly", "daily", "weekly", "monthly", "annual"), - simplify = TRUE, parallel = FALSE, + simplify = TRUE, + parallel = FALSE, showProgress = TRUE) { timeStep <- match.arg(timeStep) + areas <- tolower(unique(areas)) + clusters <- tolower(unique(clusters)) # Can the importation be parallelized ? if (parallel) { @@ -140,28 +186,48 @@ readInputRES <- function(clusters = NULL, opts = simOptions(), } allAreasClusters <- readClusterResDesc(opts = opts)[area %in% opts$areasWithResClusters, c("area", "cluster")] - allClusters <- unique(allAreasClusters$cluster) - # Manage special value "all" - if(identical(clusters, "all")) clusters <- allClusters - - if (length(setdiff(tolower(clusters), tolower(allClusters))) > 0){ - cat(c("the following clusters are not available : ",setdiff(tolower(clusters), tolower(allClusters)))) - stop("Some clusters are not available in the areas specified") + allAreasClusters$lower_area <- tolower(allAreasClusters$area) + allAreasClusters$lower_cluster <- tolower(allAreasClusters$cluster) + + if (identical(areas, "all")) { + areas <- allAreasClusters$area + }else{ + # Check for unavailable areas + diff_areas <- setdiff(areas, allAreasClusters$lower_area) + if (length(diff_areas) > 0) { + stop(paste0("the following areas are not available:", diff_areas)) + } + } + 
allAreasClusters_filtered_area <- allAreasClusters[area %in% areas] + + if (identical(clusters, "all")) { + clusters <- allAreasClusters_filtered_area$cluster + }else{ + # Check for unavailable clusters + diff_clusters <- setdiff(clusters, allAreasClusters_filtered_area$lower_cluster) + if (length(diff_clusters) > 0) { + stop(paste0("the following clusters are not available:", diff_clusters)) + } } + allAreasClusters_filtered <- allAreasClusters_filtered_area[cluster %in% clusters] + clusters <- unique(allAreasClusters_filtered$cluster) - ind_cluster <- which(tolower(allClusters) %in% tolower(clusters)) - clusters <- unique(allClusters[ind_cluster]) res <- list() # Object the function will return ResTS <- as.data.table(ldply(clusters, function(cl) { - area <- unique(allAreasClusters[cluster == cl]$area) - if (length(area) > 1) warning(cl," is in more than one area") - resCl <- ldply(area, function(x){ + areas <- allAreasClusters_filtered[cluster == cl]$area + resCl <- ldply(areas, function(x){ filePattern <- sprintf("%s/%s/%%s/series.txt", "renewables/series", x) mid <- .importInputTS(cl, timeStep, opts, filePattern, "production", inputTimeStep = "hourly", type = "matrix") - if (is.null(mid)) return (data.table()) + if (is.null(mid)){ + nb_rows_ts <- opts$timeIdMax + timeId_value <- seq(1,nb_rows_ts) + tsId_value <- replicate(nb_rows_ts,1) + production_value <- replicate(nb_rows_ts,0) + mid <- data.table("timeId" = timeId_value, "tsId" = tsId_value, "production" = production_value) + } mid$area <- x mid$cluster <- cl mid diff --git a/R/readLayout.R b/R/readLayout.R index ca3ffdbb..125078c9 100644 --- a/R/readLayout.R +++ b/R/readLayout.R @@ -88,13 +88,13 @@ readLayout <- function(opts = simOptions(), xyCompare = c("union","intersect")) stopifnot(class(opts) %in% "simOptions") - if(isH5Opts(opts)){ - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(h5ReadLayout(opts)) - } else { - stop(rhdf5_message) - } - } + # if(isH5Opts(opts)){ + # 
if(.requireRhdf5_Antares(stopP = FALSE)){ + # return(h5ReadLayout(opts)) + # } else { + # stop(rhdf5_message) + # } + # } #if there are no areas return NULL if(length(opts$areaList)==0 | identical(opts$areaList,"")) { diff --git a/R/setHvdcAreas.R b/R/setHvdcAreas.R index a170febf..265c5f09 100644 --- a/R/setHvdcAreas.R +++ b/R/setHvdcAreas.R @@ -10,7 +10,8 @@ #' #' #' -#' @return A list with three elements: +#' @return Object of class "antaresDataList" is returned. +#' It is a list of data.tables, each element representing one type of element (areas, links, clusters) #' #' #' @examples diff --git a/R/setSimulationPath.R b/R/setSimulationPath.R index 5df538e3..9b5498e5 100644 --- a/R/setSimulationPath.R +++ b/R/setSimulationPath.R @@ -56,8 +56,11 @@ #' \item{districtList}{Vector of the available districts.} #' \item{linkList}{Vector of the available links.} #' \item{areasWithClusters}{Vector of areas containing clusters.} +#' \item{areasWithResClusters}{Vector of areas containing clusters renewable.} +#' \item{areasWithSTClusters}{Vector of areas containing clusters storage (>=v8.6.0).} #' \item{variables}{Available variables for areas, districts and links.} #' \item{parameters}{Other parameters of the simulation.} +#' \item{binding}{Table of time series dimensions for each group (>=v8.7.0).} #' \item{timeIdMin}{ #' Minimum time id of the simulation. It is generally equal to one but can #' be higher if working on a subperiod. 
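Both `readInputThermal()` and `readInputRES()` now substitute a single all-zero time series when a cluster has no `series.txt`, instead of returning an empty table. A runnable sketch of that fallback (`opts$timeIdMax` is stubbed here; the value column is `"ThermalAvailabilities"` for thermal clusters and `"production"` for RES clusters, per the diff):

```r
library(data.table)

# Build the default matrix used when no time series exists for a cluster:
# one synthetic series (tsId = 1) of zeros over the whole time horizon
default_series <- function(nb_rows_ts, value_col = "production") {
  dt <- data.table(
    timeId = seq_len(nb_rows_ts),   # one row per time step
    tsId   = rep(1L, nb_rows_ts)    # a single synthetic time series
  )
  dt[[value_col]] <- rep(0, nb_rows_ts)
  dt
}

# timeIdMax stubbed at 24 for the sketch; a real study supplies opts$timeIdMax
mid <- default_series(24, value_col = "ThermalAvailabilities")
dim(mid)   # 24 3
```

This keeps the cluster present in the result with an explicit zero availability, so downstream `dcast()` and aggregation steps see a consistent shape across clusters.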
@@ -178,40 +181,19 @@ #' #' @rdname setSimulationPath setSimulationPath <- function(path, simulation = NULL) { - + if (missing(path)) { if (exists("choose.dir", getNamespace("utils"))) { # choose.dir is defined only on Windows - path <- utils::choose.dir(getwd(), "Select an Antares simulation directory") + path <- utils::choose.dir("./", "Select an Antares simulation directory") if (is.na(path)) stop("You have canceled the execution.") } else { stop("Please specify a path to an Antares simulation") } } - # Get study, simulation and input paths - # .h5 ? - if(grepl(".h5$", path)){ - if(file.exists(path)){ - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(setSimulationPathH5(path)) - } else { - stop(rhdf5_message) - } - } else { - stop("Invalid path argument. File .h5 not found") - } - } - - # else local file system + # # Get study, simulation and input paths res <- .getPaths(path, simulation) - if(res[1] == "H5"){ - if(.requireRhdf5_Antares(stopP = FALSE)){ - return(setSimulationPathH5(path, simulation)) - } else { - stop(rhdf5_message) - } - } res$studyName <- readIniFile(file.path(res$studyPath, "study.antares"))$antares$caption @@ -219,6 +201,8 @@ setSimulationPath <- function(path, simulation = NULL) { # the simulation folder. if (is.null(res$simPath)) { res <- append(res, .getInputOptions(res)) + if(res$antaresVersion>=870) + res <- append(res, .getDimBCGroups(res)) } else { res <- append(res, .getSimOptions(res)) } @@ -274,6 +258,11 @@ setSimulationPath <- function(path, simulation = NULL) { # Private function that extracts study, simulation and input paths from the # path specified by the user. .getPaths <- function(path, simulation) { + # check path must be length 1 + if(length(path)!=1) + stop("Only one path is required", + call. = FALSE) + path <- gsub("[/\\]$", "", path) path <- normalizePath(path, winslash = "/") @@ -292,19 +281,9 @@ setSimulationPath <- function(path, simulation = NULL) { # - 2. there is only one study in the output. Select it # - 3. 
asks the user to interactively choose one simulation - if (!file.exists(file.path(path, "study.antares"))){ - allFiles <- list.files(path) - avaliableFile <- allFiles[grep(".h5$", allFiles)] - if(length(avaliableFile) == 0) - { + if (!file.exists(file.path(path, "study.antares"))) stop("Directory is not an Antares study.") - }else{ - ##H5 mode - return("H5") - } - } - outputPath <- file.path(path, "output") outputContent <- list.dirs(outputPath, recursive = FALSE) @@ -477,6 +456,20 @@ setSimulationPath <- function(path, simulation = NULL) { areasWithResClusters <- sort(union(areaList_mc_all[hasResClusters_mc_all], areaList_mc_ind[hasResClusters_mc_ind])) + + # Areas containing short-term clusters + hasSTClusters_mc_all <- laply(file.path(dataPath_mc_all, "areas", areaList_mc_all), function(x) { + f <- list.files(x) + any(grepl("details-STstorage-", f)) + }) + hasSTClusters_mc_ind <- laply(file.path(dataPath_mc_ind, "areas", areaList_mc_ind), function(x) { + f <- list.files(x) + any(grepl("details-STstorage-", f)) + }) + + areasWithSTClusters <- sort(union(areaList_mc_all[hasSTClusters_mc_all], + areaList_mc_ind[hasSTClusters_mc_ind])) + # Available variables variables <- list() @@ -518,6 +511,7 @@ setSimulationPath <- function(path, simulation = NULL) { linksDef = linksDef, areasWithClusters = areasWithClusters, areasWithResClusters = areasWithResClusters, + areasWithSTClusters = areasWithSTClusters, variables = variables, parameters = params ) @@ -628,3 +622,60 @@ setSimulationPath <- function(path, simulation = NULL) { data.table(link = character(), from = character(), to = character()) } } + +# >= v8.7.0 to have dimension of TS for binding constraints +.getDimBCGroups <- function(list_options){ + # list files + bc_path <- file.path(list_options$inputPath, "bindingconstraints") + bc_all_files <- list.files(bc_path, full.names = TRUE) + vector_size <- file.size(bc_all_files) + + # return NULL if no BC + if(sum(vector_size)==0) + return(NULL) + else{ + # return 
NULL if no .txt files (no values) + search_values <- grepl(x = bc_all_files, pattern = ".txt") + if(!any(search_values)) + return(NULL) + + # keep only values size >0 + bc_name_values_files <- gsub('(.*)_.*', + '\\1', + grep(x = list.files(bc_path), + pattern = ".txt", + value = TRUE)) + + df_info_files <- data.table(path = bc_all_files[search_values], + size = vector_size[search_values], + bc_name = bc_name_values_files) + df_info_files <- df_info_files[size>0,] + + # extract name + group from .ini properties + properties_group <- readIniFile(file = bc_all_files[!search_values]) + + df_groups <- do.call("rbind", + lapply(properties_group, function(x){ + data.table(x$id, + x$group) + })) + names(df_groups)<-c("bc_name", "name_group") + + # merge information + df_groups <- merge(df_info_files, df_groups) + + # read + dim values files + res <- sapply(df_groups$path, function(x){ + file <- data.table::fread(file = x) + dim(file)[2] + }) + + df_groups$dim <- res + + # filter df with only one group with dim > 1 + df_groups <- unique(df_groups[, c("name_group", "dim")]) + df_groups <- df_groups[dim>1] + + return(list(binding = df_groups)) + } +} diff --git a/R/showAliases.R b/R/showAliases.R index 270e424d..64b6c580 100644 --- a/R/showAliases.R +++ b/R/showAliases.R @@ -75,9 +75,9 @@ showAliases <- function(names = NULL) { #' #' @export setAlias <- function(name, desc, select) { - if (!exists("varAliases", envir = pkgEnv)) { - assign("pkgEnv", list(), envir = pkgEnv) - } + if (!exists("varAliases", envir = pkgEnv)) + message("varAliases is not defined and will be created") + pkgEnv$varAliases[[name]] <- list(desc = desc, select = select) invisible(TRUE) diff --git a/R/utils_api.R b/R/utils_api.R index 40284089..38743b77 100644 --- a/R/utils_api.R +++ b/R/utils_api.R @@ -162,6 +162,15 @@ read_secure_json <- function(url, token = NULL, timeout = 60, config = list()) { ) areasWithResClusters <- names(hasResClusters)[hasResClusters] + + hasSTClusters <- unlist( + lapply( + 
read_secure_json(file.path(dataPath, "areas&depth=2"), ...), + function(x) any(grepl("details-STstorage-", names(x))) + ) + ) + + areasWithSTClusters <- names(hasSTClusters)[hasSTClusters] # Available variables variables <- list() @@ -212,6 +221,7 @@ read_secure_json <- function(url, token = NULL, timeout = 60, config = list()) { linksDef = linksDef, areasWithClusters = intersect(areasWithClusters, areaList), areasWithResClusters = intersect(areasWithResClusters, areaList), + areasWithSTClusters = intersect(areasWithSTClusters, areaList), variables = variables, parameters = params ) @@ -461,6 +471,10 @@ setSimulationPathAPI <- function(host, study_id, token, simulation = NULL, #' list of simulation parameters returned by the function #' \code{\link{setSimulationPathAPI}} #' @param timeout \code{numeric} API timeout (seconds). Default to 60. +#' +#' @return +#' Object of class `simOptions`, the list of simulation options returned by +#' \code{\link{setSimulationPathAPI}}, updated with the new timeout. #' #' @export #' diff --git a/R/zzz.R b/R/zzz.R index e1a7f4da..70e125cd 100644 --- a/R/zzz.R +++ b/R/zzz.R @@ -104,155 +104,10 @@ utils::globalVariables( "NODU", "min.stable.power", "thermalPmin", "name", "value", "Folder", "Mode", "Stats", "Name", "progNam", "mrgprice", "isLOLD_cum", "...To", "upstream", "downstream", "LOLD", "LOLD_data", "LOLP", "warn_for_status", - "MRG. PRICE", "H. LEV", "V2", "V1") + "MRG. PRICE", "H. LEV", "V2", "V1", "size", "ORDINAL_POSITION_BY_TOPIC", + "DETAILS_FILES_TYPE","ANTARES_DISPLAYED_NAME") ) -#----------------------------- HDF5 ------------------------------------# - - -is.installed <- function(mypkg) is.element(mypkg, utils::installed.packages()[,1]) - -rhdf5_version <- "2.24.0" -rhdf5_message <- "This function require 'rhdf5' (>= 2.24.0) package. - This is a bioconductor package. You can install it with : - source('https://bioconductor.org/biocLite.R') - biocLite('rhdf5')" - -# !! 
parameter versionCheck of requireNamespace does not work correctly, use utils::package_version instead -.requireRhdf5_Antares <- function(stopP = TRUE){ - if(.check_rhdf5(stopP = stopP)){ - if(.check_rhdf5_version(stopP = stopP)){ - return(TRUE) - } - } - return(FALSE) -} - -.stop_rhdf5_version <- function(stopP = TRUE) { - if(stopP){ - stop(rhdf5_message) - }else{ - return(FALSE) - } -} - -.check_rhdf5 <- function(stopP = TRUE){ - if(requireNamespace("rhdf5", quietly = TRUE)){ - return(TRUE) - }else{ - .stop_rhdf5_version(stopP) - } -} - -.check_rhdf5_version <- function(stopP = TRUE){ - if(utils::packageVersion("rhdf5") >= rhdf5_version){ - return(TRUE) - }else{ - .stop_rhdf5_version(stopP) - } -} - -# .addClassAndAttributes <- antaresRead:::.addClassAndAttributes - -pkgEnvAntareasH5 <- new.env() - -pkgEnvAntareasH5$varAreas <- c("OV. COST", "OP. COST", "MRG. PRICE", "CO2 EMIS.", "BALANCE", - "ROW BAL.", "PSP", "MISC. NDG", "LOAD", "H. ROR", "WIND", "SOLAR", - "NUCLEAR", "LIGNITE", "COAL", "GAS", "OIL", "MIX. FUEL", "MISC. DTG", - "H. STOR", "UNSP. ENRG", "SPIL. ENRG", "LOLD", "LOLP", "AVL DTG", - "DTG MRG", "MAX MRG", "NP COST", "NODU") - -pkgEnvAntareasH5$varAreas <- as.vector(sapply(pkgEnvAntareasH5$varAreas, function(X){paste0(X, c("", "_min", "_max", "_std"))})) - -pkgEnvAntareasH5$varDistricts <- pkgEnvAntareasH5$varAreas - -pkgEnvAntareasH5$varLinks <- c("FLOW LIN.", "UCAP LIN.", "FLOW QUAD.", - "CONG. FEE (ALG.)", "CONG. FEE (ABS.)", - "MARG. COST", "CONG. PROB +", "CONG. 
PROB -", "HURDLE COST") - -pkgEnvAntareasH5$varLinks <- as.vector(sapply(pkgEnvAntareasH5$varLinks, function(X){paste0(X, c("", "_min", "_max", "_std"))})) - -pkgEnvAntareasH5$varClusters <- c("production", "NP Cost", "NODU") - -pkgEnvAntareasH5$varAliasCreated <- list() - - -#misc -pkgEnvAntareasH5$varAliasCreated$misc$areas <- c("CHP", - "Bio_mass", - "Bio_gas", - "mustRunWasteTotal", - "GeoThermal", - "Other", - "PSP_input", - "ROW_Balance") - -pkgEnvAntareasH5$varAliasCreated$misc$districts <- c("CHP", - "Bio_mass", - "Bio_gas", - "mustRunWasteTotal", - "GeoThermal", - "Other", - "PSP_input", - "ROW_Balance") -#thermalAvailabilities -pkgEnvAntareasH5$varAliasCreated$thermalAvailabilities$clusters <- c("thermalAvailability", - "availableUnits") - - -#hydroStorage -pkgEnvAntareasH5$varAliasCreated$hydroStorage$areas <- c("hydroStorage") - -pkgEnvAntareasH5$varAliasCreated$hydroStorage$districts <- c("hydroStorage") - -#hydroStorageMaxPower -pkgEnvAntareasH5$varAliasCreated$hydroStorageMaxPower$areas <- c("hstorPMaxLow", - "hstorPMaxAvg", - "hstorPMaxHigh") - -pkgEnvAntareasH5$varAliasCreated$hydroStorageMaxPower$districts <- c("hstorPMaxLow", - "hstorPMaxAvg", - "hstorPMaxHigh") - -#reserve -pkgEnvAntareasH5$varAliasCreated$reserve$areas <- c("primaryRes", - "strategicRes", - "DSM", - "dayAhead") - -pkgEnvAntareasH5$varAliasCreated$reserve$districts <- c("primaryRes", - "strategicRes", - "DSM", - "dayAhead") - -#linkCapacity -pkgEnvAntareasH5$varAliasCreated$linkCapacity$links <- c("transCapacityDirect", - "transCapacityIndirect", - "impedances", - "hurdlesCostDirect", - "hurdlesCostIndirect") - -#mustRun -pkgEnvAntareasH5$varAliasCreated$mustRun$areas <- c("thermalPmin", - "mustRun", - "mustRunPartial", - "mustRunTotal") - -pkgEnvAntareasH5$varAliasCreated$mustRun$districts <- c("thermalPmin", - "mustRun", - "mustRunPartial", - "mustRunTotal") - -pkgEnvAntareasH5$varAliasCreated$mustRun$clusters <- c("thermalPmin", - "mustRun", - "mustRunPartial", - 
"mustRunTotal") - -pkgEnvAntareasH5$varAliasCreated$thermalModulation$clusters <- c("marginalCostModulation", - "marketBidModulation", - "capacityModulation", - "minGenModulation") - integerVariable <- as.character(unique(pkgEnv$formatName$Name[which(pkgEnv$formatName$digits == 0)])) integerVariable <- unlist(apply(expand.grid(integerVariable, c("", "_std", "_min", "_max")), 1, function(X){paste0(X, collapse = "")})) diff --git a/cran-comments.md b/cran-comments.md new file mode 100644 index 00000000..88eeb7d4 --- /dev/null +++ b/cran-comments.md @@ -0,0 +1,26 @@ + +# Release 2.7.1 + + - Fix a problem with the CRAN check (see log "M1mac"). + - Add some bug fixes (see NEWS.md) + - Fix the multiple "path" bug in `setSimulationPath()` + - Package management: + - Add a `\value` section in the documentation + - Use foo::f instead of foo:::f to access exported objects + - Replace `options(warn=-1)` with `suppressWarnings()` + - Delete references to .GlobalEnv (`assign()`) + + + +## R CMD check results OK + +0 errors | 0 warnings | 0 notes + +## rev dep check OK + +── CHECK ─────── 3 packages ── +✔ antaresEditObject 0.7.0 ── E: 0 | W: 0 | N: 0 +✔ antaresProcessing 0.18.2 ── E: 0 | W: 0 | N: 0 +✔ antaresViz 0.18.1 + + diff --git a/inst/format_output/simulation_variables_names_by_support.csv b/inst/format_output/simulation_variables_names_by_support.csv new file mode 100644 index 00000000..8e7dc32d --- /dev/null +++ b/inst/format_output/simulation_variables_names_by_support.csv @@ -0,0 +1,9 @@ +"TOPIC";"DETAILS_FILES_TYPE";"ANTARES_DISPLAYED_NAME";"ORDINAL_POSITION_BY_TOPIC";"TITLE";"ALIAS";"MIN_VERSION";"OUTPUT_DISPLAYED_NAME";"RPACKAGE_DISPLAYED_NAME" +"Generation / Thermal";"details";"DTG by plant";1;"Dispatchable Thermal Generation by Thermal Cluster (MWh)";"dtgByPlant";;"MWh";"production" +"Generation / Thermal";"details";"NODU by plant";3;"Number of Dispatched Units by Thermal Cluster";"noduByPlant";;"NODU";"NODU" +"Generation / Thermal";"details";"NP Cost by plant";2;"Non-Proportional 
Costs by Thermal Cluster (€)";"npCostByPlant";;"NP Cost - Euro";"NP Cost" +"Generation / Thermal";"details";"Profit by plant";4;"Net Profit by Thermal Cluster (€)";"profitByPlant";830;"Profit - Euro";"profit" +"Generation / Renewables";"details-res";"RES generation by plant";1;"Renewable Energy Generation by Power Plant Cluster (MWh)";"resGenerationByPlant";810;"MWh";"production" +"Generation / Short-Term Storages";"details-STstorage";"STS inj by plant";1;"Short-Term Storage Injection by Power Plant (MWh)";"stsInjByPlant";860;"P-injection - MW";"P.injection" +"Generation / Short-Term Storages";"details-STstorage";"STS lvl by plant";2;"Short-Term Storage Level by Power Plant (MWh)";"stsLvlByPlant";860;"Levels - MWh";"levels" +"Generation / Short-Term Storages";"details-STstorage";"STS withdrawal by plant";3;"Short-Term Storage Withdrawal by Power Plant (MWh)";"stsWithdrawalByPlant";860;"P-withdrawal - MW";"P.withdrawal" diff --git a/man/aggregatate_mc_all.Rd b/man/aggregatate_mc_all.Rd index 4b4a1bc4..caad54c8 100644 --- a/man/aggregatate_mc_all.Rd +++ b/man/aggregatate_mc_all.Rd @@ -56,6 +56,10 @@ aggregateResult( \item{legacy}{\code{boolean} run old version of the function} } +\value{ +Object \code{list} of data.tables, each element representing one type +of element (areas, links, clusters) +} \description{ Creation of Mc_all new (only antares > V6) } diff --git a/man/antaresRead-package.Rd b/man/antaresRead-package.Rd index 6e14dd10..79f26bc3 100644 --- a/man/antaresRead-package.Rd +++ b/man/antaresRead-package.Rd @@ -37,6 +37,7 @@ Other contributors: \item Clement Berthet [contributor] \item Kamel Kemiha [contributor] \item Abdallah Mahoudi [contributor] + \item Nicolas Boitard [contributor] \item RTE [copyright holder] } diff --git a/man/dot-getOptionsH5.Rd b/man/dot-getOptionsH5.Rd deleted file mode 100644 index 96801eeb..00000000 --- a/man/dot-getOptionsH5.Rd +++ /dev/null @@ -1,15 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit 
documentation in R/h5_setSimulationPathH5.R -\name{.getOptionsH5} -\alias{.getOptionsH5} -\title{Get H5 options} -\usage{ -.getOptionsH5(path) -} -\arguments{ -\item{path}{\code{character} path of h5 file} -} -\description{ -Get H5 options -} -\keyword{internal} diff --git a/man/dot-h5ReadAntares.Rd b/man/dot-h5ReadAntares.Rd deleted file mode 100644 index d7890a01..00000000 --- a/man/dot-h5ReadAntares.Rd +++ /dev/null @@ -1,71 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/h5_antaresReadH5.R -\name{.h5ReadAntares} -\alias{.h5ReadAntares} -\title{Read data} -\usage{ -.h5ReadAntares( - path, - areas = NULL, - links = NULL, - clusters = NULL, - districts = NULL, - mcYears = NULL, - misc = FALSE, - thermalAvailabilities = FALSE, - hydroStorage = FALSE, - hydroStorageMaxPower = FALSE, - reserve = FALSE, - linkCapacity = FALSE, - mustRun = FALSE, - thermalModulation = FALSE, - timeStep = "hourly", - select = NULL, - showProgress = TRUE, - simplify = TRUE, - perf = FALSE -) -} -\arguments{ -\item{path}{{character} path of h5file to load} - -\item{areas}{see \link[antaresRead]{readAntares}} - -\item{links}{see \link[antaresRead]{readAntares}} - -\item{clusters}{see \link[antaresRead]{readAntares}} - -\item{districts}{see \link[antaresRead]{readAntares}} - -\item{mcYears}{see \link[antaresRead]{readAntares}} - -\item{misc}{see \link[antaresRead]{readAntares}} - -\item{thermalAvailabilities}{see \link[antaresRead]{readAntares}} - -\item{hydroStorage}{see \link[antaresRead]{readAntares}} - -\item{hydroStorageMaxPower}{see \link[antaresRead]{readAntares}} - -\item{reserve}{see \link[antaresRead]{readAntares}} - -\item{linkCapacity}{see \link[antaresRead]{readAntares}} - -\item{mustRun}{see \link[antaresRead]{readAntares}} - -\item{thermalModulation}{see \link[antaresRead]{readAntares}} - -\item{timeStep}{see \link[antaresRead]{readAntares}} - -\item{select}{see \link[antaresRead]{readAntares}} - -\item{showProgress}{see 
\link[antaresRead]{readAntares}} - -\item{simplify}{see \link[antaresRead]{readAntares}} - -\item{perf}{\code{boolean}, eval performance during developpement time, to remove} -} -\description{ -Read data -} -\keyword{internal} diff --git a/man/dot-writeAntaresH5Fun.Rd b/man/dot-writeAntaresH5Fun.Rd deleted file mode 100644 index 5b58e29c..00000000 --- a/man/dot-writeAntaresH5Fun.Rd +++ /dev/null @@ -1,32 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/h5_writeH5ByYear.R -\name{.writeAntaresH5Fun} -\alias{.writeAntaresH5Fun} -\title{Convert antares output to h5 file} -\usage{ -.writeAntaresH5Fun( - path, - timeSteps, - opts, - writeMcAll, - compress, - misc, - thermalAvailabilities, - hydroStorage, - hydroStorageMaxPower, - reserve, - linkCapacity, - mustRun, - thermalModulation, - removeVirtualAreas, - storageFlexibility, - production, - reassignCosts, - newCols, - supressMessages -) -} -\description{ -Convert antares output to h5 file -} -\keyword{internal} diff --git a/man/dot-writeIni.Rd b/man/dot-writeIni.Rd index 743af8ad..c898a2d3 100644 --- a/man/dot-writeIni.Rd +++ b/man/dot-writeIni.Rd @@ -2,28 +2,28 @@ % Please edit documentation in R/aggregateResult.R \name{.writeIni} \alias{.writeIni} -\title{Write ini file from list obtain by antaresRead:::readIniFile and modify by user} +\title{Write ini file from a list obtained by antaresRead::readIniFile and modified by the user} \usage{ .writeIni(listData, pathIni, overwrite = FALSE) } \arguments{ -\item{listData}{\code{list}, modified list obtained by antaresRead:::readIniFile.} +\item{listData}{\code{list}, modified list obtained by antaresRead::readIniFile.} \item{pathIni}{\code{Character}, Path to ini file.} \item{overwrite}{logical, should file be overwritten if already exist?} } \description{ -Write ini file from list obtain by antaresRead:::readIniFile and modify by user +Write ini file from a list obtained by antaresRead::readIniFile and modified by the user } \examples{ \dontrun{ 
pathIni <- "D:/exemple_test/settings/generaldata.ini" -generalSetting <- antaresRead:::readIniFile(pathIni) +generalSetting <- antaresRead::readIniFile(pathIni) generalSetting$output$synthesis <- FALSE writeIni(generalSetting, pathIni) } - } +\keyword{internal} diff --git a/man/hvdcModification.Rd b/man/hvdcModification.Rd index 3af6b72c..02d1fa07 100644 --- a/man/hvdcModification.Rd +++ b/man/hvdcModification.Rd @@ -13,6 +13,10 @@ hvdcModification(data, removeHvdcAreas = TRUE, reafectLinks = FALSE) \item{reafectLinks}{\code{boolean} .} } +\value{ +Object of class "antaresDataList" is returned. +It is a list of data.tables, each element representing one type of element (areas, links, clusters) +} \description{ usage for hvdc } diff --git a/man/isH5Opts.Rd b/man/isH5Opts.Rd deleted file mode 100644 index b2f35aa9..00000000 --- a/man/isH5Opts.Rd +++ /dev/null @@ -1,14 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/h5utils.R -\name{isH5Opts} -\alias{isH5Opts} -\title{Test if opts is h5} -\usage{ -isH5Opts(opts) -} -\arguments{ -\item{opts}{, opts} -} -\description{ -Test if the value returned by setSimulationPath() is referring to an h5 file -} diff --git a/man/ponderateMcAggregation.Rd b/man/ponderateMcAggregation.Rd index 4b29c825..367db546 100644 --- a/man/ponderateMcAggregation.Rd +++ b/man/ponderateMcAggregation.Rd @@ -13,6 +13,9 @@ ponderateMcAggregation(x, fun = weighted.mean, ...) \item{...}{\code{args} others args pass to fun} } +\value{ +Object of class "antaresDataTable". +} \description{ Mcyear aggregation weigthed by wd } diff --git a/man/readAntares.Rd b/man/readAntares.Rd index 92f153bc..2431cc23 100644 --- a/man/readAntares.Rd +++ b/man/readAntares.Rd @@ -10,6 +10,7 @@ readAntares( clusters = NULL, districts = NULL, clustersRes = NULL, + clustersST = NULL, bindingConstraints = FALSE, misc = FALSE, thermalAvailabilities = FALSE, @@ -53,6 +54,11 @@ import results at renewable cluster level. 
If \code{NULL} no cluster is imported special value \code{"all"} tells the function to import renewable clusters from all areas.} +\item{clustersST}{Vector containing the name of the areas for which you want to +import results at short-term cluster level. If \code{NULL} no cluster is imported. The +special value \code{"all"} tells the function to import short-term clusters from all +areas.} + \item{bindingConstraints}{Should binding constraints be imported (v8.4+)?} \item{misc}{Vector containing the name of the areas for which you want to @@ -131,6 +137,8 @@ data.tables, each element representing one type of element (areas, links, clusters) } \description{ +\ifelse{html}{\figure{badge_api_ok.svg}{options: alt='Antares API OK'}}{Antares API: \strong{OK}} + \code{readAntares} is a swiss-army-knife function used to read almost every possible time series of an antares Project at any desired time resolution (hourly, daily, weekly, monthly or annual). diff --git a/man/readInputRES.Rd b/man/readInputRES.Rd index a3176644..0261a942 100644 --- a/man/readInputRES.Rd +++ b/man/readInputRES.Rd @@ -5,7 +5,8 @@ \title{Read Input RES time series} \usage{ readInputRES( - clusters = NULL, + areas = "all", + clusters, opts = simOptions(), timeStep = c("hourly", "daily", "weekly", "monthly", "annual"), simplify = TRUE, @@ -14,6 +15,8 @@ ) } \arguments{ +\item{areas}{vector of RES area names for which renewable time series must be read.} + \item{clusters}{vector of RES clusters names for which renewable time series must be read.} \item{opts}{list of simulation parameters returned by the function diff --git a/man/readInputThermal.Rd b/man/readInputThermal.Rd index b59424e6..b2eac352 100644 --- a/man/readInputThermal.Rd +++ b/man/readInputThermal.Rd @@ -5,7 +5,9 @@ \title{Read Input thermal time series} \usage{ readInputThermal( - clusters = NULL, + areas = "all", + clusters, + thermalAvailabilities = TRUE, thermalModulation = FALSE, thermalData = FALSE, opts = 
simOptions(), @@ -16,8 +18,12 @@ readInputThermal( ) } \arguments{ +\item{areas}{vector of area names for which thermal time series must be read.} + \item{clusters}{vector of clusters names for which thermal time series must be read.} +\item{thermalAvailabilities}{if TRUE, return thermalAvailabilities data} + \item{thermalModulation}{if TRUE, return thermalModulation data} \item{thermalData}{if TRUE, return thermalData from prepro} diff --git a/man/setHvdcAreas.Rd b/man/setHvdcAreas.Rd index fb30659e..f271fd2d 100644 --- a/man/setHvdcAreas.Rd +++ b/man/setHvdcAreas.Rd @@ -12,7 +12,8 @@ setHvdcAreas(data, areas) \item{areas}{\code{character} hvdc areas list.} } \value{ -A list with three elements: +Object of class "antaresDataList" is returned. +It is a list of data.tables, each element representing one type of element (areas, links, clusters) } \description{ This function add hvdc attribute diff --git a/man/setRam.Rd b/man/setRam.Rd index 9a6c2420..e315a0a8 100644 --- a/man/setRam.Rd +++ b/man/setRam.Rd @@ -9,6 +9,9 @@ setRam(x) \arguments{ \item{x}{\code{numeric} RAM limit in Go} } +\value{ +\code{list} (returned by \code{\link[=options]{options()}}) +} \description{ This function specify RAM limit (in Go) of the value returned by \link[antaresRead]{readAntares}. 
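Since the new `\value` section above states that `setRam()` returns the list produced by `options()`, the previous limit can be captured and restored. A hedged sketch, not part of the patch:

```r
library(antaresRead)

old <- setRam(10)   # cap readAntares() at roughly 10 Go
# ... read large outputs here ...
options(old)        # restore the previous RAM limit
```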
} diff --git a/man/setSimulationPath.Rd b/man/setSimulationPath.Rd index 2045eead..f085a136 100644 --- a/man/setSimulationPath.Rd +++ b/man/setSimulationPath.Rd @@ -62,8 +62,11 @@ each Monte-Carlo simulation.} \item{districtList}{Vector of the available districts.} \item{linkList}{Vector of the available links.} \item{areasWithClusters}{Vector of areas containing clusters.} +\item{areasWithResClusters}{Vector of areas containing renewable clusters.} +\item{areasWithSTClusters}{Vector of areas containing short-term storage clusters (>=v8.6.0).} \item{variables}{Available variables for areas, districts and links.} \item{parameters}{Other parameters of the simulation.} +\item{binding}{Table of time series dimensions for each binding constraints group (>=v8.7.0).} \item{timeIdMin}{ Minimum time id of the simulation. It is generally equal to one but can be higher if working on a subperiod. diff --git a/man/setTimeoutAPI.Rd b/man/setTimeoutAPI.Rd index d158c487..b464ead2 100644 --- a/man/setTimeoutAPI.Rd +++ b/man/setTimeoutAPI.Rd @@ -12,6 +12,10 @@ setTimeoutAPI(opts, timeout) \item{timeout}{\code{numeric} API timeout (seconds). Default to 60.} } +\value{ +Object of class \code{simOptions}, the list of simulation options returned by +\code{\link{setSimulationPathAPI}}, updated with the new timeout. 
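A minimal sketch of the `setTimeoutAPI()` usage documented above; the host, study id and token below are placeholders, not values from the patch, and an AntaREST server is assumed:

```r
library(antaresRead)

# Connect to an API-based study (placeholder host and study id)
opts <- setSimulationPathAPI(host = "http://localhost:8080",
                             study_id = "my-study-id",
                             token = NULL,
                             simulation = "input")

# Returns the updated simOptions object with a 120-second timeout
opts <- setTimeoutAPI(opts, timeout = 120)
```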
+} \description{ Change API Timeout } diff --git a/man/writeAntaresH5.Rd b/man/writeAntaresH5.Rd deleted file mode 100644 index f76baab0..00000000 --- a/man/writeAntaresH5.Rd +++ /dev/null @@ -1,130 +0,0 @@ -% Generated by roxygen2: do not edit by hand -% Please edit documentation in R/h5_writeH5ByYear.R -\name{writeAntaresH5} -\alias{writeAntaresH5} -\title{Convert antares output to h5 file} -\usage{ -writeAntaresH5( - path = NULL, - timeSteps = c("hourly", "daily", "weekly", "monthly", "annual"), - opts = simOptions(), - writeMcAll = TRUE, - compress = 1, - misc = FALSE, - thermalAvailabilities = FALSE, - hydroStorage = FALSE, - hydroStorageMaxPower = FALSE, - reserve = FALSE, - linkCapacity = FALSE, - mustRun = FALSE, - thermalModulation = FALSE, - allData = FALSE, - writeAllSimulations = FALSE, - nbCores = 4, - removeVirtualAreas = FALSE, - storageFlexibility = NULL, - production = NULL, - reassignCosts = FALSE, - newCols = TRUE, - overwrite = FALSE, - supressMessages = FALSE -) -} -\arguments{ -\item{path}{\code{character} folder where h5 file will be write (default NULL)} - -\item{timeSteps}{\code{character} timeSteps} - -\item{opts}{\code{list} of simulation parameters returned by the function \link{setSimulationPath}. 
Default to \code{antaresRead::simOptions()}} - -\item{writeMcAll}{\code{boolean} write mc-all} - -\item{compress}{\code{numeric} compress level} - -\item{misc}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{thermalAvailabilities}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{hydroStorage}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{hydroStorageMaxPower}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{reserve}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{linkCapacity}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{mustRun}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{thermalModulation}{\code{boolean} see \link[antaresRead]{readAntares}} - -\item{allData}{\code{boolean} add all data with a single call (writeMcAll, misc, thermalAvailabilities, hydroStorage, hydroStorageMaxPower -reserve, linkCapacity, mustRun, thermalModulation).} - -\item{writeAllSimulations}{\code{boolean}, write all simulations of your antares study.} - -\item{nbCores}{\code{numeric}, number of cores to use, only used if writeAllSimulations is TRUE} - -\item{removeVirtualAreas}{\code{boolean}, remove virtual areas, see \link[antaresRead]{removeVirtualAreas}} - -\item{storageFlexibility}{\code{character or list}, see \link[antaresRead]{removeVirtualAreas}} - -\item{production}{\code{character or list}, see \link[antaresRead]{removeVirtualAreas}} - -\item{reassignCosts}{\code{boolean or list}, see \link[antaresRead]{removeVirtualAreas}} - -\item{newCols}{\code{boolean or list}, see \link[antaresRead]{removeVirtualAreas}} - -\item{overwrite}{\code{boolean or list}, overwrite old file} - -\item{supressMessages}{\code{boolean}, supress messages from \link[antaresRead]{readAntares} and \link[antaresRead]{removeVirtualAreas}} -} -\description{ -Convert antares output to h5 file -} -\examples{ - -\dontrun{ -# Write simulation one by one -setSimulationPath("C:/Users/MyUser/Mystudy", 1) 
-writeAntaresH5(path="PATH_TO_YOUR_STUDY") - -# Write all simulations -setSimulationPath("C:/Users/MyUser/Mystudy") -writeAntaresH5(path="PATH_TO_YOUR_STUDY", writeAllSimulations = TRUE) - -# Choose timestep to write -setSimulationPath("C:/Users/MyUser/Mystudy", 1) -writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly") - -# Write with additionnal information -writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly", - misc = TRUE, thermalAvailabilities = TRUE, - hydroStorage = TRUE, hydroStorageMaxPower = TRUE, reserve = TRUE, - linkCapacity = TRUE, mustRun = TRUE, thermalModulation = TRUE) - -# Write all data with a shorcut -writeAntaresH5(path="PATH_TO_YOUR_STUDY", allData = TRUE) - -#Remove virtuals areas - -writeAntaresH5(path="PATH_TO_YOUR_STUDY", timeSteps = "hourly", overwrite = TRUE, - writeMcAll = FALSE, removeVirtualAreas = TRUE, - storageFlexibility = "psp in-2", - production = NULL, reassignCosts =FALSE, newCols = TRUE) - -#Remove virtuals areas more than one call -writeAntaresH5( - path="PATH_TO_YOUR_STUDY", - timeSteps = "hourly", - overwrite = TRUE, - writeMcAll = FALSE, - removeVirtualAreas = TRUE, - storageFlexibility = list("psp out", "psp in-2"), - production = list(NULL, NULL), - reassignCosts = list(TRUE, FALSE), - newCols = list(FALSE, TRUE) - ) - - -} -} diff --git a/revdep/README.md b/revdep/README.md index 16a7e2e3..a9ec6cea 100644 --- a/revdep/README.md +++ b/revdep/README.md @@ -10,19 +10,23 @@ |collate |French_France.utf8 | |ctype |French_France.utf8 | |tz |Europe/Paris | -|date |2024-05-27 | +|date |2024-06-26 | |rstudio |2023.12.0+369 Ocean Storm (desktop) | |pandoc |NA | # Dependencies -|package |old |new |Δ | -|:-----------|:-----|:-----|:--| -|antaresRead |2.6.1 |2.7.0 |* | -|cachem |NA |1.0.8 |* | -|fastmap |NA |1.1.1 |* | -|openssl |NA |2.1.2 |* | -|stringi |NA |1.8.3 |* | +|package |old |new |Δ | +|:-----------|:-----|:------|:--| +|antaresRead |2.7.0 |2.7.1 |* | +|cachem |NA |1.1.0 |* | +|cli |NA |3.6.3 |* | 
+|crayon |NA |1.5.3 |* | +|digest |NA |0.6.36 |* | +|fastmap |NA |1.2.0 |* | +|openssl |NA |2.2.0 |* | +|rlang |NA |1.1.4 |* | +|stringi |NA |1.8.4 |* | # Revdeps diff --git a/tests/testthat/helper_init.R b/tests/testthat/helper_init.R index 6f99f670..bc627223 100644 --- a/tests/testthat/helper_init.R +++ b/tests/testthat/helper_init.R @@ -1,19 +1,11 @@ #Copyright © 2016 RTE Réseau de transport d’électricité -options("antaresRead.skip_h5_on_cran" = TRUE) -options("antaresRead.skip_h5" = TRUE) -options("antaresRead.skip_h5_on_travis" = TRUE) -options("antaresRead.skip_h5_on_appveyor" = TRUE) - - # Copy the test study in a temporary folder path0 <- tempdir() -#sourcedir <- system.file("inst/testdata", package = "antaresRead") sourcedir <- system.file("testdata", package = "antaresRead") -testH5 <- TRUE -#if(sourcedir == ""){ } + ## force tests to be executed if in dev release which we define as ## having a sub-release, eg 0.9.15.5 is one whereas 0.9.16 is not @@ -21,7 +13,6 @@ testH5 <- TRUE if (length(strsplit(packageDescription("antaresRead")$Version, "\\.")[[1]]) > 3) { Sys.setenv("RunAllAntaresReadTests"="yes") } -.runH5Test <- FALSE #Sys.getenv("RunAllAntaresReadTests") == "yes" compareValue <- function(A, B, res = NULL){ if(class(A)[3] == "list"){ @@ -65,65 +56,13 @@ if (sourcedir != "") { untar(file.path(sourcedir, studies[s]), exdir = file.path(path0, studies_names[s])) } - if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - - path_v6 <- file.path(path0, "antares-test-study-v6") - opts <- setSimulationPath(file.path(path_v6, "/test_case")) - suppressMessages({ - suppressWarnings({ - - #On cran we have only 2 threads so nbCore <- 1 - if(.runH5Test){ - nbCoresTestHelper <- 4 - }else{ - nbCoresTestHelper <- 1 - } - writeAntaresH5(path = path_v6, - misc = TRUE, thermalAvailabilities = TRUE, - hydroStorage = TRUE, hydroStorageMaxPower = TRUE, reserve = TRUE, - linkCapacity = TRUE,mustRun = TRUE, thermalModulation = TRUE, - overwrite=TRUE, nbCores = 
nbCoresTestHelper) - }) - }) - - #if you change the tar file then you must also change this file - # h5file <- "20190321-2217eco-test.h5" - h5file <- "20180423-1734eco-test.h5" - - deprintize<-function(f){ - return(function(...) {capture.output(w<-f(...));return(w);}); - } - - silentf <- deprintize(showAliases) - - alias <- silentf()$name - alias <- as.character(alias) - - - - timeStep <- c("hourly", "daily", "weekly", "monthly", "annual") - - assign("silentf", silentf, envir = globalenv()) - assign("tpDir", path_v6, envir = globalenv()) - assign("pathF", file.path(path_v6, "/", h5file), envir = globalenv()) - assign("h5file", h5file, envir = globalenv()) - assign("alias", alias, envir = globalenv()) - assign("compareValue", compareValue, envir = globalenv()) - assign("timeStep", timeStep, envir = globalenv()) - assign("optsG", opts, envir = globalenv()) - - } + studyPathS <- file.path(path0, studies_names, "test_case") - assign( - x = "studyPathS", - value = file.path(path0, studies_names, "test_case"), - envir = globalenv() - ) + nweeks <- 2 + nmonths <- 2 + firstDay <- 113 + lastDay <- 126 - assign("nweeks", 2, envir = globalenv()) - assign("nmonths", 2, envir = globalenv()) - assign("firstDay", 113, envir = globalenv()) - assign("lastDay", 126, envir = globalenv()) } @@ -148,26 +87,21 @@ if(sourcedir_V8 != ""){ dir.create(file.path(path0, studies_names[s])) untar(file.path(sourcedir_V8, studies[s]), exdir = file.path(path0, studies_names[s])) } - assign( - x = "studyPathSV8", - value = file.path(path0, studies_names, "test_case"), - envir = globalenv() - ) + + studyPathSV8 <- file.path(path0, studies_names, "test_case") } - - -skip_according_to_options <- function() { - if (isTRUE(getOption("antaresRead.skip_h5_on_cran"))) - skip_on_cran() - if (isTRUE(getOption("antaresRead.skip_h5"))) - skip("h5 test skipped") - if (isTRUE(getOption("antaresRead.skip_h5_on_travis"))) - skip_on_travis() - if (isTRUE(getOption("antaresRead.skip_h5_on_appveyor"))) - 
skip_on_appveyor() -} +# skip_according_to_options <- function() { +# if (isTRUE(getOption("antaresRead.skip_h5_on_cran"))) +# skip_on_cran() +# if (isTRUE(getOption("antaresRead.skip_h5"))) +# skip("h5 test skipped") +# if (isTRUE(getOption("antaresRead.skip_h5_on_travis"))) +# skip_on_travis() +# if (isTRUE(getOption("antaresRead.skip_h5_on_appveyor"))) +# skip_on_appveyor() +# } pathAPI <- "http://localhost:8080/studies/antaresStd/" @@ -183,15 +117,14 @@ setup_study_empty <- function(dir_path){ full.names = TRUE) # choose pattern studies <- studies[grep(x = studies, - pattern = "v87")] + pattern = "empty_study_v870")] # untar etude path_sty <- file.path(tempdir(), "study_empty_latest_version") untar(studies[1], exdir = path_sty) # version latest study_temp_path <- file.path(path_sty, "test_case") - assign("study_empty_latest_version", - file.path(path_sty, - "test_case"), - envir = globalenv()) -} + study_empty_latest_version <- file.path(path_sty, + "test_case") + return(study_empty_latest_version) + } diff --git a/tests/testthat/test-.timeIdToDate.R b/tests/testthat/test-.timeIdToDate.R index a55d4aad..f6ceb07e 100644 --- a/tests/testthat/test-.timeIdToDate.R +++ b/tests/testthat/test-.timeIdToDate.R @@ -1,5 +1,5 @@ context(".timeIdToDate") -suppressWarnings(suppressPackageStartupMessages(require(lubridate))) +# suppressWarnings(suppressPackageStartupMessages(require(lubridate))) describe(".timeIdToDate", { diff --git a/tests/testthat/test-aggregateResult.R b/tests/testthat/test-aggregateResult.R index 3c7dddc7..1a09aea6 100644 --- a/tests/testthat/test-aggregateResult.R +++ b/tests/testthat/test-aggregateResult.R @@ -1,35 +1,35 @@ -context("Function aggregateResult") - -skip("") - -pathstd <- tempdir() - -opts <- setSimulationPath(studyPathSV8) - - -test_that("test parallel aggregate", { - - mc_all_path <- file.path(opts$simDataPath, "mc-all", "grid", "digest.txt") - - mdate_original <- file.mtime(mc_all_path) - - parAggregateMCall(opts, nbcl = 2, mcYears = 
c(1,2), verbose = 0) - - expect_true(file.exists(mc_all_path)) - expect_false(file.mtime(mc_all_path) == mdate_original) - expect_true(file.size(mc_all_path) > 0) - - mdate_original <- file.mtime(mc_all_path) - - parAggregateMCall(opts, nbcl = 1, mcYears = c(1,2), verbose = 0) - - expect_true(file.exists(mc_all_path)) - expect_false(file.mtime(mc_all_path) == mdate_original) - expect_true(file.size(mc_all_path) > 0) - -}) +# context("Function aggregateResult") +# +# skip("") +# +# pathstd <- tempdir() +# +# opts <- setSimulationPath(studyPathSV8) +# +# +# test_that("test parallel aggregate", { +# +# mc_all_path <- file.path(opts$simDataPath, "mc-all", "grid", "digest.txt") +# +# mdate_original <- file.mtime(mc_all_path) +# +# parAggregateMCall(opts, nbcl = 2, mcYears = c(1,2), verbose = 0) +# +# expect_true(file.exists(mc_all_path)) +# expect_false(file.mtime(mc_all_path) == mdate_original) +# expect_true(file.size(mc_all_path) > 0) +# +# mdate_original <- file.mtime(mc_all_path) +# +# parAggregateMCall(opts, nbcl = 1, mcYears = c(1,2), verbose = 0) +# +# expect_true(file.exists(mc_all_path)) +# expect_false(file.mtime(mc_all_path) == mdate_original) +# expect_true(file.size(mc_all_path) > 0) +# +# }) diff --git a/tests/testthat/test-h5ReadAntares.R b/tests/testthat/test-h5ReadAntares.R deleted file mode 100644 index 1672b2c0..00000000 --- a/tests/testthat/test-h5ReadAntares.R +++ /dev/null @@ -1,175 +0,0 @@ -context(".h5ReadAntares") - -if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - sapply(pkgEnv$allCompute, function(X){ - test_that(paste0("Select : ", X, " timeStep : "),{ - skip_according_to_options() - - param1 <- list(path = pathF, areas = "a", mcYears = 1, select = X) - param2 <- list(path = pathF, areas = "a", mcYears = 1) - param2[[X]] <- TRUE - testthat::expect_true(identical(do.call(.h5ReadAntares, param1), - do.call(.h5ReadAntares, param2))) - }) - }) - - - ##Test - paramComparaison <- list( - areasAll = list(areas = "all"), - linksAll = 
list(links = "all"), - clustersAll = list(clusters = "all"), - districtsAll = list(districts = "all"), - areasAllMc1 = list(areas = "all", mcYears = 1), - linksAllMc1 = list(links = "all", mcYears = 1), - clustersAllMc1 = list(clusters = "all", mcYears = 1), - districtsAllMc1 = list(districts = "all", mcYears = 1), - areasAllMcAll = list(areas = "all", mcYears = "all"), - linksAllMcAll = list(links = "all", mcYears = "all"), - clustersAllMcAll = list(clusters = "all", mcYears = "all"), - districtsAllMcAll = list(districts = "all", mcYears = "all"), - areasaMcAll = list(area = "a", mcYears = "all"), - linksBCMcAll = list(links = "b - c", mcYears = "all"), - clustersaMcAll = list(clusters = "a", mcYears = "all"), - districtsABMcAll = list(districts = "a and b", mcYears = "all"), - linksFolowIn = list(links = "all", select = "FLOW LIN."), - areasSelectAll = list(areas = "all", select = "all"), - linksSelectAll = list(links = "all", select = "all"), - clusterSelectAll = list(clusters = "all", select = "all"), - districtsSelectAll = list(districts = "all", select = "all"), - allData = list(areas = "all", links = "all", clusters = "all", districts = "all"), - allDataMc1 = list(areas = "all", links = "all", clusters = "all", districts = "all", mcYears = 1), - allDataMc2 = list(areas = "all", links = "all", clusters = "all", districts = "all", mcYears = 2), - allDataMcAll = list(areas = "all", links = "all", clusters = "all", districts = "all", mcYears = "all"), - hourly = list(areas = "all", links = "all", clusters = "all", districts = "all", timeStep = "hourly"), - daily = list(areas = "all", links = "all", clusters = "all", districts = "all", timeStep = "daily"), - weekly = list(areas = "all", links = "all", clusters = "all", districts = "all", timeStep = "weekly"), - monthly = list(areas = "all", links = "all", clusters = "all", districts = "all", timeStep = "monthly"), - annual = list(areas = "all", links = "all", clusters = "all", districts = "all", timeStep = 
"annual") - ) - - sapply(names(paramComparaison), function(Z){ - test_that(paste(Z), { - skip_according_to_options() - - param1 <- paramComparaison[[Z]] - param2 <- param1 - - ##Silent - param1$showProgress <- FALSE - param2$perf <- FALSE - - ##End silent - param2$path <- pathF - - DF1 <- suppressWarnings({do.call(readAntares, param1)}) - DF2 <- do.call(.h5ReadAntares, param2) - expect_true(all(unlist(compareValue(DF1, DF2)))) - }) - }) - - test_that("Show perf", { - skip_according_to_options() - - param1 <- list(areas = "all") - param2 <- param1 - - ##Silent - param1$showProgress <- FALSE - param2$perf <- FALSE - - ##End silent - param2$path <- pathF - DF1 <- suppressWarnings({do.call(readAntares, param1)}) - DF2 <- do.call(.h5ReadAntares, param2) - expect_true(all(unlist(compareValue( DF1,DF2)))) - }) - - test_that("Show perf multi request", { - skip_according_to_options() - - param1 <- list(areas = "all", links = "all") - param2 <- param1 - - ##Silent - param1$showProgress <- FALSE - param2$perf <- FALSE - - ##End silent - param2$path <- pathF - DF1 <- suppressWarnings({do.call(readAntares, param1)}) - DF2 <- do.call(.h5ReadAntares, param2) - expect_true(all(unlist(compareValue( DF1,DF2)))) - }) - - - - #Test alias request - for(i in alias){ - paramComparaison[[i]] <- list(select = i) - } - - - #Test remove - for(i in alias){ - var <- strsplit(as.character(silentf(i)$select[1]), ",")[[1]] - var <- gsub("^ ", "",var) - for(j in var) - { - minus <- paste0("-", j) - paramComparaison[[paste(i,minus)]] <- list(select = c(i, minus)) - } - } - - cgtrl <- sapply("hourly", function(Z){ - ctrl <- sapply(names(paramComparaison), function(X){ - oldw <- getOption("warn") - options(warn = -1) - - test_that(paste(X, Z), { - skip_according_to_options() - - param1 <- paramComparaison[[X]] - param1$timeStep <- Z - param2 <- param1 - - ##Silent - param1$showProgress <- FALSE - param2$perf <- FALSE - - ##End silent - param2$path <- pathF - DF1 <- 
suppressWarnings({do.call(readAntares, param1)}) - DF2 <- do.call(.h5ReadAntares, param2) - if(!is(DF1, "antaresDataList")) - { - setorderv(DF1, getIdCols(DF1)) - }else{ - for(i in 1:length(DF1)){ - setorderv(DF1[[i]], getIdCols(DF1[[i]])) - } - } - if(!is(DF2, "antaresDataList")) - { - setorderv(DF2, getIdCols(DF2)) - }else{ - for(i in 1:length(DF2)){ - setorderv(DF2[[i]], getIdCols(DF2[[i]])) - } - } - expect_true(all(unlist(compareValue( DF1,DF2)))) - }) - invisible() - options(warn = oldw) - }) - invisible() - }) - - test_that("Bad path", { - skip_according_to_options() - - expect_error(.h5ReadAntares("toto"), "File toto not exist.") - - }) - -} diff --git a/tests/testthat/test-h5_nodata.R b/tests/testthat/test-h5_nodata.R deleted file mode 100644 index 4b0de6e0..00000000 --- a/tests/testthat/test-h5_nodata.R +++ /dev/null @@ -1,14 +0,0 @@ -context("h5 : No data") - -if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - test_that("h5 : no data", { - skip_according_to_options() - - rhdf5::h5createFile("testnodata.h5") - rhdf5::h5createGroup("testnodata.h5", "hourly") - DF1 <- .h5ReadAntares("testnodata.h5", areas = "all", links = "all", clusters = "all", districts = "all") - rhdf5::h5closeAll() - expect_true(length(DF1) == 0) - unlink("testnodata.h5") - }) -} diff --git a/tests/testthat/test-h5_readInputs.R b/tests/testthat/test-h5_readInputs.R deleted file mode 100644 index ebc9927b..00000000 --- a/tests/testthat/test-h5_readInputs.R +++ /dev/null @@ -1,42 +0,0 @@ -context("h5 : read inputs") - -if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - test_that("h5 : h5ReadBindingConstraints", { - skip_according_to_options() - - optsH5 <- setSimulationPathH5(tpDir, h5file) - re1 <- h5ReadBindingConstraints(optsH5) - re2 <- antaresRead::readBindingConstraints(opts) - for(i in 1:length(re1)){ - re1[[i]]$values <- data.frame(re1[[i]]$values ) - re2[[i]]$values <- data.frame(re2[[i]]$values ) - - } - expect_true(identical(re1, re2)) - }) - - - test_that("h5 : 
h5ReadLayout", { - skip_according_to_options() - - optsH5 <- setSimulationPathH5(tpDir, h5file) - re1 <- h5ReadLayout(optsH5) - re2 <- antaresRead::readLayout(opts) - - for(i in 1:length(re1)){ - re1[[i]] <- data.frame(re1[[i]]) - re2[[i]] <- data.frame(re2[[i]]) - } - expect_true(identical(re1, re2)) - }) - - test_that("h5 : h5ReadClusterDesc", { - skip_according_to_options() - - optsH5 <- setSimulationPathH5(tpDir, h5file) - re1 <- data.frame(h5ReadClusterDesc(optsH5)) - re2 <- data.frame(antaresRead::readClusterDesc(opts)) - expect_true(identical(re1, re2)) - }) - -} diff --git a/tests/testthat/test-h5_setSimulationPathH5.R b/tests/testthat/test-h5_setSimulationPathH5.R deleted file mode 100644 index 32cfe327..00000000 --- a/tests/testthat/test-h5_setSimulationPathH5.R +++ /dev/null @@ -1,19 +0,0 @@ -context("h5 : setSimulationPathH5") - -if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - test_that("h5 : identical setSimulationPathH5", { - skip_according_to_options() - - identical(setSimulationPathH5(tpDir), setSimulationPathH5(tpDir, 1)) - expect_identical(setSimulationPathH5(tpDir), setSimulationPathH5(tpDir, 1)) - }) - - test_that("h5 : Error no file", { - skip_according_to_options() - - expect_error(setSimulationPathH5("badfilename"), - "Invalid path argument. File not found. 
Must be a .h5 file or a repertory with .h5 file(s)", fixed=TRUE) - }) - - -} diff --git a/tests/testthat/test-h5_write.R b/tests/testthat/test-h5_write.R deleted file mode 100644 index c823c3b7..00000000 --- a/tests/testthat/test-h5_write.R +++ /dev/null @@ -1,61 +0,0 @@ -context("h5 : write data") - -# for use travis in parallel -Sys.unsetenv("R_TESTS") - -if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){ - tpDir2 <- gsub("[\\]", "/", tpDir) - tptpDir <- file.path(tpDir2, "/tpDir") - - dir.create(tptpDir) - test_that("h5 : write more than one studies mono thread", { - skip_according_to_options() - - writeAntaresH5(path = tptpDir, timeSteps = "annual", - writeAllSimulations = TRUE, nbCores = 1, opts = optsG) - - }) - VV <- utils::sessionInfo() - DoPar <- as.numeric(paste0(VV$R.version$major, VV$R.version$minor))>34 - - if(DoPar) - { - test_that("h5 : overwrite + alldata + multi-thread", { - skip_on_cran() - skip_according_to_options() - - writeAntaresH5(path = tptpDir, overwrite = TRUE, allData = TRUE, - timeSteps = "annual", writeAllSimulations = TRUE, - nbCores = 2, opts = optsG) - filesTptpDir<-dir(tptpDir) - expect_true(TRUE %in% grepl("h5", filesTptpDir)) - - }) - } - - test_that("h5 : overwrite + removeVirtualAreas", { - skip_according_to_options() - - writeAntaresH5(path = tptpDir, - overwrite = TRUE, - opts = optsG, - timeSteps = "hourly", - removeVirtualAreas = TRUE, - storageFlexibility = "c", - nbCores = 1) - filesTptpDir<-dir(tptpDir) - expect_true(TRUE %in% grepl("h5", filesTptpDir)) - - }) - - - unlink(tptpDir, recursive = TRUE) - - test_that("h5 : Bad path", { - skip_according_to_options() - - expect_error( writeAntaresH5(path='badPath'), "Folder badPath not found.") - - }) - -} diff --git a/tests/testthat/test-importOutputForClusters.R b/tests/testthat/test-importOutputForClusters.R new file mode 100644 index 00000000..33d1d730 --- /dev/null +++ b/tests/testthat/test-importOutputForClusters.R @@ -0,0 +1,62 @@ +# #Copyright © 2016 RTE Réseau de 
transport d’électricité +# +# context("Functions .importOutput") +# +# path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE) +# +# opts <- setSimulationPath(path_study_test,simulation="20240105-0934eco") +# +# test_that(".importOutputForClusters is ok", { +# +# OutputForClusters <- .importOutputForClusters( +# areas="fr", +# timeStep="annual", +# showProgress=FALSE, +# parallel=FALSE, +# opts=opts +# ) +# +# required_order_simulation_variables <- c("production","NP Cost","NODU","profit") +# +# order_simulation_variables <- colnames(OutputForClusters)[colnames(OutputForClusters) %in% required_order_simulation_variables] +# +# expect_equal(order_simulation_variables,required_order_simulation_variables) +# expect_equal(nrow(OutputForClusters),1) +# }) +# +# +# test_that(".importOutputForResClusters is ok", { +# +# OutputForResClusters <- .importOutputForResClusters( +# areas="fr", +# timeStep="annual", +# showProgress=FALSE, +# parallel=FALSE, +# opts=opts +# ) +# +# required_order_simulation_variables <- c("production") +# +# order_simulation_variables <- colnames(OutputForResClusters)[colnames(OutputForResClusters) %in% required_order_simulation_variables] +# +# expect_equal(order_simulation_variables,required_order_simulation_variables) +# expect_equal(nrow(OutputForResClusters),1) +# }) +# +# test_that(".importOutputForSTClusters is ok", { +# +# OutputForSTClusters <- .importOutputForSTClusters( +# areas="fr", +# timeStep="annual", +# showProgress=FALSE, +# parallel=FALSE, +# opts=opts +# ) +# +# required_order_simulation_variables <- c("P.injection","levels","P.withdrawal") +# +# order_simulation_variables <- colnames(OutputForSTClusters)[colnames(OutputForSTClusters) %in% required_order_simulation_variables] +# +# expect_equal(order_simulation_variables,required_order_simulation_variables) +# expect_equal(nrow(OutputForSTClusters),1) +# }) diff --git a/tests/testthat/test-ponderate.R b/tests/testthat/test-ponderate.R index 
e27e4b3c..50534f8a 100644 --- a/tests/testthat/test-ponderate.R +++ b/tests/testthat/test-ponderate.R @@ -1,55 +1,55 @@ -context("mcWeights") - -skip("readAntares empty when mcYears is NULL") - -setSimulationPath(studyPathSV8) - -sapply(c("hourly", "daily", "weekly", "monthly", "annual"), function(tim){ - correct_mc_weights <- 1:2 - output_correct_weights_hourly <- readAntares(areas = "all", links = "all", clusters = "all", mcYears = "all", timeStep = tim, mcWeights = correct_mc_weights, showProgress = F) - output_synthetic_hourly <- readAntares(areas = "all", links = "all", clusters = "all", mcYears = NULL, timeStep = tim, showProgress = FALSE) - - ### Areas ### - output_weighted_areas <- output_correct_weights_hourly$areas - output_synthetic_areas <- output_synthetic_hourly$areas - data.table::setcolorder(output_weighted_areas, colnames(output_synthetic_areas)) - - # output_weighted_areas <- output_weighted_areas[, round(.SD, 2), .SDcols = sapply(output_weighted_areas, is.numeric)] - # output_synthetic_areas <- output_synthetic_areas[, .SD, .SDcols = sapply(output_synthetic_areas, is.numeric)] - test <- output_synthetic_areas[, .SD, .SDcols = !getIdCols(output_weighted_areas)] - output_weighted_areas[, .SD, .SDcols = !getIdCols(output_weighted_areas)] - test <- round(test, 0) - test$PSP <- test$`MISC. 
NDG` <- NULL - non_null_cols <- colMeans(abs(test)) - non_null_cols <- which(non_null_cols > 1) - test2 <- test[, .SD, .SDcols = non_null_cols] - expect_true(ncol(test2) == 0) - - ### Links ### - output_weighted_links <- output_correct_weights_hourly$links - output_synthetic_links <- output_synthetic_hourly$links - setcolorder(output_weighted_links, colnames(output_synthetic_links)) - - output_weighted_links <- output_weighted_links[, round(.SD, 2), .SDcols = sapply(output_weighted_links, is.numeric)] - output_synthetic_links <- output_synthetic_links[, .SD, .SDcols = sapply(output_synthetic_links, is.numeric)] - test <- output_synthetic_links - output_weighted_links - test <- round(test, 0) - non_null_cols <- colMeans(abs(test)) - non_null_cols <- which(non_null_cols > 20) - test2 <- test[, .SD, .SDcols = non_null_cols] - expect_true(ncol(test2) == 0) - - ### Clusters ### - output_weighted_clusters <- output_correct_weights_hourly$clusters - output_synthetic_clusters <- output_synthetic_hourly$clusters - setcolorder(output_weighted_clusters, colnames(output_synthetic_clusters)) - - output_weighted_clusters <- output_weighted_clusters[, round(.SD, 2), .SDcols = sapply(output_weighted_clusters, is.numeric)] - output_synthetic_clusters <- output_synthetic_clusters[, .SD, .SDcols = sapply(output_synthetic_clusters, is.numeric)] - test <- output_synthetic_clusters - output_weighted_clusters - test <- round(test, 0) - non_null_cols <- colMeans(abs(test)) - non_null_cols <- which(non_null_cols > 1) - test2 <- test[, .SD, .SDcols = non_null_cols] - expect_true(ncol(test2) == 0) - -}) +# context("mcWeights") +# +# skip("readAntares empty when mcYears is NULL") +# +# setSimulationPath(studyPathSV8) +# +# sapply(c("hourly", "daily", "weekly", "monthly", "annual"), function(tim){ +# correct_mc_weights <- 1:2 +# output_correct_weights_hourly <- readAntares(areas = "all", links = "all", clusters = "all", mcYears = "all", timeStep = tim, mcWeights = correct_mc_weights, 
showProgress = F) +# output_synthetic_hourly <- readAntares(areas = "all", links = "all", clusters = "all", mcYears = NULL, timeStep = tim, showProgress = FALSE) +# +# ### Areas ### +# output_weighted_areas <- output_correct_weights_hourly$areas +# output_synthetic_areas <- output_synthetic_hourly$areas +# data.table::setcolorder(output_weighted_areas, colnames(output_synthetic_areas)) +# +# # output_weighted_areas <- output_weighted_areas[, round(.SD, 2), .SDcols = sapply(output_weighted_areas, is.numeric)] +# # output_synthetic_areas <- output_synthetic_areas[, .SD, .SDcols = sapply(output_synthetic_areas, is.numeric)] +# test <- output_synthetic_areas[, .SD, .SDcols = !getIdCols(output_weighted_areas)] - output_weighted_areas[, .SD, .SDcols = !getIdCols(output_weighted_areas)] +# test <- round(test, 0) +# test$PSP <- test$`MISC. NDG` <- NULL +# non_null_cols <- colMeans(abs(test)) +# non_null_cols <- which(non_null_cols > 1) +# test2 <- test[, .SD, .SDcols = non_null_cols] +# expect_true(ncol(test2) == 0) +# +# ### Links ### +# output_weighted_links <- output_correct_weights_hourly$links +# output_synthetic_links <- output_synthetic_hourly$links +# setcolorder(output_weighted_links, colnames(output_synthetic_links)) +# +# output_weighted_links <- output_weighted_links[, round(.SD, 2), .SDcols = sapply(output_weighted_links, is.numeric)] +# output_synthetic_links <- output_synthetic_links[, .SD, .SDcols = sapply(output_synthetic_links, is.numeric)] +# test <- output_synthetic_links - output_weighted_links +# test <- round(test, 0) +# non_null_cols <- colMeans(abs(test)) +# non_null_cols <- which(non_null_cols > 20) +# test2 <- test[, .SD, .SDcols = non_null_cols] +# expect_true(ncol(test2) == 0) +# +# ### Clusters ### +# output_weighted_clusters <- output_correct_weights_hourly$clusters +# output_synthetic_clusters <- output_synthetic_hourly$clusters +# setcolorder(output_weighted_clusters, colnames(output_synthetic_clusters)) +# +# output_weighted_clusters <- 
output_weighted_clusters[, round(.SD, 2), .SDcols = sapply(output_weighted_clusters, is.numeric)] +# output_synthetic_clusters <- output_synthetic_clusters[, .SD, .SDcols = sapply(output_synthetic_clusters, is.numeric)] +# test <- output_synthetic_clusters - output_weighted_clusters +# test <- round(test, 0) +# non_null_cols <- colMeans(abs(test)) +# non_null_cols <- which(non_null_cols > 1) +# test2 <- test[, .SD, .SDcols = non_null_cols] +# expect_true(ncol(test2) == 0) +# +# }) diff --git a/tests/testthat/test-readAntares_STclusters.R b/tests/testthat/test-readAntares_STclusters.R new file mode 100644 index 00000000..e0c0dc67 --- /dev/null +++ b/tests/testthat/test-readAntares_STclusters.R @@ -0,0 +1,15 @@ +#Copyright © 2016 RTE Réseau de transport d’électricité + + + + +# test_that("ST clusters importation is ok", { +# path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE) +# opts <- setSimulationPath(path_study_test,simulation="20240105-0934eco") +# +# clustersST <- readAntares(clustersST="all",timeStep="annual",opts = opts)$clustersST +# expect_true(all(opts$areasWithSTClusters %in% clustersST$area)) +# +# clustersST_fr <- readAntares(clustersST="fr",timeStep="annual",opts = opts)$clustersST +# expect_true("fr"==unique(clustersST_fr$area)) +# }) diff --git a/tests/testthat/test-readBindingConstraints.R b/tests/testthat/test-readBindingConstraints.R index 2e9128d1..14710db6 100644 --- a/tests/testthat/test-readBindingConstraints.R +++ b/tests/testthat/test-readBindingConstraints.R @@ -49,12 +49,12 @@ test_that("test if exist data value file", { # >= v870 ---- # read latest version of empty study -setup_study_empty(sourcedir_empty_study) +study_empty_latest_version <- setup_study_empty(sourcedir_empty_study) opts_test_empty <- antaresRead::setSimulationPath(study_empty_latest_version, "input") # read latest version study -path_study_test <- grep(pattern = "87", x = studyPathSV8, value = TRUE) +path_study_test <- grep(pattern = 
"test_case_study_v870", x = studyPathSV8, value = TRUE) opts_study_test <- setSimulationPath(path_study_test, simulation = "input") ## empty study test ---- diff --git a/tests/testthat/test-readClusterDesc.R b/tests/testthat/test-readClusterDesc.R index 3b938df7..75929a29 100644 --- a/tests/testthat/test-readClusterDesc.R +++ b/tests/testthat/test-readClusterDesc.R @@ -1,6 +1,6 @@ # read study ---- # latest version -path_study_test <- grep(pattern = "87", x = studyPathSV8, value = TRUE) +path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE) opts_study_test <- setSimulationPath(path_study_test, simulation = "input") # all version ---- diff --git a/tests/testthat/test-readInputClusters.R b/tests/testthat/test-readInputClusters.R index 7d45eafd..f9d6f8d8 100644 --- a/tests/testthat/test-readInputClusters.R +++ b/tests/testthat/test-readInputClusters.R @@ -5,31 +5,85 @@ sapply(studyPathS, function(studyPath){ opts <- setSimulationPath(studyPath) + test_that("Thermal availabilities importation works", { + # read /series files (default) + input <- readInputThermal(clusters = "peak_must_run_partial", + showProgress = FALSE) + expect_is(input, "antaresDataTable") + expect_gt(nrow(input), 0) + expect_equal(nrow(input) %% (24 * 7 * nweeks), 0) + }) - if(!isH5Opts(opts)){ - - test_that("Thermal availabilities importation works", { - input <- readInputThermal(clusters = "peak_must_run_partial", showProgress = FALSE) - expect_is(input, "antaresDataTable") - expect_gt(nrow(input), 0) - expect_equal(nrow(input) %% (24 * 7 * nweeks), 0) - }) - - test_that("Thermal modulation importation works", { - input <- readInputThermal(clusters = "peak_must_run_partial", thermalModulation = TRUE, showProgress = FALSE) - expect_is(input, "antaresDataList") - expect_is(input$thermalModulation, "antaresDataTable") - expect_gt(nrow(input$thermalModulation), 0) - expect_equal(nrow(input$thermalModulation) %% (24 * 7 * nweeks), 0) - }) - - test_that("Thermal data 
importation works", { - input <- readInputThermal(clusters = "peak_must_run_partial", thermalModulation = TRUE, showProgress = FALSE) - expect_is(input, "antaresDataList") - expect_is(input$thermalModulation, "antaresDataTable") - expect_gt(nrow(input$thermalModulation), 0) - expect_equal(nrow(input$thermalModulation) %% (24 * 7 * nweeks), 0) - }) + test_that("Thermal modulation importation works", { + # read /series + /prepro/modulation.txt + input <- readInputThermal(clusters = "peak_must_run_partial", + thermalModulation = TRUE, + showProgress = FALSE) + expect_is(input, "antaresDataList") + expect_is(input$thermalModulation, "antaresDataTable") + expect_gt(nrow(input$thermalModulation), 0) + expect_equal(nrow(input$thermalModulation) %% (24 * 7 * nweeks), 0) + }) + + test_that("Thermal data importation works", { + # read /series + /prepro/data.txt + input <- readInputThermal(clusters = "peak_must_run_partial", + thermalData = TRUE, + showProgress = FALSE) + expect_is(input, "antaresDataList") + expect_is(input$thermalData, "antaresDataTable") + expect_gt(nrow(input$thermalData), 0) + expect_equal(nrow(input$thermalData) %% (24 * 7 * nweeks), 0) + }) + + test_that("Wrong area", { + expect_error(readInputThermal(areas = "BAD_AREA", + clusters = "peak_must_run_partial"), + regexp = "areas are not available") + }) + + test_that("Wrong cluster", { + expect_error(readInputThermal(areas = "all", + clusters = "BAD_CLUSTER"), + regexp = "clusters are not available") + }) + + test_that("No thermal data selected", { + expect_error(readInputThermal(clusters = "peak_must_run_partial", + thermalAvailabilities = FALSE), + regexp = "one type of data should be selected") + }) - } +}) + +# >= v870 ---- +## RES ---- +test_that("test reading TS RES", { + + # read latest version study + path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE) + setSimulationPath(path_study_test, simulation = "input") + + res_clust_properties <- readClusterResDesc() 
+ + test_that("read one cluster", { + # read /series files (default) + input <- readInputRES(areas = "all", + clusters = unique(res_clust_properties$cluster)[1]) + expect_is(input, "antaresDataTable") + expect_gt(nrow(input), 0) + expect_equal(nrow(input) %% (24 * 7 * nweeks), 0) + }) + + test_that("read various clusters", { + nb_cluster <- length(unique(res_clust_properties$cluster)) + # read /series files (default) + input <- readInputRES(areas = "all", + clusters = unique(res_clust_properties$cluster)) + expect_is(input, "antaresDataTable") + expect_gt(nrow(input), 0) + expect_equal(nrow(input) %% (24 * 7 * nweeks), 0) + }) + + }) diff --git a/tests/testthat/test-readInputTS.R b/tests/testthat/test-readInputTS.R index 1fbc3e81..d47b8ccb 100644 --- a/tests/testthat/test-readInputTS.R +++ b/tests/testthat/test-readInputTS.R @@ -7,9 +7,6 @@ sapply(studyPathS, function(studyPath){ opts <- setSimulationPath(studyPath) - -if(!isH5Opts(opts)){ - test_that("Load importation works", { input <- readInputTS(load = "all", showProgress = FALSE) expect_is(input, "antaresDataTable") @@ -122,11 +119,10 @@ test_that("readInputTs must work if we change opts$timeIdMin and opts$timeIdMax" }) -} }) # read latest version study -path_study_test <- grep(pattern = "87", x = studyPathSV8, value = TRUE) +path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE) opts_study_test <- setSimulationPath(path_study_test, simulation = "input") # >= v860---- diff --git a/tests/testthat/test-readLayout.R b/tests/testthat/test-readLayout.R index 0870c4c5..eb2269e0 100644 --- a/tests/testthat/test-readLayout.R +++ b/tests/testthat/test-readLayout.R @@ -18,8 +18,6 @@ describe("readLayout", { }) - if(!isH5Opts(opts)) - { districtDefFile <- file.path(opts$inputPath, "areas/sets.ini") it("still works when there is no district (#50)", { @@ -38,7 +36,6 @@ describe("readLayout", { file.remove(districtDefFile) file.rename(paste0(districtDefFile, ".back"), districtDefFile) - } 
})
diff --git a/tests/testthat/test-read_optim_criteria.R b/tests/testthat/test-read_optim_criteria.R
index 0c313bce..ec33f190 100644
--- a/tests/testthat/test-read_optim_criteria.R
+++ b/tests/testthat/test-read_optim_criteria.R
@@ -5,8 +5,6 @@ sapply(studyPathS, function(studyPath){
 
 opts <- setSimulationPath(studyPath, 1)
 
-if(!isH5Opts(opts))
-{
 describe("readOptimCriteria", {
   it("returns an antaresDataTable", {
     optimCrit <- readOptimCriteria(opts)
@@ -16,5 +14,4 @@ describe("readOptimCriteria", {
     expect_equal(attr(optimCrit, "timeStep"), "weekly")
   })
 })
-}
 })
diff --git a/tests/testthat/test-setup.R b/tests/testthat/test-setSimulationPath.R
similarity index 77%
rename from tests/testthat/test-setup.R
rename to tests/testthat/test-setSimulationPath.R
index 42a6746a..a4a3c0c2 100644
--- a/tests/testthat/test-setup.R
+++ b/tests/testthat/test-setSimulationPath.R
@@ -2,10 +2,11 @@
 
 context("Setup functions")
 
+# v710----
 sapply(studyPathS, function(studyPath){
 
-suppressPackageStartupMessages(require(lubridate))
-suppressWarnings(suppressPackageStartupMessages(require(data.table)))
+# suppressPackageStartupMessages(require(lubridate))
+# suppressWarnings(suppressPackageStartupMessages(require(data.table)))
 
 # Reading of study options #####################################################
@@ -45,21 +46,17 @@ test_that("R option 'antares' is set", {
   opts <- setSimulationPath(studyPath)
   expect_identical(opts, getOption("antares"))
 })
-opts <- setSimulationPath(studyPath)
-if(!isH5Opts(opts))
-{
+
 test_that("setSimulationPath fails if path is not an antares Ouput directory", {
   expect_error(setSimulationPath(file.path(studyPath, "../..")))
 })
-}
 
 opts <- setSimulationPath(studyPath)
-if(!isH5Opts(opts))
-{
+
 test_that("setSimulationPath can read info in input", {
   opts <- setSimulationPath(studyPath, "input")
   for (v in c("studyName", "areaList", "districtList", "linkList",
-              "areasWithClusters", "timeIdMin", "timeIdMax", "start", 
+              "areasWithClusters", "timeIdMin", "timeIdMax", "start",
               "firstWeekday")) {
     expect_equal(opts[[v]], trueOpts[[v]])
   }
@@ -67,20 +64,20 @@ test_that("setSimulationPath can read info in input", {
 
 test_that("setSimulationPath works if synthesis and some MC years are not saved (#31)", {
   opts <- setSimulationPath(studyPath)
-  
-  file.rename(file.path(opts$simDataPath, "mc-all"), 
+
+  file.rename(file.path(opts$simDataPath, "mc-all"),
               file.path(opts$simDataPath, "mc-all_back"))
-  file.rename(file.path(opts$simDataPath, "mc-ind/00001"), 
+  file.rename(file.path(opts$simDataPath, "mc-ind/00001"),
              file.path(opts$simDataPath, "mc-ind/00001_back"))
-  
+
   opts <- setSimulationPath(studyPath)
   trueOpts$synthesis <- FALSE
   trueOpts$mcYears <- 2
   expect_equal(opts[names(trueOpts)], trueOpts)
-  
-  file.rename(file.path(opts$simDataPath, "mc-all_back"), 
+
+  file.rename(file.path(opts$simDataPath, "mc-all_back"),
               file.path(opts$simDataPath, "mc-all"))
-  file.rename(file.path(opts$simDataPath, "mc-ind/00001_back"), 
+  file.rename(file.path(opts$simDataPath, "mc-ind/00001_back"),
              file.path(opts$simDataPath, "mc-ind/00001"))
 })
@@ -120,7 +117,7 @@ test_that("select simulation with negative index", {
   opts <- setSimulationPath(studyPath, -2)
   expect_equal(opts[names(trueOpts)], trueOpts)
 })
-# 
+#
 # test_that("select simulation interactively (number)", {
 #   with_mock(
 #     `base::scan` = function(...) {"1"},
@@ -130,7 +127,7 @@ test_that("select simulation with negative index", {
 #     }
 #   )
 # })
-# 
+#
 # test_that("select simulation interactively (name)", {
 #   with_mock(
 #     scan = function(...) {"eco-test"},
@@ -141,6 +138,11 @@ test_that("select simulation with negative index", {
 #   )
 # })
 
+test_that("Bad multiple path", {
+  expect_error(setSimulationPath(studyPathSV8, simulation = "input"),
+               regexp = "Only one path is required")
+})
+
 # Remove fake study
 unlink(file.path(studyPath, "output/30000101-0000fake"), TRUE, TRUE)
@@ -155,12 +157,12 @@ describe("No simulation", {
   it("Error if the user tries to read simulation results", {
     expect_error(setSimulationPath(studyPath, 1))
   })
-  
+
  it("User can read input data", {
     expect_silent(opts <- setSimulationPath(studyPath, 0))
     expect_equal(opts$mode, "Input")
   })
-  
+
  it("Read input data by default", {
     expect_silent(opts <- setSimulationPath(studyPath))
     expect_equal(opts$mode, "Input")
@@ -175,5 +177,34 @@ test_that("Folder 'maps' is not interpreted as a study (#49)", {
   expect_silent(opts <- setSimulationPath(studyPath, -1))
 })
 
-}
+
+test_that("No meta info areas with a ST cluster < 860", {
+  opts <- setSimulationPath(studyPath, "input")
+  expect_true(length(opts$areasWithSTClusters)==0)
+})
+
+test_that("No meta info binding study < 870", {
+  opts <- setSimulationPath(studyPath, "input")
+  expect_null(opts$binding)
+})
+
+})
+
+# v860----
+test_that("New meta data for areas with a ST cluster", {
+  # read latest version study
+  path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE)
+  opts_study_test <- setSimulationPath(path_study_test, simulation = "20240105-0934eco")
+
+  expect_false(is.null(opts_study_test$areasWithSTClusters))
+})
+
+
+# v870----
+test_that("New meta data for group dimension of binding constraints", {
+  # read latest version study
+  path_study_test <- grep(pattern = "test_case_study_v870", x = studyPathSV8, value = TRUE)
+  opts_study_test <- setSimulationPath(path_study_test, simulation = "input")
+
+  expect_is(opts_study_test$binding, "data.table")
 })
diff --git a/tests/testthat/test-viewAntares.R b/tests/testthat/test-viewAntares.R
index 7e0290f5..6a67ba3c 100644
--- a/tests/testthat/test-viewAntares.R
+++ b/tests/testthat/test-viewAntares.R
@@ -1,45 +1,45 @@
-context("viewAntares")
-
-sapply(studyPathS, function(studyPath){
-
-opts <- setSimulationPath(studyPath)
-
-# describe("viewAntares", {
-#   with_mock(
-#     `utils::View` = function(x, title) {print(title)},
-#     {
-#       it("views an antaresDataTable", {
-#         data <- readAntares(timeStep = "annual", select = "LOAD", showProgress = FALSE)
-#         expect_output(viewAntares(data), "data")
-#       })
-#
-#       it("views an antaresDataList", {
-#         data <- readAntares("all", "all", timeStep = "annual",
-#                             select = c("LOAD", "FLOW LIN."), showProgress = FALSE)
-#         expect_output(viewAntares(data), "data\\$areas")
-#         expect_output(viewAntares(data), "data\\$links")
-#       })
-#     }
-#   )
+# context("viewAntares")
+#
+# sapply(studyPathS, function(studyPath){
 #
-#   with_mock(
-#     `utils::View` = function(x, title) {return(NULL)},
-#     {
-#       it("displays a warning if there is more than 100 columns", {
-#         data <- readAntares(timeStep = "annual", showProgress = FALSE)
-#         expect_warning(viewAntares(data), "100 columns")
-#       })
-#     }
-#   )
+# opts <- setSimulationPath(studyPath)
+#
+# # describe("viewAntares", {
+# #   with_mock(
+# #     `utils::View` = function(x, title) {print(title)},
+# #     {
+# #       it("views an antaresDataTable", {
+# #         data <- readAntares(timeStep = "annual", select = "LOAD", showProgress = FALSE)
+# #         expect_output(viewAntares(data), "data")
+# #       })
+# #
+# #       it("views an antaresDataList", {
+# #         data <- readAntares("all", "all", timeStep = "annual",
+# #                             select = c("LOAD", "FLOW LIN."), showProgress = FALSE)
+# #         expect_output(viewAntares(data), "data\\$areas")
+# #         expect_output(viewAntares(data), "data\\$links")
+# #       })
+# #     }
+# #   )
+# #
+# #   with_mock(
+# #     `utils::View` = function(x, title) {return(NULL)},
+# #     {
+# #       it("displays a warning if there is more than 100 columns", {
+# #         data <- readAntares(timeStep = "annual", showProgress = FALSE)
+# #         expect_warning(viewAntares(data), "100 columns")
+# #       })
+# #     }
+# #   )
+# # })
 # })
-})
-if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){
-  rhdf5::h5closeAll()
-}
-
-if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){
-  if(dir.exists(tpDir))
-  {
-    unlink(tpDir, recursive = TRUE)
-  }
-}
+# if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){
+#   rhdf5::h5closeAll()
+# }
+#
+# if(.requireRhdf5_Antares(stopP = FALSE) & .runH5Test){
+#   if(dir.exists(tpDir))
+#   {
+#     unlink(tpDir, recursive = TRUE)
+#   }
+# }
diff --git a/vignettes/antaresH5.Rmd b/vignettes/antaresH5.Rmd
deleted file mode 100644
index ee292c1a..00000000
--- a/vignettes/antaresH5.Rmd
+++ /dev/null
@@ -1,143 +0,0 @@
----
-title: "Use h5 file format with 'antaresRead'"
-author: "Titouan Robert"
-date: "`r Sys.Date()`"
-output: rmarkdown::html_vignette
-css: vignette.css
-vignette: >
-  %\VignetteIndexEntry{Use h5 file format with 'antaresRead'}
-  %\VignetteEngine{knitr::rmarkdown}
-  %\VignetteEncoding{UTF-8}
----
-
-
-This document describes use of the `antaresRead` package with h5 file format.
-
-
-## Installation
-
-To use **h5** file format, you must install packages ``zlibbioc`` and ``rhdf5 (>= 2.20.0)`` from **bioconductor**. If your proxy allows it, you can use :
-
-With R < 3.5.0, use:
-
-```r
-source("https://bioconductor.org/biocLite.R')
-biocLite("rhdf5')
-```
-
-With R >= 3.5.0 :
-
-```r
-install.packages("BiocManager")
-BiocManager::install("rhdf5")
-```
-
-
-An other solution is to install manually these two packages from zip file. They are available here :
-https://bioconductor.org/packages/release/bioc/html/zlibbioc.html
-
-
-## Avantages of h5 transformation
-
-With h5 file, ``antaresRead`` will be running faster. Your reading will take between **2 and 4 times** less time. An other avantage is the **compression**, a study can pass from **15Go** in txt to **2.5Go** in h5. Furthermore, a h5 transformation convert lot of txt file in a single h5 file, it will be **easier to move** it, a copy/paste of a study with 137 thousand files (15Go) takes 45 minutes in txt and 30 seconds in h5.
-
-Finally, you can use in the same way function of ``antaresProcessing`` and ``antaresViz`` packages. You can also use ``addProcessingH5`` function to add columns to your h5 file.
-
-
-## Comparaison between Conventional and H5
-
-![](../man/figures/h5_comparison.PNG){width=100%}
-
-
-## Write h5 file from antares study
-
-The function ```writeAntaresH5``` is used to write h5 file from antares output. You can add inputs data like in ``readAntares``.
-
-### Basic use
-
-```r
-setSimulationPath("study_path/output/simulation_name")
-writeAntaresH5()
-```
-
-### Add intput data
-
-```r
-setSimulationPath("study_path/output/simulation_name")
-writeAntaresH5(misc = TRUE, thermalAvailabilities = TRUE,
-               hydroStorage = TRUE, hydroStorageMaxPower = TRUE, reserve = TRUE,
-               linkCapacity = TRUE, mustRun = TRUE, thermalModulation = TRUE,
-               writeAllSimulations = TRUE)
-
-#with a shorcut
-writeAntaresH5(allData = TRUE,
-               writeAllSimulations = TRUE)
-
-```
-
-### Overwrite
-
-By default ``writeAntaresH5`` returns an error if the **h5** file already exists. You can use ``overwrite = TRUE``.
-```r
-setSimulationPath("study_path/output/simulation_name")
-writeAntaresH5(overwrite = TRUE)
-```
-
-### Write all your simulations in parallel
-
-```r
-setSimulationPath("study_path/output/simulation_name")
-writeAntaresH5(nbCores = 5, writeAllSimulations = TRUE)
-```
-
-## Read data from your h5 file
-
-You can use ```setSimulationPath``` and ```readAntares``` in the same way than on normal study.
-
-```r
-setSimulationPath("mySim.h5")
-readAntares()
-readAntares(areas = "all", links = "all")
-readAntares(areas = "all", mcYears = "all")
-readAntares(timeStep = "weekly")
-```
-
-
-
-## Add columns to your h5 file (hourly data only)
-
-You can use function ```addProcessingH5``` from ``antaresProcessing`` package.
-
-```
- library(antaresProcessing)
- opts <- setSimulationPath("mySim.h5")
- addProcessingH5(opts = opts, mcY = "mcInd",
-                 addDownwardMargin = TRUE,
-                 evalAreas = list(Tota = "`H. STOR` + `MISC. DTG`",
-                                  Tota2 = "`NODU` + `NP COST` + 1")
-                 )
-```
-
-For read data after adding **addDownwardMargin**, you can use alias ``Out_addDownwardMargin``, use ```showAliases()``` to see them.
-
-```
-readAntares(mcYears = "all", select = c("Tota", "Tota2"))
-readAntares(mcYears = "all", select = "Out_addDownwardMargin")
-```
-
-## Use h5 opts for vizualisation
-
-It's possible to use opts from **h5** with all ``antaresViz`` modules.
-
-```
-library(antaresViz)
-opts <- setSimulationPath("mySim.h5")
-
-prodStack(opts)
-plot(opts)
-exchangesStack(opts)
-
-#Run on more than one opts :
-opts2 <- setSimulationPath("mySim2.h5")
-plot(list(opts, opts2))
-```