Declaring HADES-wide release 2023Q3
schuemie committed Oct 13, 2023
1 parent e768143 commit b380e75
Showing 10 changed files with 3,467 additions and 181 deletions.
2 changes: 1 addition & 1 deletion Rmd/index.Rmd
@@ -24,7 +24,7 @@ See the Support section for instructions on [setting up the R environment](rSetu

## Technology

HADES is a set of R packages that execute against data in a database server. HADES supports traditional database systems (PostgreSQL, Microsoft SQL Server, and Oracle), parallel data warehouses (Microsoft APS, IBM Netezza, and Amazon RedShift), as well as 'Big Data' platforms (Hadoop through Apache Impala, Apache Spark, and Google BigQuery). HADES does *not* support MySQL.
HADES is a set of R packages that execute against data in a database server. HADES supports traditional database systems (PostgreSQL, Microsoft SQL Server, and Oracle), parallel data warehouses (e.g. Amazon RedShift), as well as 'Big Data' platforms (e.g. Google BigQuery). HADES does *not* support MySQL. The full list of supported database platforms can be found [here](supportedPlatforms.html).

## License

19 changes: 18 additions & 1 deletion Rmd/installingHades.Rmd
@@ -35,4 +35,21 @@ options(install.packages.compile.from.source = "never")
library(remotes)
update_packages()
```
```

# HADES-wide releases

At the end of quarters 1 and 3 of each year, a HADES-wide release is created. This is a snapshot of all HADES packages and their dependencies at a single point in time. Additional checks are executed to ensure all packages and their dependencies work together. As such, these HADES-wide releases form a stable foundation for studies that may not require the absolute cutting edge in HADES functionality.

These releases are currently captured as [renv](https://rstudio.github.io/renv/articles/renv.html) lock files. The following releases are available:

- 2023Q1: [renv lock file](https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q1/renv.lock)
- 2023Q3: [renv lock file](https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q3/renv.lock)

To build the R library corresponding to the latest release in your current RStudio project, you can use:

```r
download.file("https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q3/renv.lock", "renv.lock")
install.packages("renv")
renv::restore()
```
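
After `renv::restore()` completes, you can verify that the installed packages match the lock file. A minimal check, assuming renv is active in the current RStudio project:

```r
# Report whether the project library is consistent with renv.lock
renv::status()
```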
8 changes: 4 additions & 4 deletions docs/index.html
@@ -483,10 +483,10 @@ <h1>Learn How to Use HADES</h1>
<h2>Technology</h2>
<p>HADES is a set of R packages that execute against data in a database
server. HADES supports traditional database systems (PostgreSQL,
Microsoft SQL Server, and Oracle), parallel data warehouses (Microsoft
APS, IBM Netezza, and Amazon RedShift), as well as ‘Big Data’ platforms
(Hadoop through Apache Impala, Apache Spark, and Google BigQuery). HADES
does <em>not</em> support MySQL.</p>
Microsoft SQL Server, and Oracle), parallel data warehouses (e.g. Amazon
RedShift), as well as ‘Big Data’ platforms (e.g. Google BigQuery). HADES
does <em>not</em> support MySQL. The full list of supported database
platforms can be found <a href="supportedPlatforms.html">here</a>.</p>
</div>
<div id="license" class="section level2">
<h2>License</h2>
25 changes: 25 additions & 0 deletions docs/installingHades.html
@@ -482,6 +482,31 @@ <h1>Updating HADES</h1>
library(remotes)
update_packages()</code></pre>
</div>
<div id="hades-wide-releases" class="section level1">
<h1>HADES-wide releases</h1>
<p>At the end of quarters 1 and 3 of each year, a HADES-wide release is
created. This is a snapshot of all HADES packages and their dependencies
at a single point in time. Additional checks are executed to ensure all
packages and their dependencies work together. As such, these HADES-wide
releases form a stable foundation for studies that may not require the
absolute cutting edge in HADES functionality.</p>
<p>These releases are currently captured as <a
href="https://rstudio.github.io/renv/articles/renv.html">renv</a> lock
files. The following releases are available:</p>
<ul>
<li>2023Q1: <a
href="https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q1/renv.lock">renv
lock file</a></li>
<li>2023Q3: <a
href="https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q3/renv.lock">renv
lock file</a></li>
</ul>
<p>To build the R library corresponding to the latest release in your
current RStudio project, you can use:</p>
<pre class="r"><code>download.file(&quot;https://raw.githubusercontent.com/OHDSI/Hades/main/hadesWideReleases/2023Q3/renv.lock&quot;, &quot;renv.lock&quot;)
install.packages(&quot;renv&quot;)
renv::restore()</code></pre>
</div>



106 changes: 106 additions & 0 deletions extras/Releasing/TestCohortDiagnostics.R
@@ -0,0 +1,106 @@
# Unit tests on CohortDiagnostics against the testing servers take too long, so this script
# runs them against a local Postgres server instead.
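# Assumes the following environment variables are set: LOCAL_POSTGRES_USER,
# LOCAL_POSTGRES_PASSWORD, LOCAL_POSTGRES_SERVER, LOCAL_POSTGRES_CDM_SCHEMA, and
# LOCAL_POSTGRES_OHDSI_SCHEMA.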
library(dplyr)
library(Capr)

connectionDetails <- DatabaseConnector::createConnectionDetails(
dbms = "postgresql",
user = Sys.getenv("LOCAL_POSTGRES_USER"),
password = Sys.getenv("LOCAL_POSTGRES_PASSWORD"),
server = Sys.getenv("LOCAL_POSTGRES_SERVER")
)
cdmDatabaseSchema <- Sys.getenv("LOCAL_POSTGRES_CDM_SCHEMA")
cohortDatabaseSchema <- Sys.getenv("LOCAL_POSTGRES_OHDSI_SCHEMA")
cohortTable <- "cd_test"
folder <- "e:/temp/cdOutput"

# Create cohorts using Capr ----------------------------------------------------
osteoArthritisOfKneeConceptId <- 4079750
celecoxibConceptId <- 1118084
diclofenacConceptId <- 1124300
osteoArthritisOfKnee <- cs(
descendants(osteoArthritisOfKneeConceptId),
name = "Osteoarthritis of knee"
)
attrition <- attrition(
"prior osteoarthritis of knee" = withAll(
atLeast(1, conditionOccurrence(osteoArthritisOfKnee), duringInterval(eventStarts(-Inf, 0)))
)
)
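# Note: this attrition rule is defined but not applied; the 'attrition' arguments in the
# cohort() definitions below are currently commented out.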
celecoxib <- cs(
descendants(celecoxibConceptId),
name = "Celecoxib"
)
diclofenac <- cs(
descendants(diclofenacConceptId),
name = "Diclofenac"
)
celecoxibCohort <- cohort(
entry = entry(
drugExposure(celecoxib, firstOccurrence()),
observationWindow = continuousObservation(priorDays = 365)
),
# attrition = attrition,
exit = exit(endStrategy = drugExit(celecoxib,
persistenceWindow = 30,
surveillanceWindow = 0))
)
diclofenacCohort <- cohort(
entry = entry(
drugExposure(diclofenac, firstOccurrence()),
observationWindow = continuousObservation(priorDays = 365)
),
# attrition = attrition,
exit = exit(endStrategy = drugExit(diclofenac,
persistenceWindow = 30,
surveillanceWindow = 0))
)
cohortDefinitionSet <- Capr::makeCohortSet(celecoxibCohort, diclofenacCohort)

# Generate cohorts -------------------------------------------------------------
cohortTableNames <- CohortGenerator::getCohortTableNames(cohortTable = cohortTable)
CohortGenerator::createCohortTables(
connectionDetails = connectionDetails,
cohortTableNames = cohortTableNames,
cohortDatabaseSchema = cohortDatabaseSchema,
incremental = FALSE
)
CohortGenerator::generateCohortSet(
connectionDetails = connectionDetails,
cdmDatabaseSchema = cdmDatabaseSchema,
cohortDatabaseSchema = cohortDatabaseSchema,
cohortTableNames = cohortTableNames,
cohortDefinitionSet = cohortDefinitionSet,
incremental = FALSE
)
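# Inspect the record and subject counts of the generated cohorts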
CohortGenerator::getCohortCounts(connectionDetails = connectionDetails,
cohortDatabaseSchema = cohortDatabaseSchema,
cohortTable = cohortTable)

# Run CohortDiagnostics --------------------------------------------------------
dir.create(folder)

cohortDefinitionSet$cohortId <- as.double(cohortDefinitionSet$cohortId)
CohortDiagnostics::executeDiagnostics(
cohortDefinitionSet = cohortDefinitionSet,
connectionDetails = connectionDetails,
cdmDatabaseSchema = cdmDatabaseSchema,
cohortDatabaseSchema = cohortDatabaseSchema,
cohortTable = cohortTable,
cohortIds = cohortDefinitionSet$cohortId,
exportFolder = file.path(folder, "export"),
databaseId = "Synpuf",
runInclusionStatistics = TRUE,
runBreakdownIndexEvents = TRUE,
runTemporalCohortCharacterization = TRUE,
runIncidenceRate = TRUE,
runIncludedSourceConcepts = TRUE,
runOrphanConcepts = TRUE,
runTimeSeries = TRUE,
runCohortRelationship = TRUE,
minCellCount = 5
)

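# Confirm that the diagnostics results archive was written to the export folder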
file.exists(file.path(folder, "export", "Results_Synpuf.zip"))