Merge pull request #1807 from EnterpriseDB/develop
josh-heyer authored Aug 30, 2021
2 parents dbc034f + 201bf30 commit 5c956da
Showing 17 changed files with 362 additions and 171 deletions.
67 changes: 67 additions & 0 deletions .github/workflows/sync-and-process-files.yml
@@ -0,0 +1,67 @@
name: sync and process files from another repo
on:
  repository_dispatch:
    types: [sync_files]
jobs:
  sync-and-process-files:
    env:
      # The body text of the PR that will be created
      BODY: "Automated changes to pull in and process updates from repo: ${{ github.event.client_payload.repo }} ref: ${{ github.event.client_payload.ref }}"

      # The name of the branch that will be created
      BRANCH_NAME: automatic_docs_update/repo_${{ github.event.client_payload.repo }}/ref_${{ github.event.client_payload.ref }}

      # The users that should be assigned to the PR, as a comma-separated list of GitHub usernames.
      REVIEWERS:

      # The title of the PR that will be created
      TITLE: "Process changes to docs from: repo: ${{ github.event.client_payload.repo }} ref: ${{ github.event.client_payload.ref }}"

    runs-on: ubuntu-latest
    steps:
      - name: Checkout destination
        uses: actions/checkout@v2
        with:
          path: destination

      - name: Checkout source repo
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.client_payload.sha }}
          repository: ${{ github.event.client_payload.repo }}
          token: ${{ secrets.SYNC_FILES_TOKEN }}
          path: source

      - name: Setup node
        uses: actions/setup-node@v2
        with:
          node-version: '14'

      - name: Process changes
        run: |
          case ${{ github.event.client_payload.repo }} in
            EnterpriseDB/cloud-native-postgresql)
              ${{ github.workspace }}/destination/scripts/source/process-cnp-docs.sh ${{ github.workspace }}/source ${{ github.workspace }}/destination
              ;;
            EnterpriseDB/fe)
              mkdir -p ${{ github.workspace }}/destination/icons-pkg && \
                cp -fr utils/icons-placeholder/output/* ${{ github.workspace }}/destination/icons-pkg/
              ;;
            *)
              echo "The workflow has not been configured for the ${{ github.event.client_payload.repo }} repo"
              exit 1
              ;;
          esac
        working-directory: source

      - name: Create pull request
        uses: peter-evans/[email protected]
        with:
          body: ${{ env.BODY }}
          branch: ${{ env.BRANCH_NAME }}
          path: destination/
          reviewers: ${{ env.REVIEWERS }}
          title: ${{ env.TITLE }}
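
For reference, the `repository_dispatch` event this workflow consumes can be fired from a source repo via the GitHub REST API. A minimal sketch follows; the owner/repo path, token, and payload values are placeholders, and Node 18+ is assumed for the built-in `fetch`:

```js
// Sketch: trigger the "sync_files" repository_dispatch event handled above.
// GITHUB_TOKEN, the target repo, and the client_payload values are placeholders.
const res = await fetch("https://api.github.com/repos/OWNER/DOCS-REPO/dispatches", {
  method: "POST",
  headers: {
    Accept: "application/vnd.github.v3+json",
    Authorization: `token ${process.env.GITHUB_TOKEN}`,
  },
  body: JSON.stringify({
    event_type: "sync_files", // matches the workflow's types: [sync_files] trigger
    client_payload: {
      repo: "EnterpriseDB/cloud-native-postgresql", // checked out as "source"
      ref: "refs/tags/v1.8.0", // used in the branch name and PR title
      sha: "0123456789abcdef0123456789abcdef01234567", // commit to check out
    },
  }),
});
if (!res.ok) throw new Error(`dispatch failed: ${res.status} ${res.statusText}`);
```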
52 changes: 0 additions & 52 deletions .github/workflows/sync-files.yml

This file was deleted.

18 changes: 5 additions & 13 deletions docs/how-tos/sync-cnp-docs.md
@@ -1,23 +1,15 @@
 # Sync Cloud-Native-PostgreSQL Docs
 
-Currently we need to manually sync over [cloud-native-postgresql][cnp] ("CNP")
-docs whenever there's a new release. The long term goal is to automate this via
-GitHub action dispatch and automated event handling.
+Documentation from [cloud-native-postgresql][cnp] ("CNP") should be synced over automatically when there is a new release; however, in the event that this needs to be done manually, the following process can be used:
 
-1. The CNP team informs us that there's a new version.
 1. Check out the appropriate version from the [CNP][] repo.
-1. Replace `docs:temp_kubernetes/docs/` with `cloud-native-postgresql:docs/`.
-
-   `temp_kubernetes/docs/` is not tracked via Git, so if it's not present
-   already, you'll need to create the directory yourself.
-
-1. Transpile original source documentation into MDX format:
+1. Run the processor script
 
+   **note:** replace `path/to/cnp/checkout` below with the actual path of your CNP checkout. If you are not running the script from this project's root, you will also need to replace `.` below with the path to this project's checkout.
    ```sh
-   python scripts/source/source_cloud_native_operator.py
+   scripts/source/process-cnp-docs.sh path/to/cnp/checkout .
    ```
+1. The script will handle updating and moving the files from the [CNP][] repo into place.
 
-1. Replace `advocacy_docs/kubernetes/cloud-native-postgresql/` with
-   `temp_kubernetes/build/`.
-
 [cnp]: https://github.com/EnterpriseDB/cloud-native-postgresql
@@ -106,6 +106,21 @@ The `REPLICA IDENTITY FULL` setting on a source table ensures that certain types
Table filters are not supported on binary data type columns. A binary data type is the Postgres data type `BYTEA`. In addition, table filters are not supported on Advanced Server columns with the data types `BINARY`, `VARBINARY`, `BLOB`, `LONG RAW`, and `RAW`, as these are alias names for the `BYTEA` data type.

**Filtering Restrictions on Operators**

XDB supports the modulus operator (denoted by `%`) to define a filter clause. However, the following restrictions apply:

- You can have only a single filter condition using the modulus operator
- You cannot combine it with any other conditions using the `AND` or `OR` operators

XDB supports the modulus filter in the following formats:

`deptno%3=0`

`@deptno%3=0`

For example, the filter `deptno%3=0` replicates only the rows in which the value of the `deptno` column is divisible by 3.
## Roadmap for Further Instructions
The specific details on implementing table filtering depend upon whether you are using a single-master replication system or a multi-master replication system. The following is a roadmap to the relevant sections for each type of replication system.
@@ -78,6 +78,12 @@ For replicating Postgres partitioned tables see [Replicating Postgres Partitione
- `BYTEA`
- `RAW`

PostgreSQL or EDB Postgres Advanced Server database tables that include the following data types cannot be replicated to the Oracle database:

- `JSON`
- `JSONB`


Postgres tables that include `OID` based large objects cannot be replicated. For information on `OID` based large objects see `pg_largeobject` in the PostgreSQL Core Documentation located at:

> <https://www.postgresql.org/docs/current/static/catalog-pg-largeobject.html>
1 change: 1 addition & 0 deletions scripts/fileProcessor/.prettierignore
@@ -0,0 +1 @@
*.md
3 changes: 3 additions & 0 deletions scripts/fileProcessor/.prettierrc.json
@@ -0,0 +1,3 @@
{
  "trailingComma": "all"
}
22 changes: 22 additions & 0 deletions scripts/fileProcessor/README.md
@@ -0,0 +1,22 @@
# File Processor

This tool automatically modifies files by opening each specified file and applying one or more processor scripts to its name and content. It is intended for use by workflows that pull content from other repositories into this one.

## Usage

In the directory whose files you'd like to modify, run something like the following:

```sh
node fileProcessor/main.mjs -f **/*.md -p dummy
```

### Options

| flag | alias | description |
|---------------|-------|-------------|
| `--files` | `-f` | The glob the script uses to look for files. More than one `--files` flag can be passed in, but the processor will only run on files which match all of the globs passed in. |
| `--processor` | `-p` | The processor to apply to files. The script will look for these in the `processors` directory. More than one processor can be added, and they will be run in the order they are passed in. |

## Adding new processors

The main script will attempt to import processors passed in with `--processor` flags by looking for a file with a matching name in the `processors` directory.

A processor should be saved with the `.mjs` extension and export a function named `process` which accepts two arguments: the file name is passed as the first argument, and the file contents as the second. This function should return an object with the keys `newFilename` and `newContent`.
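
For illustration, a minimal processor satisfying this contract might look like the following sketch (the `dummy` name matches the usage example above; its exact behavior here is an assumption):

```js
// Hypothetical processors/dummy.mjs — a minimal sketch of the
// process(filename, content) contract described above.
export const process = async (filename, content) => ({
  // keep the original name; a real processor could rename the file here
  newFilename: filename,
  // append a marker line so the effect of a run is visible
  newContent: content + "\n<!-- processed by dummy -->\n",
});
```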
76 changes: 76 additions & 0 deletions scripts/fileProcessor/main.mjs
@@ -0,0 +1,76 @@
import arg from "arg";
import fs from "fs/promises";
import { globby } from "globby";
import { dirname } from "path";
import { fileURLToPath } from "url";

const args = arg({
  "--files": [String],
  "--processor": [String],

  "-f": "--files",
  "-p": "--processor",
});

const __dirname = dirname(fileURLToPath(import.meta.url));

const processFiles = async () => {
  const paths = await globby(args["--files"]);

  console.log(`Processing ${paths.length} files`);

  paths.forEach(processSingleFile);
};

const processSingleFile = async (filename) => {
  console.log(`Processing ${filename}`);

  // run the processor scripts
  const { newFilename, newContent } = await runProcessorsForFile(
    filename,
    await fs.readFile(filename, "utf8"),
  );

  if (newFilename !== filename) {
    console.log(`Writing ${newFilename} (previously ${filename})`);
  } else {
    console.log(`Writing ${newFilename}`);
  }

  fs.writeFile(newFilename, newContent)
    .catch((err) => {
      console.error(err);
      process.exit(1);
    })
    .then(() => {
      // if the filename has changed, then remove the old one
      if (newFilename !== filename) {
        console.log(`Removing ${filename}`);

        fs.rm(filename).catch((err) => {
          console.error(err);
          process.exit(1);
        });
      }
    });
};

const runProcessorsForFile = async (filename, content) => {
  let newFilename = filename;
  let newContent = content;

  // apply each processor in the order it was passed on the command line;
  // processors are looked up by name in the processors directory
  for (const processorName of args["--processor"] ?? []) {
    const module = await import(`${__dirname}/processors/${processorName}.mjs`);
    const output = await module.process(newFilename, newContent);

    newFilename = output.newFilename;
    newContent = output.newContent;
  }

  return { newFilename, newContent };
};

processFiles();
20 changes: 20 additions & 0 deletions scripts/fileProcessor/package.json
@@ -0,0 +1,20 @@
{
  "name": "fileprocessor",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "arg": "^5.0.1",
    "globby": "^12.0.1",
    "js-yaml": "^4.1.0"
  },
  "devDependencies": {
    "prettier": "^2.3.2"
  }
}
69 changes: 69 additions & 0 deletions scripts/fileProcessor/processors/cnp/add-frontmatters.mjs
@@ -0,0 +1,69 @@
import fs from "fs/promises";
import yaml from "js-yaml";

export const process = async (filename, content) => {
  const trimmedContent = content.trim();
  if (trimmedContent.charAt(0) !== "#") {
    console.warn(
      "File does not begin with title - frontmatter will not be valid: " +
        filename,
    );
  }

  const endOfFirstLine = trimmedContent.indexOf("\n");

  // Get the first line of content, which should be the header.
  // This will exclude the very first character, which should be '#'
  const header = trimmedContent.slice(1, endOfFirstLine).trim();

  // add the frontmatter to the file. This will replace the first line of the file.
  let newContent = await getFrontmatter(header, filename);
  newContent = newContent + trimmedContent.slice(endOfFirstLine);

  return {
    newFilename: filename,
    newContent,
  };
};

const getFrontmatter = async (header, filename) => {
  let frontmatter = `---
title: '${header}'
originalFilePath: '${filename}'
product: 'Cloud Native Operator'
`;

  if (filename.slice(-8) === "index.md") {
    frontmatter = await addIndexFrontmatterSection(frontmatter);
  }

  return frontmatter + "---";
};

const addIndexFrontmatterSection = async (frontmatter) => {
  let modifiedFrontmatter =
    frontmatter +
    `indexCards: none
directoryDefaults:
  prevNext: true
  iconName: kubernetes
navigation:
`;

  // read the mkdocs.yml file to figure out the nav entries for the frontmatter
  const mkdocsYaml = yaml.load(
    await fs.readFile("mkdocs.yml", { encoding: "utf8" }),
  );
  mkdocsYaml.nav.forEach((entry) => {
    // add each nav entry with its file extension stripped off.
    modifiedFrontmatter = `${modifiedFrontmatter} - ${entry.slice(0, -3)}\n`;

    // Make sure the interactive demo page is included in the right spot.
    if (entry === "quickstart.md") {
      modifiedFrontmatter = modifiedFrontmatter + " - interactive_demo\n";
    }
  });

  return modifiedFrontmatter;
};
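
To make the result concrete, here is a sketch of calling this processor directly on a hypothetical non-index file (so the `mkdocs.yml` lookup is skipped); the file name and contents are invented for illustration:

```js
// A sketch of calling the processor directly; file name and content are hypothetical.
import { process } from "./processors/cnp/add-frontmatters.mjs";

const { newFilename, newContent } = await process(
  "quickstart.md",
  "# Quickstart\n\nSome body text.\n",
);

// newFilename is unchanged ("quickstart.md"), and newContent begins:
//   ---
//   title: 'Quickstart'
//   originalFilePath: 'quickstart.md'
//   product: 'Cloud Native Operator'
//   ---
console.log(newContent);
```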