From 2c723fa17c5346438e9eff16c5bc51a78efdd915 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Thu, 5 Dec 2024 10:23:33 +0000
Subject: [PATCH] Update
 advocacy_docs/edb-postgres-ai/analytics/reference/loadingdata.mdx

Co-authored-by: Artjoms Iskovs
---
 .../edb-postgres-ai/analytics/reference/loadingdata.mdx | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/advocacy_docs/edb-postgres-ai/analytics/reference/loadingdata.mdx b/advocacy_docs/edb-postgres-ai/analytics/reference/loadingdata.mdx
index 035653f5a00..06c67a43c64 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/reference/loadingdata.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/reference/loadingdata.mdx
@@ -17,13 +17,6 @@ However, this comes with some major caveats (which will eventually be resolved):
 
 * The tables must be stored as [Delta Lake Tables](http://github.com/delta-io/delta/blob/master/PROTOCOL.md) within the location.
     * A "Delta Lake Table" (or "Delta Table") is a folder of Parquet files along with some JSON metadata.
-* Each table must be prefixed with a `$schema/$table/` where `$schema` and `$table` are valid Postgres identifiers (i.e. < 64 characters)
-    * For example, this is a valid Delta Table that will be recognized by Beacon Analytics:
-        * `my_schema/my_table/{part1.parquet, part2.parquet, _delta_log}`
-    * These `$schema` and `$table` identifiers will be queryable in the Postgres Lakehouse node, e.g.:
-        * `SELECT count(*) FROM my_schema.my_table;`
-    * This Delta Table will NOT be recognized by Postgres Lakehouse node (missing a schema):
-        * `my_table/{part1.parquet, part2.parquet, _delta_log}`
 
 ### Loading data into your bucket