Created by brew bump
Created with brew bump-formula-pr
This adds the UUID type. Ignore the serialization portion for now; I'll figure out a different serialization scheme later that should work for all types (including user-defined types). While this works, we need to forward the type information through to the connector so that it can use the correct OID. For now though, this should be enough to unblock.
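For context, here is a minimal sketch of what the new type enables from a client's point of view. It assumes a doltgres server is reachable over the Postgres wire protocol; the driver choice, connection settings, table, and UUID literal are all illustrative, and the serialization caveats above may still apply.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // any Postgres wire-protocol driver should work
)

func main() {
	// Connection settings are illustrative; point them at your doltgres server.
	db, err := sql.Open("postgres",
		"host=127.0.0.1 port=5432 user=postgres dbname=postgres sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Declare a UUID column and insert a literal value.
	if _, err := db.Exec(`CREATE TABLE sessions (id UUID, note TEXT)`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(
		`INSERT INTO sessions VALUES ('c8ec35ba-2d15-4560-a9ea-1b7a0e4b0b21', 'first')`,
	); err != nil {
		log.Fatal(err)
	}
}
```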
Sometimes, command docs would generate duplicate tests. This was due to the RNG selecting the same number multiple times. This fixes that issue, along with all of the affected tests. There are still duplicate tests, but those are due to the synopses defining the duplicates rather than the generation, which I think is fine.
This PR adds syntax parsing support for (a usage sketch follows after the related PRs below):
CREATE INDEX
CREATE TABLE AS
It also splits the postgres/parser/sem/tree/create.go file into create_*.go files for each specific statement.

This makes use of an addition to Vitess and GMS that allows name resolution to occur for injected expressions. The new ARRAY[...] smoke test demonstrates this.
Related PRs:
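As referenced above, here is a minimal sketch of the new statements being sent from a Go client. It assumes an already-open *sql.DB connected to a doltgres server (as in the earlier UUID sketch); the table and column names are illustrative, and since this PR is about syntax parsing, end-to-end execution may depend on other work.

```go
package example

import "database/sql"

// exerciseCreateSyntax runs the statements whose parsing this PR adds.
// db is assumed to be an open connection to a doltgres server; the table
// and column names are illustrative.
func exerciseCreateSyntax(db *sql.DB) error {
	stmts := []string{
		// CREATE TABLE AS: materialize a query result as a new table.
		`CREATE TABLE recent_orders AS SELECT * FROM orders WHERE id > 100`,
		// CREATE INDEX on the newly created table.
		`CREATE INDEX recent_orders_id_idx ON recent_orders (id)`,
	}
	for _, stmt := range stmts {
		if _, err := db.Exec(stmt); err != nil {
			return err
		}
	}
	return nil
}
```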
This adds rough support for both BOOLEAN and BOOLEAN[] types. This is primarily to show a working implementation of the new ExtendedType interface that was added to GMS to support these new types, demonstrating that we can now add standard PostgreSQL types, as well as array types (along with accompanying functionality, such as ARRAY[] support).
Related PRs:
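A small round-trip sketch for the new types follows. It again assumes an open *sql.DB to a doltgres server; the table name is illustrative, and scanning the array back assumes doltgres serializes BOOLEAN[] in the same wire format Postgres uses.

```go
package example

import (
	"database/sql"

	"github.com/lib/pq"
)

// booleanArrayRoundTrip exercises the new BOOLEAN and BOOLEAN[] types along
// with the ARRAY[...] constructor. db is assumed to be an open connection to
// a doltgres server, and the table name is illustrative.
func booleanArrayRoundTrip(db *sql.DB) ([]bool, error) {
	if _, err := db.Exec(`CREATE TABLE flags (active BOOLEAN, history BOOLEAN[])`); err != nil {
		return nil, err
	}
	if _, err := db.Exec(`INSERT INTO flags VALUES (true, ARRAY[true, false, true])`); err != nil {
		return nil, err
	}

	// pq.Array decodes the Postgres array format into a Go slice, assuming
	// doltgres serializes BOOLEAN[] the same way Postgres does.
	var history []bool
	err := db.QueryRow(`SELECT history FROM flags`).Scan(pq.Array(&history))
	return history, err
}
```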
Added parsing support for the following statements:
CREATE EXTENSION
CREATE FUNCTION
COMMENT ON EXTENSION
DROP EXTENSION
DROP FUNCTION
This PR adds missing postgres syntax for:
CREATE SCHEMA
CREATE DATABASE
DROP DATABASE
The server no longer always forcefully closes a connection when catching a panic. Panics are still very common because of unsupported features, as well as unintended panics due to bugs.
This change also refactors the connection handling logic to extract a new ConnectionHandler type.
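A simplified sketch of that behavior follows; the type, field, and method below are illustrative stand-ins, not the actual doltgres ConnectionHandler. The idea is that a panic during message handling is recovered and surfaced as an error instead of tearing down the connection.

```go
package example

import (
	"fmt"
	"log"
)

// connectionHandler is an illustrative stand-in for the extracted
// ConnectionHandler type; it is not the real doltgres implementation.
type connectionHandler struct {
	// waitForSync tracks whether we should discard messages until the client
	// sends a Sync message after an error (see the note below).
	waitForSync bool
}

// handleMessage recovers from panics so that a single bad statement no longer
// forces the connection to close.
func (h *connectionHandler) handleMessage(handle func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			// Report the panic to the client as an error and keep the
			// connection alive; panics are still common for unsupported
			// features as well as plain bugs.
			log.Printf("recovered from panic: %v", r)
			err = fmt.Errorf("panic while handling message: %v", r)
		}
	}()
	return handle()
}
```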
Also fixes a problem where errors during Query execution would hang a connection while it waited for a Sync message that never came. Added a waitForSync flag in ConnectionHandler to manage this bit of state.

Now handles type casts for placeholders, as well as matching a larger variety of expressions.
Companion PR:
[no-release-notes] BinaryExpression refactor dolthub/go-mysql-server#2272
This includes quite a few functions, but doesn't include their tests. I was planning on including those too, but it's turning into a "more work than expected" type of situation, and if I'm going to solve those problems, I'd rather do it with all of the rest of the functions too, so that I can get all of the tests done for all of the functions at once. This implements every function that I think we can implement for now based on our currently-supported types (without ripping out the implementation as soon as we get proper type support).
In the meantime, we can get these functions in, and get the tests in later.
This PR implements postgres prepared statements with support for $1-style parameters in query strings (a client-side sketch follows the companion PRs below). If this approach is reasonable, the next step would be to add support for additional types and lots more tests.

Also missing is support for the client manually specifying the types of parameters in a Parse message, which is required for certain statements where the type information cannot be determined based on referenced columns.
Companion PRs:
Support for doltgres prepared statements dolthub/go-mysql-server#2239
Define an ExtendedHandler interface to handle postgres specific interactions dolthub/vitess#299
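As mentioned above, here is a client-side sketch of the new path. It assumes an open *sql.DB to a doltgres server, and the table and column names are illustrative; Postgres drivers for database/sql typically send parameterized queries through the extended query protocol, so this exercises Parse/Bind/Execute with a $1 placeholder.

```go
package example

import "database/sql"

// namesOlderThan demonstrates a $1-style parameter reaching the server through
// a prepared statement. db is assumed to be an open connection to a doltgres
// server; the table and column names are illustrative.
func namesOlderThan(db *sql.DB, minAge int64) ([]string, error) {
	rows, err := db.Query(`SELECT name FROM people WHERE age >= $1`, minAge)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var names []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		names = append(names, name)
	}
	return names, rows.Err()
}
```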
This PR includes some postgres-specific syntax support.
ALTER DATABASE
GRANT
REVOKE
ALTER DEFAULT PRIVILEGES
This PR includes some postgres-specific syntax support.
ABORT
ALTER AGGREGATE
ALTER COLLATION
ALTER CONVERSION
Created by the Release workflow to update DoltgreSQL's version
Currently, doltgres uses dolt init when starting the doltgres server with the data dir set to an empty directory. When there is no dolt or doltgres config for user.name and user.email, it fails to create a new database with an error message from dolt init, which suggests setting the dolt user info. The current fix does not return an error; instead, it uses the user name postgres and the email [email protected] if they are not set.
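A rough sketch of that fallback logic, using hypothetical helper names rather than the actual dolt/doltgres config API: when neither user.name nor user.email is configured, default identity values are used instead of returning an error.

```go
package example

// resolveInitIdentity is an illustrative sketch of the fallback described
// above; the lookup function stands in for however the dolt/doltgres config
// is actually read and is not the real API.
func resolveInitIdentity(lookup func(key string) (string, bool)) (name, email string) {
	if n, ok := lookup("user.name"); ok && n != "" {
		name = n
	} else {
		name = "postgres" // default user name when none is configured
	}
	if e, ok := lookup("user.email"); ok && e != "" {
		email = e
	} else {
		email = "postgres@localhost" // placeholder only; the real default is elided in these notes
	}
	return name, email
}
```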
This is the 2nd version of my attempt at implementing functions.

The first iteration hijacked *tree.FuncExpr and special-cased names of functions that matched the built-in ones for Postgres. Then it returned a new node that approximated the functionality. In some cases this was as simple as returning a different *vitess.FuncExpr node that pointed to a different function; however, some cases did not have direct analogues in MySQL. Originally I was going to ignore those, but since I was working on functions anyway, I decided to tackle them to get it over with, similar to my approach with the entire AST conversion. Well, that's when I started running into two key issues.

The second was born from the first, as I needed to extend my tests to make sure my massive nodes actually worked for most cases. However, extending the tests showed some issues that are somewhat fundamentally different from how MySQL approaches functions, with a big one being overloading.

The first was a major issue, though. I'll post an example in another comment, but some of the nodes became almost unreadable, and they also took forever to create (multiple hours per function, hence the continual delays). The only real maintainable alternative would be to skip the AST conversion pipeline and jump straight to the GMS expressions, but that posed its own issues. Not only is there a lot of boilerplate for expressions, but some of the more sophisticated aspects (such as the origin of values, which I'm calling their Source) are specific to PostgreSQL. So I instead decided to modify the entire set of functions by replacing all of the built-ins (Postgres doesn't want MySQL's built-ins anyway) with a custom "compiled" function structure.

So now, this mimics a bit how overloaded stored procedures work. We define a set of functions under a single name, and those functions become overloads for that name. Whenever a function is called, we evaluate all of the parameters and use their types to look for an exact match. If one is not found, then we allow some types to cast to others, and check while rotating type casts. If we still don't find anything, then we assume that the function does not exist. In addition, some functions allow certain casts and others don't, so we also store the original type before the cast. This way, a function only needs to worry about its own inputs, and the rest is handled automatically.
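To make the resolution flow concrete, here is a much-simplified Go sketch of the idea; every name and structure below is illustrative rather than the actual doltgres code, and the real implementation also tracks which casts each function permits while rotating through cast combinations. Overloads are registered under a single name, an exact match on the evaluated parameter types wins, and otherwise implicit casts are tried before concluding the function does not exist; the original pre-cast types are passed through so each overload only sees its own inputs.

```go
package example

// typeID identifies a parameter type in this sketch (Integer, Float, ...).
type typeID int

const (
	typeInteger typeID = iota
	typeFloat
	typeText
)

// implicitCasts lists, for each type, the types it may be implicitly cast to
// when no exact overload matches.
var implicitCasts = map[typeID][]typeID{
	typeInteger: {typeFloat},
}

// overload is one concrete function under a shared name.
type overload struct {
	params []typeID
	// call receives the evaluated arguments plus their original (pre-cast)
	// types, so each overload only has to worry about its own inputs.
	call func(args []any, originalTypes []typeID) (any, error)
}

// resolve picks an overload for the given argument types: exact match first,
// then a retry that allows the listed implicit casts.
func resolve(overloads []overload, argTypes []typeID) *overload {
	matches := func(params, args []typeID, allowCasts bool) bool {
		if len(params) != len(args) {
			return false
		}
		for i := range params {
			if params[i] == args[i] {
				continue
			}
			if !allowCasts {
				return false
			}
			castOK := false
			for _, t := range implicitCasts[args[i]] {
				if t == params[i] {
					castOK = true
					break
				}
			}
			if !castOK {
				return false
			}
		}
		return true
	}
	// Pass 1: exact match on the evaluated parameter types.
	for i := range overloads {
		if matches(overloads[i].params, argTypes, false) {
			return &overloads[i]
		}
	}
	// Pass 2: allow implicit casts; if nothing matches, the function
	// "does not exist" for these argument types.
	for i := range overloads {
		if matches(overloads[i].params, argTypes, true) {
			return &overloads[i]
		}
	}
	return nil
}
```

In this sketch, calling resolve with []typeID{typeInteger} picks an Integer overload directly if one exists, and otherwise falls back to a Float overload via the implicit cast table.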
This does create the consequence of needing to use specific types (Integer, Float, etc.) for parameters and the return value. I considered allowing native Go types and using a context to host the additional information, but that made it a bit more difficult to handle the overloading and automatic casting, though I could be convinced that it's a better alternative (and probably easier to upgrade too as we add more functions).

On the note of functions, this only adds a few as a proof-of-concept. It's astronomically quicker to write these compared to the AST approach, and I'm going to use that test generation framework to get better testing coverage for functions, since the testing scope is orders of magnitude smaller compared to permutations of every statement, and the think-write-eval loop of coming up with tests manually takes quite a bit of time.
Last thing: all of the files that are related to the function compilation are prefixed with 0_. I wanted to prefix them with a single underscore instead, but apparently Go ignores files that start with an underscore. I didn't want to just dump them together with the functions without any special indication, because they'd end up lost in the sea of files. For example, which files in server/ast aren't nodes? The only way to know is to scroll through all 141 files, so I wanted to avoid that. I also tried separating packages, but that required either a lone file that would get lost, or an init() in every file that adds functions to the catalog, and both of those seemed worse.

Closed Issues
doltgres with data dir set to empty directory