
New schema generation pipeline #183

Open
MichalLytek opened this issue Oct 24, 2018 · 30 comments

@MichalLytek
Owner

MichalLytek commented Oct 24, 2018

Right now, the schema generation pipeline works this way (see the sketch below):

  • collect metadata from decorators (options, reflected type, placement (method/field, name, class))
  • put it in raw shape in MetadataStorage
  • "build" the metadata (by modifying MetadataStorage)
    • attach fields to the type definitions
    • attach args to the resolver definitions
    • attach other metadata (middlewares, roles, etc.)
    • doing it sequentially, metadata type by type (fields, then handlers, etc.)
  • generate the schema with graphql-js using the "built" metadata from MetadataStorage
    • check for inheritance and interface implementation, then copy its definitions
    • handle all overloading and overwriting cases
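
To make that concrete, here is a minimal sketch of the mutate-in-place build step described above; all names and shapes are simplified stand-ins, not the actual type-graphql internals:

// Simplified stand-ins, not the real type-graphql metadata shapes.
interface FieldMetadata {
  target: Function;
  name: string;
}
interface TypeMetadata {
  target: Function;
  name: string;
  fields?: FieldMetadata[];
}

class MetadataStorage {
  types: TypeMetadata[] = [];
  fields: FieldMetadata[] = [];

  // The "build" step mutates the storage in place, one metadata kind
  // at a time: loose field metadata is attached to its owning type.
  build() {
    for (const type of this.types) {
      type.fields = this.fields.filter(f => f.target === type.target);
    }
  }
}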

It wasn't designed for such complicated features as inheritance, so it has started to become a bottleneck. The schema generation pipeline therefore has to be rewritten into a new design (a rough sketch follows below):

  • MetadataStorage stores all the small pieces of metadata from decorators (like now)
  • MetadataBuilder takes resolvers and types and builds the extended, full metadata
    • not modifying MetadataStorage, working like a "pure" function
    • doing it on demand (not all types at once but lazily, step by step, recursively and with caching)
    • attaching all loose metadata to the base metadata definition
    • handling all extending, implementing and overwriting cases
  • SchemaGenerator accepts the built metadata object and generates the schema corresponding to it (without needing to check the metadata storage)

This would simplify the pipeline a lot, as well as make it possible to implement new features and fix #133 and #110.
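
Continuing the stand-in types from the sketch above, the proposed stages could look roughly like this; again a hypothetical sketch of the idea, not the planned implementation:

interface BuiltTypeMetadata {
  name: string;
  fields: FieldMetadata[];
}

class MetadataBuilder {
  private cache = new Map<Function, BuiltTypeMetadata>();

  constructor(private storage: MetadataStorage) {}

  // Pure with respect to the storage: reads it, never mutates it.
  // Runs lazily, on demand, and caches so each type is built only once.
  getTypeMetadata(target: Function): BuiltTypeMetadata {
    const cached = this.cache.get(target);
    if (cached) return cached;

    const built: BuiltTypeMetadata = {
      name: this.storage.types.find(t => t.target === target)!.name,
      // the real builder would also walk base classes and interfaces here,
      // handling the extending, implementing and overwriting cases
      fields: this.storage.fields.filter(f => f.target === target),
    };
    this.cache.set(target, built);
    return built;
  }
}

class SchemaGenerator {
  // Receives only built metadata; it never touches MetadataStorage.
  generate(types: BuiltTypeMetadata[]): void {
    // ...map each BuiltTypeMetadata to a graphql-js GraphQLObjectType
  }
}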

@MichalLytek MichalLytek added the Internal 🏠 All the things related to internal parts label Oct 24, 2018
@MichalLytek MichalLytek added this to the 1.0.0 release milestone Oct 24, 2018
@MichalLytek MichalLytek self-assigned this Oct 24, 2018
@jdolle

jdolle commented Nov 5, 2018

Something that may be worth considering during this refactor is multiple schemas in a single codebase. This is a requirement for my services because our application has a public GraphQL API, which clients (web, mobile) interact with, and an internal GraphQL API, which other microservices interact with. Placing these schemas in the same codebase allows easy code reuse.

If this is something you'd be open to, I'd be happy to work on it. I'm not familiar with this codebase, but there does not appear to be anything in the API that restricts it to only one schema.
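
For what it's worth, buildSchema already accepts an explicit resolvers array, so two schemas can be generated from different resolver subsets in one codebase (the shared MetadataStorage is what #110 tracks); PublicResolver and InternalResolver below are hypothetical:

import "reflect-metadata";
import { buildSchema } from "type-graphql";
// Hypothetical resolver classes, split by audience.
import { PublicResolver } from "./resolvers/public";
import { InternalResolver } from "./resolvers/internal";

async function createSchemas() {
  // Schema exposed to web/mobile clients.
  const publicSchema = await buildSchema({ resolvers: [PublicResolver] });
  // Schema exposed only to other microservices.
  const internalSchema = await buildSchema({
    resolvers: [PublicResolver, InternalResolver],
  });
  return { publicSchema, internalSchema };
}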

@MichalLytek
Owner Author

Of course. #110 is about separating schemas, with a temporary solution for the problem in your codebase.

@Evanion
Contributor

Evanion commented Nov 12, 2018

One of the things I reacted to was that you define a typedef as a class in the current implementation.
I know it's probably because interfaces get stripped by tsc at compile time ...
but perhaps it would be possible to have base interfaces like Type, Enum, Scalar, etc.
and then create a TS transformer to convert interfaces that extend these base types into SDL format.

That way, it should be possible to write your typedefs more naturally (consider this pseudocode):

import { ID } from 'type-graphql';

interface Avatar extends Type {
  <T>(size: T): T;
}

interface User extends Type {
  id: ID;
  username: string;
  quote?: string;
  avatar?: Avatar; 
}

That way, classes can be dedicated to the implementation rather than the specification of the model.
The TypeScript transformer can extract the interfaces and convert them to the actual SDL. And you might be able to drop the use of decorators, as they are still considered experimental.

@MichalLytek
Owner Author

@Evanion
A custom TS transformer/plugin is of course planned, but in the distant future. Transformers are much less supported than "experimental" decorators. With a TS transformer I was rather thinking about extending the reflection capabilities to get rid of all the type => [Foo] annotations. So decorators would be very concise and clear, generics would probably be just as simple as with normal classes, etc.

TypeGraphQL uses classes to describe the types because it allows easy integration with, e.g., class-validator and TypeORM, which also return instances of entity classes, so it's much more pleasant to use instanceof checks rather than recreating the object and providing __typename or some other solution for resolving interfaces and unions. Also, you can define getters and inline methods when using classes, not interfaces. If you like interfaces, just use pure abstract classes that are 1:1 with interfaces.
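
To illustrate the instanceof point, a minimal sketch (Cat/Dog are hypothetical types; the function mimics what a union's resolveType can do when resolvers return real class instances):

import "reflect-metadata";
import { Field, ObjectType } from "type-graphql";

@ObjectType()
class Cat {
  @Field()
  meow!: string;
}

@ObjectType()
class Dog {
  @Field()
  bark!: string;
}

// Because resolvers return real class instances, type resolution can
// rely on instanceof instead of a __typename property on plain objects.
function resolveAnimalType(value: Cat | Dog): string | undefined {
  if (value instanceof Cat) return "Cat";
  if (value instanceof Dog) return "Dog";
  return undefined;
}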

The interface syntax looks nice but without decorator it would be hard to provide information which method is query and which is mutation or subscription. Something like this:

interface UserResolver extends Query {
  users: User[];
}
interface UserResolver extends Mutation {
  addUser?(user: UserInput): User;
}

doesn't look nice. Also, as you can see in the TypeORM example, it would be more complicated to exclude a property from being emitted in the schema than it is now, where you simply don't place a decorator.

So to sum up: TypeGraphQL is not a tool for creating SDL from TS interfaces (which could be done as a CLI that works in the opposite direction to GraphQL Code Generator) but a whole framework for creating a GraphQL API in Node.js, and only the main part of it is its reflection and schema generation capabilities; it also focuses on interoperability with other tools, libs and frameworks.

@goldcaddy77

Would this also hook into the pipeline to allow dynamic creation of additional schema? For instance, I'd like to generate Inputs dynamically from the model attributes.

@MichalLytek
Owner Author

Not directly by this task but yes, the new pipeline will be designed with a hook/plugin system in mind, as well as thinking about #134 😉

@Fyzu

Fyzu commented May 2, 2019

@19majkel94 Any plans for a future release?

@MichalLytek
Owner Author

@Fyzu Not enough support → not enough time → no ETA 😞
https://opencollective.com/typegraphql

@ematipico
Contributor

That's a strange situation though. There are people who are willing to give their time to help develop a new solution, but you don't want to accept it (#304 (comment)).

So you're willing to accept donations so that you have the resources to move forward with this internal rewrite. Fellow developers, though, sometimes prefer to help by spending some time developing and testing.

I hope everything is going to be sorted soon!

@MichalLytek
Owner Author

MichalLytek commented May 27, 2019

but you don't want to accept it

No one would accept a PR with a complete rewrite done by an external contributor. How could I then develop and extend a core whose code I don't know or fully understand?

If you can't wait:

you can continue your work and use your fork until the proper fix will come

I am preparing a complete rewrite, with monorepo support, with plugins in mind, with split packages and an architecture that can easily solve the current issues/blockers and possible future ones. It's not a task that you can do in two weekends; the description here is just an idea, a draft that is probably outdated.

Although fellow developers, sometimes, prefer to help by spending some time developing and testing.

I've accepted plenty of PRs for many features and bug fixes:
https://github.com/19majkel94/type-graphql/pulls?q=is%3Apr+sort%3Aupdated-desc+is%3Aclosed+no%3Aassignee+review%3Aapproved
So you can help with smaller things, not by starting to rewrite the whole codebase.

So keep calm and stay tuned! 😉

@MichalLytek

This comment has been minimized.

@tkvw

tkvw commented Jun 25, 2019

@19majkel94: what is your issue with the monorepo?

@MichalLytek
Owner Author

@tkvw
I need to configure changelog generation from commits.

I would also love to have continuous releases, so that every commit to the master/next branch results in an automatic pre-release (maybe under an @next or @beta tag).

And I also have to configure a full package build script, maybe with pika-pkg, to not only build the TS files but also transform package.json, copy the readme and license, etc.

@leonetosoft

Great idea!

To help, I'm already contributing monthly.

@j
Contributor

j commented Oct 29, 2019

@MichalLytek I was going to try to help by adding support for nested inputs (I feel that inputs are almost pointless if they don't convert to what a user would expect them to be). I found out the hard way that validation doesn't run on nested inputs, which I think all users of this library would assume works as well. I was missing functional tests for those, since I just unit test to check whether a class has a validator function associated with it.

Anyway, when going through this and the decorators PR I submitted, one thing that I feel could benefit this library is to convert the metadata to classes and have "MetadataDefinitions" as interfaces for the decorators. This is sort of what the Doctrine PHP guys do: in their ORM, a lot of the logic is given to the main metadata class, and the builder builds from definitions.

An example of what I mean, applied to params:

// Assumes BasicParamDefinition, TypeValueThunk, TypeOptions, ValidatorOptions
// and ResolverData from the library's internals are in scope.
interface CommonArgDefinition extends BasicParamDefinition {
  getType: TypeValueThunk;
  typeOptions: TypeOptions;
  validate: boolean | ValidatorOptions | undefined;
}

interface ArgParamDefinition extends CommonArgDefinition {
  kind: "arg";
  name: string;
  description: string | undefined;
}

abstract class AbstractParamMetadata<T1 extends CommonArgDefinition, T2 = any> {
  constructor(protected def: T1) {}

  abstract getParam(data: ResolverData<any>, opts?: T2): any;
}

class ArgParamMetadata extends AbstractParamMetadata<
  ArgParamDefinition,
  { globalValidate: boolean }
> {
  getParam(data: ResolverData<any>, opts?: { globalValidate: boolean }) {
    // get param specific to arg
  }
}

I'm kind of spitballing here, but I feel like the code would be easier to navigate and understand.

I'm not sure how much progress you've made on the rewrite.

@MichalLytek
Owner Author

MichalLytek commented Oct 29, 2019

convert metadata to classes

I am trying to use as much of an FP approach as possible, rather than rich, unmaintainable classes and OOP 😕

The solution for nested inputs is simple: have access to the metadata in resolver/middleware scope and then just use the type metadata to recursively transform the nested objects into class instances.
Then, depending on the plugin system (validation libraries other than class-validator), call the validate function recursively or on the root arg object.
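
A minimal sketch of that recursive transformation; getFieldsMetadata and the InputFieldMeta shape are hypothetical stand-ins for whatever the metadata storage exposes:

// Hypothetical per-field metadata: name, target class, and whether
// the field itself is a nested input type.
interface InputFieldMeta {
  name: string;
  getType: () => new () => any;
  isInputType: boolean;
}

// Hypothetical accessor into the metadata storage.
declare function getFieldsMetadata(target: Function): InputFieldMeta[];

function toInputInstance<T>(target: new () => T, plain: Record<string, any>): T {
  const instance = new target();
  for (const field of getFieldsMetadata(target)) {
    const value = plain[field.name];
    // Recurse so nested input objects also become class instances,
    // which lets the validation step run on them as well.
    (instance as any)[field.name] =
      field.isInputType && value != null
        ? toInputInstance(field.getType(), value)
        : value;
  }
  return instance;
}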

@j
Contributor

j commented Oct 29, 2019

Classes are unmaintainable when they start doing too much. FP is unmaintainable when you have a single function trying to do too much. But to each their own :P

@j
Contributor

j commented Oct 29, 2019

I'll play with some ways of doing what you've said above in the current code base. Nested inputs not working is kind of blocking at the moment, which is super frustrating. And previous issues regarding nested inputs have "solutions" that are insanely ugly and definitely not maintainable.

@MichalLytek
Owner Author

You can try a fork that uses class-transformer to create instances, calling the Type(() => Foo) decorator inside @Field. I had to abandon class-transformer because of its problems with Promises: the example with TypeORM lazy loading failed dramatically.
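
For reference, a minimal sketch of that approach with class-transformer, using its @Type decorator and plainToClass to build nested class instances from a plain args object:

import "reflect-metadata";
import { plainToClass, Type } from "class-transformer";

class AddressInput {
  city!: string;
}

class UserInput {
  name!: string;

  // Tells class-transformer which class to instantiate for this field.
  @Type(() => AddressInput)
  address!: AddressInput;
}

const input = plainToClass(UserInput, {
  name: "Ann",
  address: { city: "Oslo" },
});
console.log(input.address instanceof AddressInput); // true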

@j
Contributor

j commented Oct 29, 2019

The one thing I'm trying to think about is that if recursion and type checking are done in the resolver, you have to check whether a field is an input type on every resolve, instead of coming up with a more performant solution.

@j
Contributor

j commented Oct 29, 2019

@MichalLytek Yeah, I don't use class-transformer either. I do it my own way here: https://github.com/j/type-mongodb

I've had issues with class-transformer as well. Constructing nested input types shouldn't be hard. I just don't want to iterate over input types on every resolve to check if they need to be constructed differently.

@MichalLytek
Owner Author

don't want to iterate over input types on every resolve

Oh, that's what you mean... so how can we generate, at schema build time, a "transform" function that takes the args as a parameter and returns the transformed and deserialized class instance?

All I can think of is to prefetch the type info and build something like a conversion tree that we can then traverse at runtime, mapping the args object data 😕
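
A sketch of what such a conversion tree could look like, built once per argument at schema build time and then only traversed per request; all names are hypothetical:

// One node per field whose value must become a class instance.
interface ConversionNode {
  target: new () => any;
  // Child nodes keyed by field name, only for nested input-type fields.
  children: Map<string, ConversionNode>;
}

// Runs per request: walks the prebuilt tree instead of re-checking
// the metadata of every field on every resolve.
function convert(node: ConversionNode, plain: Record<string, any>): any {
  const instance = Object.assign(new node.target(), plain);
  for (const [field, child] of node.children) {
    if (plain[field] != null) {
      instance[field] = convert(child, plain[field]);
    }
  }
  return instance;
}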

@j
Contributor

j commented Oct 29, 2019

All I can think of is to prefetch the types info and build something like a conversion tree that we can then traverse in runtime mapping the args object data

Yup, exactly.

I was thinking ArgParamMetadata could either contain a tree, or a transform function that is pre-generated when the schema is built.

In the library I posted above, I did some tests on using maps vs. array iteration for finding metadata and found that maps were pretty darn fast. Is there a reason you're using arrays for metadata lookups as opposed to having everything in a map? (off topic) It'd mean the performance cost of just doing it in the resolver wouldn't be too bad. Pre-generated would be better, though.
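
The lookup difference in question, as a trivial sketch (not the actual storage code):

interface TypeMetadata {
  target: Function;
  name: string;
}

// Array style: a linear scan on every lookup.
function findInArray(all: TypeMetadata[], target: Function) {
  return all.find(m => m.target === target);
}

// Map style: O(1) lookup keyed by the class constructor itself.
const byTarget = new Map<Function, TypeMetadata>();
function findInMap(target: Function) {
  return byTarget.get(target);
}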

@MichalLytek
Owner Author

MichalLytek commented Oct 29, 2019

Is there a reason you're using arrays for metadata lookups as opposed to having everything in a map? (off topic)

You can find some comments in code base about that thing 😉
I have plenty of legacy code that was born in PoC phase and "works" till current day 😄

When I get to the transformation point, I think I will just make it work first and then refactor to a more performant solution after some profiling.

@j
Contributor

j commented Oct 29, 2019

// TODO: refactor to map for quicker access

Haha, okay, cha-chingggg.

I wrote a simple test and got nested inputs to work with arrays & non-arrays, but then I made the inheritance tests fail. So I'll need to dig a little bit more. I'm definitely using the "slow" method.

@j
Contributor

j commented Oct 29, 2019

@MichalLytek #452 does the job. I can add more tests on caching, etc too. I just got it to work while not breaking anything.

@glen-84
Contributor

glen-84 commented Aug 13, 2020

@MichalLytek,

In addition to #134, would resolving this issue also resolve:

  1. Access to resolver metadata in middleware (#123)?
  2. Global access to built metadata (to allow a CLI to generate additional classes based on the existing metadata)?

@MichalLytek
Owner Author

@glen-84 Yes, typegraphql-next is being written from scratch with support for that in mind, and the built-in decorators rely on it, so I'm actually eating my own dog food 😅

@EdouardBougon

@MichalLytek Do you have a public roadmap, or more information about this rewrite?

@MichalLytek
Owner Author

@EdouardBougon No ETA yet. I don't know if it will be open core, sponsorware, or under a corpo-restricted license, as I suffer from a lack of time for development 😥
