New schema generation pipeline #183
Something that may be worth considering during this refactor is multiple schemas in a single codebase. This is a requirement for my services because, for our application, there is a public GraphQL schema, which clients (web, mobile) interact with, and an internal GraphQL schema, with which other microservices interact. Placing these schemas in the same codebase allows easy code reuse. If this is something you'd be open to, I'd be happy to work on it. I'm not familiar with this codebase, but there does not appear to be anything in the API that restricts it to only one schema.
Of course. #110 is about separating schemas, with a temporary solution for your codebase problem.
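That temporary solution presumably amounts to building two schemas from different resolver lists. A minimal sketch, assuming TypeGraphQL's `buildSchema` with an explicit `resolvers` option (the resolver classes and paths are hypothetical):

```ts
import "reflect-metadata";
import { buildSchema } from "type-graphql";
// Hypothetical resolver classes - one set per audience.
import { PublicUserResolver } from "./public/user.resolver";
import { InternalUserResolver } from "./internal/user.resolver";

async function bootstrap() {
  // Each buildSchema call sees only the resolvers passed to it,
  // so one codebase can emit two independent schemas.
  const publicSchema = await buildSchema({ resolvers: [PublicUserResolver] });
  const internalSchema = await buildSchema({ resolvers: [InternalUserResolver] });
  return { publicSchema, internalSchema };
}
```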
One of the things I reacted to was that you define a typedef as a class in the current implementation. It should be possible to write your typedefs more naturally (consider this pseudo code):
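The pseudo code itself did not survive extraction; presumably it was something in this spirit, with plain interfaces as the model specification (all names are illustrative):

```ts
// The model is specified declaratively, as plain interfaces...
interface User {
  id: string;
  email: string;
  friends: User[];
}

// ...leaving classes free to carry only implementation concerns.
class UserService {
  getFriends(user: User): User[] {
    return user.friends;
  }
}
```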
That way, classes can be dedicated to the implementation rather than the specification of the model.
@Evanion TypeGraphQL uses classes to describe the types because it allows it to easily integrate with e.g. TypeORM. The interface syntax looks nice, but without decorators it would be hard to provide the information about which method is a query and which is a mutation or subscription. Something like this:

```ts
interface UserResolver extends Query {
  users: User[];
}

interface UserResolver extends Mutation {
  addUser?(user: UserInput): User;
}
```

doesn't look nice. Also, as you can see in the TypeORM example, it would be more complicated to exclude some property from being emitted in the schema than it is now by just not placing a decorator. So to sum up: TypeGraphQL is not a tool for creating SDL from TS interfaces (which could be done as a CLI that works in the opposite direction).
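The TypeORM example referenced above also did not survive extraction; a sketch of the kind of single-class integration meant, assuming standard TypeORM and TypeGraphQL decorators:

```ts
import "reflect-metadata";
import { Field, ID, ObjectType } from "type-graphql";
import { Column, Entity, PrimaryGeneratedColumn } from "typeorm";

@Entity()
@ObjectType()
export class User {
  @PrimaryGeneratedColumn()
  @Field(() => ID)
  id!: number;

  @Column()
  @Field()
  email!: string;

  // Has a @Column but no @Field: persisted in the database,
  // never emitted in the GraphQL schema.
  @Column()
  password!: string;
}
```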
Would this also hook into the pipeline to allow dynamic creation of additional schema? For instance, I'd like to generate Inputs dynamically from the model attributes.
Not directly as part of this task, but yes, the new pipeline will be designed with a hook/plugin system in mind, and with an eye on #134 😉
@19majkel94 Any plans for a future release?
@Fyzu Not enough support → not enough time → no ETA 😞
That's a strange situation though. There are people who are willing to give time to help develop a new solution, but you don't want to accept it (#304 (comment)). So you're willing to accept donations so you can have the resources to move forward with this internal rewrite, although fellow developers sometimes prefer to help by spending some time developing and testing. I hope everything is going to be sorted soon!
No one will accept a PR with a complete rewrite done by an external contributor. How could I then develop and extend a core whose code I don't know or completely understand? If you can't wait: …

I am preparing a complete rewrite, with monorepo support, with plugins in mind, with split packages and an architecture that can easily solve current issues/blockers and possible future issues. It's not a task that you can do in two weekends - the description here is just an idea, a draft that is probably outdated.

I've accepted plenty of PRs for many features and bug fixes. So keep calm and stay tuned! 😉
@19majkel94: what is your issue with the monorepo?
@tkvw I would also love to have continuous releases, so every commit to the master/next branch would result in an automatic pre-release (maybe with …). And I also have to configure a full package build script, maybe with pika-pkg, to not only build the TS files but also transform package.json, copy the readme and licence, etc.
Great idea! To help, I'm already contributing monthly.
@MichalLytek I was going to try to help with allowing support for nested inputs (I feel that inputs are almost pointless if they don't convert to what a user would expect them to be). I found out the hard way that validation doesn't run on nested inputs, which I think all users of this library would assume works as well. I was missing functional tests for those, since I just unit test to validate that a class has a validator function associated with it. Anyway, when going through this and the decorators PR I submitted, one thing that I feel could benefit this library is to maybe convert metadata to classes and have "MetadataDefinitions" as interfaces for decorators. This is sort of what the … does. An example that I wanted to add was for params:

```ts
interface CommonArgDefinition extends BasicParamDefinition {
  getType: TypeValueThunk;
  typeOptions: TypeOptions;
  validate: boolean | ValidatorOptions | undefined;
}

interface ArgParamDefinition extends CommonArgDefinition {
  kind: "arg";
  name: string;
  description: string | undefined;
}

abstract class AbstractParamMetadata<T1 extends CommonArgDefinition, T2 = any> {
  constructor(protected def: T1) {}
  abstract getParam(data: ResolverData<any>, opts?: T2): any;
}

class ArgParamMetadata extends AbstractParamMetadata<
  ArgParamDefinition,
  { globalValidate: boolean }
> {
  getParam(data: ResolverData<any>, opts?: { globalValidate: boolean }): any {
    // get param specific to arg
  }
}
```

I'm kind of spitballing here, but I feel like code navigation would be easier. I'm not sure how much progress you've made on the rewrite.
I am trying to use as much of an FP approach as possible, rather than rich, unmaintainable classes and OOP 😕 The solution for nested inputs is simple - have access to the metadata in resolver/middleware scope and then just use the type metadata to recursively transform the nested objects into class instances.
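A minimal sketch of that recursive transformation, under assumed metadata shapes (none of these names are the library's actual internals):

```ts
// Assumed metadata shape: for each input class, the fields whose
// declared type is itself a registered input class.
type NestedFields = Map<string, Function>;        // field name -> nested class
type InputMetadata = Map<Function, NestedFields>; // input class -> nested fields

// Recursively turn a plain args object into class instances, so that
// e.g. class-validator decorators on nested inputs can actually run.
function toInstance(target: Function, data: unknown, meta: InputMetadata): unknown {
  if (data == null || typeof data !== "object") return data;
  if (Array.isArray(data)) return data.map(item => toInstance(target, item, meta));
  const instance = Object.create(target.prototype);
  const nested = meta.get(target) ?? new Map<string, Function>();
  for (const [key, value] of Object.entries(data)) {
    const nestedType = nested.get(key);
    instance[key] = nestedType ? toInstance(nestedType, value, meta) : value;
  }
  return instance;
}
```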
Classes are unmaintainable when they start doing too much. FP is unmaintainable when you have a single function trying to do too much. But to each their own :P
I'll play with some ways of doing what you've said above in the current code base. The lack of nested-input support is kind of blocking at the moment, which is super frustrating. And previous issues regarding nested inputs have "solutions" that are insanely ugly and definitely not maintainable.
You can try a fork that uses …
The one thing I'm trying to think about is that if that recursion and type checking is done in the resolver, you're having to check whether a field is an input type on each resolve, instead of coming up with a more performant solution.
@MichalLytek Yeah, I don't use … I've had issues as well with …
Oh, that's what you mean... so how can we generate, at schema build time, a "transform" function that will take …? All I can think of is to prefetch the type info and build something like a conversion tree that we can then traverse at runtime, mapping the args object data 😕
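A sketch of what such a prebuilt conversion tree could look like; everything here is hypothetical - the tree is assembled once at schema build time, and request handling only traverses it:

```ts
interface ConversionNode {
  target: Function;                      // class to instantiate
  children: Map<string, ConversionNode>; // field name -> subtree
}

// Build step, run once per argument at schema build time.
// nestedFieldsOf is an assumed metadata accessor; circular input
// types would need a guard, omitted here for brevity.
function buildTree(
  target: Function,
  nestedFieldsOf: (cls: Function) => Map<string, Function>,
): ConversionNode {
  const children = new Map<string, ConversionNode>();
  for (const [field, cls] of nestedFieldsOf(target)) {
    children.set(field, buildTree(cls, nestedFieldsOf));
  }
  return { target, children };
}

// Request-time traversal: no metadata storage lookups at all.
function applyTree(node: ConversionNode, data: unknown): unknown {
  if (data == null || typeof data !== "object") return data;
  if (Array.isArray(data)) return data.map(item => applyTree(node, item));
  const instance = Object.create(node.target.prototype);
  for (const [key, value] of Object.entries(data)) {
    const child = node.children.get(key);
    instance[key] = child ? applyTree(child, value) : value;
  }
  return instance;
}
```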
Yup, exactly. I was thinking … In my library I posted above, I did some tests on using maps vs. array iteration for finding metadata and found that maps were pretty darn fast. Is there a reason you're using arrays for metadata lookups as opposed to having everything in a map? (off topic) It'd make it so that the performance cost of just doing it in the resolver wouldn't be too bad. Pre-generated would be better, though.
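The lookup difference being discussed, in miniature (both data shapes are hypothetical):

```ts
interface FieldMetadata {
  target: Function; // the class the field belongs to
  name: string;
}

declare const allFields: FieldMetadata[];                    // flat array, as now
declare const fieldsByClass: Map<Function, FieldMetadata[]>; // keyed alternative

class UserInput {}

// O(total registered fields) on every lookup:
const viaArray = allFields.filter(f => f.target === UserInput);
// O(1) bucket fetch:
const viaMap = fieldsByClass.get(UserInput) ?? [];
```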
You can find some comments in the code base about that 😉 When I arrive at the transformation point, I think I will just make it work first and then refactor to a more performant solution after some profiling.
Haha, okay, cha-chingggg. I wrote a simple test and got nested inputs to work with arrays & non-arrays, but then I made the inheritance tests fail. So I'll need to dig a little bit more. I'm definitely using the "slow" method.
@MichalLytek #452 does the job. I can add more tests on caching, etc. too. I just got it to work while not breaking anything.
In addition to #134, would resolving this issue also resolve: …?
@glen-84 Yes, the …
@MichalLytek Do you have a public roadmap, or more information about this rewrite?
@EdouardBougon No ETA yet. I don't know if it will be open core, sponsorware, or under a corpo-restricted licence, as I suffer from a lack of time for development 😥
Right now, the schema generation pipeline works in this way:

1. decorators put small pieces of metadata into `MetadataStorage`
2. `MetadataStorage` "builds" the full metadata
3. the schema is generated (`graphql-js`) using "built" metadata from `MetadataStorage`
It wasn't designed for such complicated features as inheritance, etc., so it has started to become a bottleneck. So the schema generation pipeline has to be rewritten into a new design:
- `MetadataStorage` stores all the small parts of metadata from decorators (like now)
- `MetadataBuilder` takes `resolvers` and `types` to build the extended, full metadata from `MetadataStorage`, working like a "pure" function
- `SchemaGenerator` accepts the built metadata object and generates a schema corresponding to the metadata (without any need to check the metadata storage)

This would simplify the pipeline a lot, as well as allow implementing new features and fixing #133 and #110.
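A shape-level sketch of that three-stage design; every name and signature here is illustrative, not the actual planned API:

```ts
import { GraphQLObjectType, GraphQLSchema, GraphQLString } from "graphql";

// Stage 1: decorators only push small, raw pieces of metadata here.
class MetadataStorage {
  readonly rawParts: object[] = [];
  collect(part: object) {
    this.rawParts.push(part);
  }
}

// Placeholder for the extended, cross-referenced metadata.
interface BuiltMetadata {
  queryFields: string[];
}

// Stage 2: works like a "pure" function - raw metadata in, built metadata out.
function buildMetadata(
  storage: MetadataStorage,
  _resolvers: Function[],
  _types: Function[],
): BuiltMetadata {
  // ...cross-reference resolvers, types, and storage.rawParts here...
  return { queryFields: ["hello"] };
}

// Stage 3: consumes only the built metadata, never the storage itself.
function generateSchema(metadata: BuiltMetadata): GraphQLSchema {
  return new GraphQLSchema({
    query: new GraphQLObjectType({
      name: "Query",
      fields: Object.fromEntries(
        metadata.queryFields.map(name => [name, { type: GraphQLString }]),
      ),
    }),
  });
}
```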