Future of the horizon-pool repo #169
About the untrusted PR: the Actions functionality in GitHub can run pre-defined tasks and produce artifacts given a Docker image. A comment action can access those artifacts and post a comment. The build task doesn't need access to secrets, and the API token is only accessible to the (trusted) script posting the comment. Example here: https://github.com/machine-learning-apps/pr-comment
If servers are needed, I am running multiple which I could put the scripts on. Anyway, a cheap virtual Hetzner Ubuntu box costs around 4 Euros per month, so that shouldn't be the problem. I think we also have to talk about the organizational side of things, namely stuff like:
Maybe a few things could also be done as a concerted effort (like creating and reviewing the most commonly used packages). If you know the package is alright, reviewing becomes a lot easier. This all ties in with automation. Investing a little more time in doing that properly now will probably save many people a lot of time and energy in the long run, so it is definitely something that pays off. Automation is what would enable such clear and easy workflows. If a lot of that could become part of the pool manager itself, that would be even better. I gladly use git from the shell to do things by hand, but I doubt that this is a very common thing among electronics people.
A streamlined PR review process would greatly benefit the pool. I'd imagine stalling PRs without apparent reason can easily demotivate users and is, in a way, holding Horizon back because there are not as many contributions as there could be. As @atoav mentions, the details need to be worked out there, but there needs to be a common, agreed-upon base for all decisions, too. Let me know if I can do anything for the convention or its issues to get the discussion there going again.

As we are talking about processes, I also wanted to come back to the possibility of multiple ‘stages’, ‘channels’, or however you want to call them, containing parts with different levels of dependability. Here I had left some ideas on how the review process and the responsibility for part correctness could be split up among users. I think it is still relevant as it could make the reviewers' lives easier, who wouldn't have to shoulder everything, while providing more probably-correct parts for end users.

In a nutshell, my idea was to have separate ‘testing’ and ‘stable’ pools (branches of the repository, or with some built-in tagging system). Contributions could be merged into ‘testing’ with a minimal amount of sanity checks from reviewers, where they would be available to end users after an explicit opt-in. Users building their schematics and layouts with these untested parts can report back in the part's tracking issue, testifying to what they validated. As soon as every item on a PR's checklist (footprint, pinout, 3D model, …) has been independently validated, it can be merged into ‘stable’.

This could already be done today, with a ‘testing’ pool including the main ‘stable’ pool, without changes to Horizon. If we were to adopt something like this, it still might be a good idea to think about UX and possible direct integration into Horizon first, though. I would, for example, imagine a simple flag ‘confirmed’ for each unit/entity/symbol/etc.
The current ‘PR load’ isn't anywhere near high enough that a small team of reviewers couldn't shoulder it, and letting average users cross-check parts instead of just trusted reviewers might be debatable, but I think a process like this could greatly increase the number of available parts for end users (if they are willing to cross-check everything, which most people probably do anyway).
Doesn't this just shift the issue? How does the comment-posting script ensure that it doesn't end up posting spam?
Thanks for the offer, but I try to stick with managed solutions such as GitHub Pages, Actions, Discourse, etc. that don't require any sysadmin work. Having thought about this a bit, I think I have something that could work, even though it doesn't seem particularly elegant:
The comment-posting script only posts a comment with a fixed template and the output artifacts from the PR. The script itself uses the version from a predefined branch rather than whatever comes in the PR. That's a weird configuration, which is why I linked to an example. If someone is motivated enough to provide their spam in the form of a Horizon library item, well, I guess we just have to live with that.
That could be solved as well by having the AWS Lambda script look at the presence of certain markers or flags that only a privileged user is allowed to add. That would make this a semi-automated process, though. I agree with @fruchti that conforming to the template is enough of a hurdle to get rid of random bots and low-effort spammers. What might be more problematic would be well-meaning users posting copyrighted material (e.g. 3D models), because that stuff isn't easy to figure out automatically. Edit: I offered my servers because I already do the sysadmin work on them; however, an AWS Lambda would be a good fit as well.
Very good point. There are also bots which make agreeing to the licence a prerequisite for contributions, i.e. contributors would have to sign that they have the rights to redistribute their 3D models under the pool's licence. As an example, this one is the first one I came across: https://github.com/finos/cla-bot
I'm probably still not getting it: We add a workflow to this repo that gets triggered by forked PRs and doesn't have access to any useful secrets (the GITHUB_TOKEN is read-only), so how is this workflow supposed to do anything privileged such as commenting on a PR? No matter how it's going to be deployed, I think that a bot that does as much linting as possible is something that'd help a lot as the contributor will get immediate feedback on their PR and some common mistakes can be caught without opening the pool manager.
You should probably create an issue for that. Is any of the two lists particularly long? Re multiple pools: One workflow that's currently supported, but not really intuitive is to create a pool for each project and mix-and-match pools from other users. As far as the git integration goes, for users not familiar with git, there's the "remote" tab that'll appear when downloading the pool using the pool manager.
The workflow gets triggered by PRs. Said workflow has access to secrets and can do privileged stuff, but you cannot modify the workflow itself from a PR without approval, so it can neither leak secrets nor do anything you haven't preconfigured. The key thing is that the workflow is specified to use a particular version or branch when it's triggered.

Separately from this, you run a Docker image (that you control) that also cannot be modified from a PR without approval, that does whatever you like in the background (linting, rendering and all) and produces a set of artifacts (images, error/warning lists, whatever); it has access to the PR's changes. Your commenting workflow gets access to these outputs and can incorporate them into a comment in any way it wishes.

The PR author has read-only access to the workflow script, read-only access to the Docker image, and no access to the secrets. The workflow has access to the Docker container's output, and posting rights. The Docker container has code you control and data coming from the PR. Of course, both the workflow itself and the Docker image it uses can be changed by a PR, but those changes are not effective (do not get run) until and unless a person with commit access approves them. They do get run in the submitter's repo, but don't have access to your repo's secrets. Hope this helps. Like I said, it is confusing, thus the example implementation.
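Since this setup is easy to get wrong, here is a rough Python sketch of the trusted commenting half under the assumptions above. The function names, comment template, and artifact shapes are all invented for illustration; only the GitHub REST endpoint for issue comments is real. The point is that the comment's structure is hard-coded in the trusted repo, and only the artifact contents vary:

```python
import json
import urllib.request

API = "https://api.github.com"

def format_comment(errors, warnings, image_url):
    """Render the bot comment from a fixed template.

    Only the artifact *contents* (error/warning lists, rendered image)
    come from the PR's build; the structure is baked into the trusted
    workflow, so a fork can't make the bot post arbitrary text.
    """
    lines = ["## Automated pool check", ""]
    lines.append(f"![rendered preview]({image_url})")
    lines.append("")
    lines.append(f"**Errors:** {len(errors)}")
    lines += [f"- {e}" for e in errors]
    lines.append(f"**Warnings:** {len(warnings)}")
    lines += [f"- {w}" for w in warnings]
    return "\n".join(lines)

def post_comment(repo, pr_number, token, body):
    """POST the comment to the PR's issue thread.

    Runs in the base repository's context with its GITHUB_TOKEN, so the
    token never reaches code coming from the fork.
    """
    req = urllib.request.Request(
        f"{API}/repos/{repo}/issues/{pr_number}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the fork only supplies artifact contents, it can influence what the error list says, but not escape the template or reach the token.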
As far as I understand workflows, they can be modified by PRs as well. The docs also state that the GITHUB_TOKEN is read-only for forked PRs: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull-request-events-for-forked-repositories Secrets aren't passed either: https://docs.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#using-encrypted-secrets-in-a-workflow
From your link: The example I linked responds to that event in our repository, not the forked one. And then we can do all the things we want. GITHUB_TOKEN is passed to the runner, but the runner runs in our context, not the forked repo's. GITHUB_TOKEN gets passed to workflows that are triggered by PRs, but the repo where the PR is coming from cannot access our GITHUB_TOKEN. And of course a PR can modify the workflow, but only once it's merged. Pre-merge, the workflow that gets run is the workflow that's in the base repo, even though it gets run on the changes in the PR. So here is what happens:
Yeah, I know it's confusing and underdocumented. But it does actually allow a sensible workflow.
I just found https://github.blog/2020-08-03-github-actions-improvements-for-fork-and-pull-request-workflows/ The |
I tested |
We now have a rudimentary bot that can be triggered by a PR comment that starts with |
I believe the concept presented by @atoav is essential to increase the traction of this EDA. Users should be able to use different pools in the same project; this would solve the PR waiting-list time without depending on a centralized review committee to agree on rules or check the requests.
Sorry for resurrecting this. I was thinking about this a bit more tonight. I'll preface this by saying that I am by no means using Horizon in my day-to-day, but I have really enjoyed using it on a couple of small projects to compare it against other EDA tools. I think that library/pool management is critical to the success of an EDA tool, and I am largely a fan of the way Horizon does it. By far (vs. Altium or KiCad, for example), my favorite has been the way that Horizon manages libraries/pools. I think leveraging git to version-control and manage revisions is the right approach.

The big thing that strikes me as missing is the fact that you can't mix/match repos. This almost brings up a philosophical debate on the merits of a monorepo vs. a polyrepo approach with regards to managing pools. In my mind, there is a strong argument to be made for using different repos, but in a structured/templated way. The biggest advantage of this is that a centralized GitHub Action could be created and managed by the project that performs all the linting/testing no matter where the repo lives. For example, if you have a private pool, you could manage this in your personal GitHub account and leverage the project's linting/checks CI for new commits. GitHub has the concept of template repositories, so you could have a structure that is sane by default and include the GH Actions by default.

If we had the ability to cross-reference items in pools, like via pool UUID as a namespace, for example: Then, you could organize repos with a bit more flexibility. For example you could do something by vendors:
Or you could do something by category:
Or it could be done with a combination of the two, where maybe certain vendors have large product sets, or where the pool is automatically generated via scripts. Then you could have those scripts live with the repo and get run via a scheduled CI action. The other thing that might be interesting is that in the future, it might be possible to have a JSON file of the rules/checks in the repo, and then both Horizon itself and the CI checks could reference those rules/checks. Lastly, 3D models could live in separate repos as well. This way, if someone didn't want the models, just the symbols/footprints, they could choose not to even download the 3D models. All in all, I think there would be some major advantages to allowing multiple pools to be used and/or being able to cross-reference items between pools. I could see it being phased in by adding a few new features and then later on restructuring the pools. That way it isn't too much of an undertaking at once.
You can in fact, even if it's a bit clunky: Pools support including other pools (settings tab in the pool manager). So if you want to keep your parts in your own repo rather than messing around with branches, you can create your own pool and have it include the default pool.
That's already given by the included pools mentioned above. Since UUIDs are globally unique, there's no need for namespacing. Right now, conflicting UUIDs are resolved by having the item in the local pool override the item from the included pool. We can do to the pool whatever we want as long as the new implementation implements the

My initial reservation against having multiple pools was that it leads to fragmentation. However, I think that this just makes the fragmentation that will happen/has happened more evident. Another plus for multiple repos is that I don't think that performance enhancements will keep pace with the pool's growth. Even right now, downloading the pool repo on Windows (slower disk I/O than Linux) isn't as fast as I want it to be. With the pool on a spinning disk, it's even worse.
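The override rule described above is simple enough to sketch. This is a hypothetical, dict-based illustration (not Horizon's actual C++ data structures): items from included pools are applied first, then local items replace any entry with the same UUID.

```python
def merge_pools(local_items, included_pools):
    """Merge pool items into one view keyed by UUID.

    Included pools are applied first; the local pool is applied last,
    so an item in the local pool overrides an item with the same UUID
    from any included pool. Because UUIDs are globally unique, no
    namespacing is needed.
    """
    merged = {}
    for pool in included_pools:       # lower priority
        for item in pool:
            merged[item["uuid"]] = item
    for item in local_items:          # local pool wins on conflicts
        merged[item["uuid"]] = item
    return merged
```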
As they take up a nontrivial amount of space, this could be the right approach. I however want to avoid messing with path variables as it's done in KiCad. If we want to do this, each pool (repo) can optionally have a single secondary repo that contains the 3D models. Some ideas for implementation:
An implementation obstacle we need to overcome is that we now have to take care of updating the project pool to reflect changes made in the upstream pools. Right now, updating a pool with included pools completely ignores the pool database created by the included pools and looks at every item for itself. Before that implementation, I was experimenting with merging the pool on-the-fly by means of SQL views. That however turned out to be a bit too fragile. A halfway-between solution could be to just copy the rows from the included pools databases to the project pool.
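That halfway solution can be sketched with Python's sqlite3 module. The `items` table and its columns here are made up for illustration (Horizon's real schema differs); `INSERT OR IGNORE` keeps rows already present in the project pool, mirroring the rule that the local pool overrides included pools:

```python
import sqlite3

def copy_included_pool(project_db, included_db):
    """Copy item rows from an included pool's SQLite database into the
    project pool's database.

    Rows whose primary key (UUID) already exists in the project pool
    are left untouched, so local items keep overriding included ones.
    """
    con = sqlite3.connect(project_db)
    try:
        # Attach the included pool's database under a second schema name
        con.execute("ATTACH DATABASE ? AS included", (included_db,))
        con.execute("INSERT OR IGNORE INTO items SELECT * FROM included.items")
        con.commit()
        con.execute("DETACH DATABASE included")
    finally:
        con.close()
```

Compared to merging on the fly with SQL views, the copied rows are a plain snapshot, so queries stay simple at the cost of having to re-run the copy when an upstream pool changes.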
I include another pool in my forked horizon-pool (work branch), for very odd parts, quick-and-dirty fixes or personal decals that have no place in the official pool. My forked pool only differs from the official pool in including the changes that have not been accepted as a PR yet (so I can e.g. add parts when I need them, use them right away and still contribute to the official pool). In the included pool I had to duplicate padstacks, symbols, units, entities etc. from the official pool that I wanted to reuse. This works, but offloads a bit of cognitive load onto the user. It is not immediately obvious how one would do that, and as mentioned maybe it would be good if:
So I really agree to the plan to make these changes. Breaking out 3D-parts into their own repo/pool/bucket makes a lot of sense too, although I am not sure what would be the smartest solution there in terms of topology. |
Honestly, I think having separate PRs for 3D models vs. the part stuff isn't that big of a deal. I would assume there are others like myself who just care about the package/part side of things, and there are others who are focused on 3D rendering of boards. TBH, the only time I use the 3D view is to take a look and make sure vias are tented appropriately and my soldermask looks good before sending a board out to be manufactured. My understanding is the 3D model directory doesn't have JSON objects with it, but does have a UUID that is inside the package JSON. Would it make sense to pull the JSON out of the package and basically put a JSON object that lives with the 3D model and then just reference it in the package JSON by UUID instead? Also, with 3D models, you would want to check/lint based on a couple different things like: do you have a license for the STEP file, what is the license for it, where did it originate from (URL), etc. But honestly, that could be added to the JSON file and then it would be simple to check against it. In the future it would be possible to generate lists of packages that don't have 3D models and track those inside the 3D model repo. Someone who is proficient with 3D models could tackle those or associate models with packages that are already created.
3D models are referenced purely by filename. The UUID in the package is used to tell multiple models of a package apart. If the same model is referenced by multiple packages, it'll have a different UUID in each package.
If your PCA ends up being more than an eval board, chances are it'll have to go in some sort of enclosure and you need to import the PCA into mechanical CAD. To me at least, 3D models are an integral part of any modern PCB design tool. Apart from the pull requests, we'd also need to integrate the separate 3D models in several other places:
We'd also need some way of assigning the 3D model repos to the regular ones. I've thought of making them a submodule, but that still adds considerable overhead. The main/only(?) argument for separating them is disk space and download size. Right now, the pool contains 80 MB of 3D models for vertical pin headers, which isn't all that great. By splitting these pin headers into a separate pool/repo, we already get a significant size reduction as users who don't need them can just not download that pool.
I also think the cleanest solution by far is to have the 3D models next to the packages in the same pool. Sometimes I'd wish for a way to procedurally generate the geometry on the fly, because then we'd only have to store the code that generates the geometry, and not the resulting data. This would also allow tuning the resolution of the resulting meshes. For very generic parts (SMD) this could even be part of the existing footprint (or a new set of generators). For more complex geometry, STEP is still going to be the way in the end, but many things would get easier with such a feature IMO.
We can procedurally generate geometry - that's how a lot of KiCad models are built too. The problem is that they use cadquery, which is a horrible pain to install (the recommended method is using conda, which is a horrible pain in general).
There are no meshes in STEP. It uses boundary representation.
Exactly what I wanted to say :) Some more specific thoughts on what will most likely get implemented: The key to making the use of multiple pools in a project feasible will be replacing/enhancing the pool cache with a special project pool. The pool will look like this on disk:
Items copied into the cache will get some additional data such as the original path and the pool they came from. Padstacks will get another field to indicate which package they came from. I've thought about preserving file names and paths in the cache, but that can get difficult when items get moved. One issue that we need to solve is that calling |
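A minimal sketch of what caching an item with that extra bookkeeping could look like. The `_imp` metadata key, its field names, and the UUID-based file naming are invented for this sketch (Horizon's actual implementation will differ); it just illustrates recording the original path, source pool, and, for padstacks, the owning package:

```python
import json
from pathlib import Path

def cache_item(item_path, source_pool_uuid, cache_dir, package_uuid=None):
    """Copy a pool item into the project pool, recording where it came
    from so later updates can sync it with the upstream pool.
    """
    data = json.loads(Path(item_path).read_text())
    data["_imp"] = {
        "orig_path": str(item_path),
        "pool_uuid": source_pool_uuid,
    }
    if package_uuid is not None:
        # padstacks remember which package they belong to
        data["_imp"]["package_uuid"] = package_uuid
    dest_dir = Path(cache_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # name the file by UUID, since upstream file names/paths may change
    dest = dest_dir / f"{data['uuid']}.json"
    dest.write_text(json.dumps(data, indent=2))
    return dest
```

Naming cached files by UUID sidesteps the problem mentioned above of items getting moved or renamed upstream.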
Apart from the implementation details, we also need to figure out how to organize these multiple repos. From my perspective, grouping solely by manufacturer or device type doesn't really cut it. TI for example makes almost everything that can be made out of silicon. Some ideas for what should go in its own pool
Any suggestions on what to do with odd devices like the Si53306-B-GM clock buffer or the LCR-0202 Vactrol analog optoisolator? I have plans for providing a centralized pool registry to provide an index of installable pools in the application. It'd also be nice to have a way to find parts among all pools without having to guess which one to download.
Re: organizing... I think that makes a ton of sense. For the two examples, you could have a timing repo and then put crystals/oscillators in there as well. Likewise, you could have an optical repo (optoisolators, LEDs, lasers, etc.) or you could do isolation (maybe transformers go in there too). Honestly, there are always going to be oddball devices, so having a catch-all or "uncategorized" repo might not be a bad idea. Then, if a logical group starts to form, it can be split into its own repo. Having the ability to re-organize items in pools will be extremely important. Having a bot command to move the JSON to a different repo might be a really nice thing to have for this. If/when this change happens, I can help out with that as well; just let me know how you want it done and I can work on it, since I know it can be tedious. I can also go through some distributor websites and just start randomly picking parts and see how the categorization would hold up, and start documenting types/categories of things and where they go (which repo). I think having documentation on the workflow/examples and maybe a video might be really helpful for folks who are new to Horizon. In my mind some additional or renamed categories that form (by no means is this inclusive):
I think the important thing is to just have an index of where the UUIDs live pool-wise, like you mentioned, that way they can be re-organized as appropriate.
Thanks for the clarifications. I think beyond a certain granularity, deciding where a new part shall go will become a challenge for users. The strength of having multiple repos is that special-purpose repos can be drawn up more quickly and that no full consensus has to exist about how to do it. E.g. Bob could decide he needs a valve guitar amplifier repository, which only collects parts used for tube amps. For those who make tube amps this might be incredibly valuable, while for those who never even think about them it is out of the way. Sorting those tube sockets, specialized capacitors etc. by their oddball manufacturers would result in fragmented repos nobody needs. I think the most useful order would orient itself towards usage. I think the proposed categories are not a bad start. However, it must be very easy to use all, mix and match, etc. Maybe drawing a distinction between official and 3rd-party also makes sense.
To be honest, I don't like the idea of splitting the pool by categories. I see problems with the packages. Some common packages like SSOP or SOIC will be required in several categories. How to handle this?
I'd rather change the way how 3D models are stored. How about |
While being able to use multiple pools is a good idea for sharing internal symbols and footprints inside a group of projects or company, I'm not sure that splitting up this pool into smaller categories would really help with the contribution workflow. The way I see having a split pool benefiting contribution is allowing delegation of various component types to various people, but if these split pools are still official, is there any guarantee that maintainers would be found for these pools? Maintainership of different parts of the pool by different people is still possible with one repo. In short, the problem seems more of a human organizational problem than one which should be solved by adding more pools and more features to horizon (though these features will most certainly be welcome and have other usecases). Having multiple official pools definitely has some disadvantages, and I'd tend to believe that adding more maintainers to this repo will have more of an effect for less effort than splitting this pool up.
There have been some developments with cadquery I was not aware of. Apparently this is now a thing https://github.com/CadQuery/cq-cli/releases and it packages everything necessary, entirely removing the need to install it. It's still a massive pain to keep up to date, but much less so than before. It's very limited compared to the main cadquery, but there's just enough functionality there to be able to generate STEP on demand. In addition, I've engaged a friend to look into the build process of OCP and cadquery in order to make it more sane and less conda-dependent. Perhaps this will enable a more practical solution.
Thanks for everyone's ideas and suggestions. This reminded me that one goal of the pool was to avoid inventing a taxonomy for parts. Splitting up the pool doesn't really line up with that. However I think it still makes sense to factor out large numbers of generated items as not all users might need them and it slows down the part browser.
Of course not.
Yes, that's the point of including pools in other pools. However, having a separate packages pool could make contributions much harder than they are right now as we'd need to tell users that the new generic package they're adding in a PR needs to go in a different repo.
I don't really see how LFS will improve things in our case. 3D models don't really change all that often, so cloning the repo doesn't download that many unneeded files. It also looks like libgit2 (used by the pool manager for downloading pools) doesn't really support LFS. Some 3D models (basically everything you get from manufacturers) don't allow for redistribution. However, in some cases the STEP file is available for download in a way that can be automated, such as https://www.molex.com/pdm_docs/stp/500901-0801_stp.zip We could then put this information in the pool in some way to automatically download the models (if needed?). This feature could also be used to automatically pull in 3D models from another repo or so.
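Automating such a download is straightforward to sketch. This hedged Python version fetches a zip like the Molex one above and keeps only the STEP members; error handling and the pool-side metadata format that would record the URL are omitted, and the function names are invented:

```python
import io
import zipfile
import urllib.request
from pathlib import Path

def fetch_step_model(url, dest_dir):
    """Download a manufacturer's zipped STEP file and unpack the
    .stp/.step members into dest_dir. Returns the extracted paths."""
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    return extract_step(payload, dest_dir)

def extract_step(zip_bytes, dest_dir):
    """Extract only STEP files from the zip payload, skipping readmes,
    license notes and other bundled files."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    out = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.lower().endswith((".stp", ".step")):
                zf.extract(name, dest)
                out.append(dest / name)
    return out
```

Since the model is fetched from the manufacturer on demand, the file itself never has to be redistributed in the pool repo.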
Definitely. I'm also thinking of having Horizon EDA register a URL scheme, such as
There's certainly some truth to the point that we're trying to solve a human problem through technology... As you can see, I haven't really made up my mind yet on how to move forward on this issue. Regardless of how this will pan out, I think that converting the project cache to a pool is a good thing to do, as this will make the part browser more consistent since it'll then show the items as they're cached, even if they're not present in the actual pool. We can also use this opportunity to fix some buggy behavior such as looking at frames in the schematic properties dialog resulting in them being added to the cache.
Hmmm, I guess the number of parts imported into Horizon will grow either way. Generated parts are mostly passives like resistors and capacitors, and every project needs them. Furthermore, most users won't throw out pools after they've finished a project. Wouldn't it make more sense to optimize the part browser? Like grouping parts by their first tag and displaying these groups collapsed if nothing has been typed into the filter above? We could end up with having a handful of categories. Furthermore, this will make the parts more searchable, since at least the first tag must be picked carefully.
Depending on what a user is building and their background, they might be somewhat overwhelmed by finding 80 100nF 0603 capacitors. For them, we could offer a pool that contains generic parts.
Suppose we also add a whole bunch of through-hole resistors as a pool and I know that my project won't need them; I could opt not to use the pool with the TH resistors in the project. That way, the part browser doesn't have to cope with parts that are never needed.
Unfortunately, tags aren't ordered: https://github.com/horizon-eda/horizon/blob/7c3a8361fda4a599141b6703c37d40a1ef990424/src/pool/part.hpp#L49
I think the user story for "I want a part with a precise MPN" is quite good already. Users that just want "a resistor" (because they only draw a schematic) or just "an 0805 resistor" because they didn't decide on anything yet (or aren't experienced enough) have a harder time tho. You are probably aware of circuitJS, where you can scroll to set a resistor value. Sometimes I wish that I could just draw a schematic up in a similar fashion and decide which precise-MPN 100k resistor I want to take afterwards.
I don't think we would need multiple pools for this tho. E.g. if all TH resistors have a tag called

The good thing is that Horizon's pool model is flexible enough to improve in all directions. We really need to figure out which direction is good for an assumed growth of the pool.
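The tag-based include/exclude filtering discussed here is easy to sketch in Python. The part dicts and MPNs below are just examples (Horizon stores tags as an unordered set in C++, as noted above; this only illustrates the filtering idea):

```python
def filter_parts(parts, exclude_tags=frozenset(), include_tags=frozenset()):
    """Filter a part list by tags.

    A part is dropped if it carries any excluded tag, and, if
    include_tags is non-empty, kept only if it carries all of them.
    Since tags are a set, no ordering is assumed.
    """
    result = []
    for part in parts:
        tags = set(part["tags"])
        if tags & set(exclude_tags):
            continue
        if include_tags and not set(include_tags) <= tags:
            continue
        result.append(part)
    return result
```

A persistent exclude list in the preferences (e.g. hiding everything tagged "th") could then be one call over the merged pool, instead of needing a separate pool per category.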
The recent commits add support for project pools and thus make it feasible to use multiple pools in a project.
Very cool, thank you. I wonder: maybe we have to add a new section in the docs (next to Why a Pool?) that is about pool organisation? So how to structure things, how to include things, explaining project pools and so on. Because right now, even to me as a long-time user/contributor, it is kinda foggy what is possible and what is actually a good way to do things.
Awesome! This solves my remaining annoyances with part and project organisation. ‘Missing in pool’ items don’t seem to show up in the project pool cache tab, though?
Good catch, fixed in horizon-eda/horizon@9df1dfa. Missing items are told apart from items only in this pool by being in the

To anyone who already migrated projects to the project pool: I noticed that packages didn't get migrated correctly. To fix this, delete the |
Should it be noted somewhere that regular project pool entries shouldn’t reside in a subdirectory with this name? Another nitpick: Cached items show up as overridden (blue) in the lists, which is a bit confusing.
Yes, probably in the same place as:
With horizon-eda/horizon@119d3f3 they're differentiated from overriding items if the pool isn't a project pool. This only takes pool UUIDs into account, so items missing from the included pools will still show up as green.
Now that we have the support for multiple pools somewhat in place, I think we can go into detail on what should stay in the main pool and what not. From my perspective, we should keep as much as possible in the main pool and only factor out specific items:
To get access to the generic items in the main pool, all of the extra pools will include the main pool.
I got here while looking for generic parts... here's my 5c: I think any type of rigid part-organization structure adds complexity for users because there's no one-size-fits-all scenario. It should be up to the user to organize their library the way it fits their needs. Also, for the growth and popularity of Horizon EDA, the community should be able to share parts, review, and comment. Therefore, an important feature would be an easy mechanism to move parts between pools, individually and in batches, to create new pools.
A few of these will be needed in pretty much every project, though, so you’d include the ‘passive components’ pool every time. At most, there could be separate THT and SMT passives pools because you’d likely only need one.
Exactly. The main problem of other component libraries that the flat pool structure solves is the ambiguity of the ways parts can be grouped together. With a system with sub-libraries, you can at least click through a few until you find the part you want, but pools are completely opaque until you include them in a project. If the subdivision of the pool is to be useful, the resulting ‘categories’ must be overlap-free and the component association intuitive to beginners.
Currently, I add parts I’m unlikely to need in another project to the project-specific pool but it’d be very nice to have a way to share ‘niche’ parts without clogging the main pool. Another idea: the part browser could have inverse filtering, too, perhaps by tag. By default, a list of tags would be excluded from display and search, e.g. ‘resistor’ and ‘capacitor’, which I’d assume would be good for performance. Because the filter is shown directly below the search fields, it is also more discoverable than additional pools you’d have to include.
Is this the main problem here? Isn't there a way to optimize GTK's list view? From my web development background, I know techniques like lazy loading. Is the used list view capable of something similar?
It also makes pool updates slower. I've already played with putting the SQLite action into its own thread, but figured the complexity wasn't worth it yet.
Chances are, one also doesn't need passives from every manufacturer under the sun.
Sounds reasonable, any idea on what the UI should look like?
I'm glad you asked, since the latest commit added just that, though there's still some copywriting to be done: add your pool to https://github.com/horizon-eda/horizon-pool-index/tree/master/pools with the level set to |
Any first idea on what the boundary would look like for "too niche to be in core"? |
@RX14 As one of the main oddball contributors (it appears), I'd guess availability plays a huge role in this. So if the part is only available from an obscure Chinese Aliexpress reseller, it might be oddball. If the part is no longer produced but there is still stock being sold (e.g. certain germanium transistors), it is quite definitely oddball. If it is very special-purpose and not sold by the typical distributors (Mouser/Digikey/...), it might be oddball as well (e.g. Coolaudio V2044A). Drawing the line here will always be subjective, I guess.

@carrotIndustries I would consider breaking out all of my audio stuff into an Audio/Synth pool. How would we deal with shared things like symbols? E.g. if I create an OTA symbol/unit/entity for, say, an LM13700, this would probably be something that should be in the main pool as well? Should I add it via PR to both pools then?

On the issue of inverse filters: I think I proposed something similar a while back when I suggested letting users configure a set of global exclude/include filters in the preferences. This way, if someone feels particularly annoyed by a subset of parts, they could narrow down with filters what they see in their day-to-day usage of the software. Maybe adding a visible switch to the part browser to temporarily disable that filter makes sense as well, but I think these filters should live in the preferences and persist between restarts of the software (as opposed to being visible in the part browser and non-persistent between restarts). |
That part in particular could go into the main pool as it's readily available at digikey and other reputable distributors.
I think it makes more sense to decide this based on the part's availability. If a part is particularly hard to get or only available on ebay or so, it shouldn't be in the main pool. |
How about a filter exactly like the ‘Tag’ filter, just called ‘Exclude tags’? It would be pre-populated with ‘resistor’ + ‘capacitor’ (+ ‘inductor’?). I’d rather not put very generic/vague tags like ‘passive’ in there by default, so users don’t unknowingly hide parts they are actually looking for.
Maybe ‘With tags’ + ‘Without tags’ would be a clearer labelling, not sure there. |
Sure! I can't make any promises regarding the amount of time I can spend on reviews, but I'd love to help out whenever I find some time. The review guidelines are described in the horizon-pool-convention repo, right? |
Same for me, sign me up! |
Ah sure thing, I am all in |
As is probably evident from PRs not getting merged, I don't have the time and motivation to review external contributions. Let's use this issue to brainstorm ideas on how that situation can be improved.
Making the review process easier
To easily get PRs into my clone of the pool for review, I added a bit of libgit2-based code that pulls a PR into a local branch and merges it with master. Since that feature didn't meet my robustness requirements, it gathered dust on a local branch. To make it available to other people interested in reviewing PRs, I've merged it into master: horizon-eda/horizon@e2d4cd2, but it needs to be enabled manually in the
prefs.json
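For reviewers who prefer the command line, roughly the same flow can be approximated with plain git, since GitHub publishes every pull request at the read-only ref `pull/<number>/head`. This is only a sketch: the PR number (123), the remote name (`origin`), and the branch name `master` are placeholders, not a confirmed part of any tooling.

```shell
# Sketch: fetch a GitHub PR into a local review branch and merge master
# into it, so the pool can be opened in Horizon for review.
# The PR number, remote name, and "master" are placeholders.
review_pr() {
    local pr=$1
    git fetch origin "pull/$pr/head:review-pr-$pr" &&
    git checkout "review-pr-$pr" &&
    git merge master   # resolve conflicts, then open the pool in Horizon
}
```

Usage would be e.g. `review_pr 123`; afterwards, `git checkout master && git branch -D review-pr-123` discards the review branch.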
In the grand scheme of things, I see two ways to make reviewing PRs easier:
More automation
Automatically check for missing items, such as PRs containing just a unit, or datasheet links pointing to secondary sources. Some of this can probably be supported by the existing GitHub Actions workflow: https://github.com/horizon-eda/horizon-pool/blob/master/.github/workflows/all.yml In the long run, I'm envisioning something equivalent to Debian's Lintian, but for packages, parts, etc.
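As a rough sketch of what one such Lintian-style check could look like, the snippet below flags part files whose datasheet field is absent or empty. It rests on assumptions not confirmed in this thread: that parts are stored as JSON files under a given directory and carry a `"datasheet"` field, and it uses a naive textual match for illustration where a real linter would parse the JSON properly.

```shell
# Hypothetical lint sketch: report part JSON files with a missing or
# empty "datasheet" field. The directory layout and field name are
# assumptions about the pool's schema, and the grep pattern is a naive
# textual check, not a JSON parser.
lint_datasheets() {
    local dir=$1 found=0 f
    while IFS= read -r -d '' f; do
        # match `"datasheet": "<at least one character>`
        if ! grep -q '"datasheet"[[:space:]]*:[[:space:]]*"[^"]' "$f"; then
            echo "missing datasheet: $f"
            found=1
        fi
    done < <(find "$dir" -name '*.json' -print0)
    return "$found"
}
```

A CI job could then run something like `lint_datasheets parts` and fail the build on a non-zero exit status, posting the output as a PR comment.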
More reviewers
To give more people than just me merge permissions, we probably need to work through the open issues on https://github.com/horizon-eda/horizon-pool-convention/ so that no matter who reviews a PR, we end up with the same result.
What else?
I don't believe that there's a single solution to this problem, so we probably need to do at least all of the things listed above. Any more ideas?