
rcmgr: the last charge #9680

Merged: 1 commit from rcmgr-last-push into ipfs:master on Mar 6, 2023

Conversation

@Jorropo Jorropo commented Mar 1, 2023

Includes work from: #9623
Includes work from: #9612
Fixes: #9650
Fixes: #9621
Fixes: #9577
Fixes: #9603

Jorropo commented Mar 1, 2023

Blocked on libp2p/go-libp2p#2155

@Jorropo Jorropo marked this pull request as ready for review March 1, 2023 14:30
@Jorropo Jorropo requested a review from guseggert March 1, 2023 14:30
@Jorropo Jorropo added the status/blocked Unable to be worked further until needs are met label Mar 1, 2023
Review threads (outdated, resolved): test/cli/harness/node.go (×2), test/cli/rcmgr_test.go
}

maxMemoryMB := maxMemory / (1024 * 1024)
maxFD := int(cfg.ResourceMgr.MaxFileDescriptors.WithDefault(int64(fd.GetNumFDs()) / 2))

// We want to see this message on startup, that's why we are using fmt instead of log.
-	fmt.Printf(`
+	msg := fmt.Sprintf(`
Contributor:
This message needs to be updated: for example, we no longer apply user-supplied overrides on top, and "ipfs swarm limit all" is no longer a valid command.

I think it'd be fine to just remove this entirely though; does it really add much value anymore?

Contributor:

I think the value of the message is that it makes clear what maxMemoryMB value is being used for computing defaults. I'm torn on whether to remove it, because this has been a source of confusion before, but maybe it was more confusing with all the other stuff going on. We could remove it for now.

If we do keep this message, we should:

  1. talk about ipfs swarm resources
  2. remove the user-supplied overrides on top
  3. document what the returned string is.

Jorropo (Contributor, Author):

Please take a look again

@@ -340,9 +338,9 @@ var Core = fx.Options(
 	fx.Provide(Files),
 )

-func Networked(bcfg *BuildCfg, cfg *config.Config) fx.Option {
+func Networked(bcfg *BuildCfg, cfg *config.Config, userRessourceOverrides rcmgr.PartialLimitConfig) fx.Option {
Contributor:

Suggested change:
-func Networked(bcfg *BuildCfg, cfg *config.Config, userRessourceOverrides rcmgr.PartialLimitConfig) fx.Option {
+func Networked(bcfg *BuildCfg, cfg *config.Config, userResourceOverrides rcmgr.PartialLimitConfig) fx.Option {

Review threads (outdated, resolved): core/commands/swarm.go, repo/repo.go
@guseggert guseggert removed the status/blocked Unable to be worked further until needs are met label Mar 1, 2023
@lidel (Member) left a comment:

(I had no bandwidth to review the code, but I have a small ask about an end-user error.)

Review thread (outdated, resolved): config/types.go
@BigLep (Contributor) left a comment:

Thanks a lot @Jorropo for taking this on and helping us get this over the line.

A few high level comments:

  1. After incorporating comments, please add example output in the PR from running this code. That helps validate that we're getting what's expected.
  2. I'll take a crack at adding some commits for the documentation side. That should also help ensure we're aligned here on the desired output.
  3. I was surprised not to see a call to rcmgr.NewLimiterFromJSON. That was the idea behind "Resource Manager: Remove ResourceMgr.Limits configuration from Kubo and use the default libp2p JSON file for it" (#9603). We instead seem to be plumbing through "UserResourceLimitOverrides" in various places. Instead, can we just say that if you provide limits.json we take a hands-off approach and let the user do whatever they like? At that point a user is skiing out of bounds and assumes all liability.
  • As an aside, given there is nothing special about the name "limits.json", let's come up with a clearer name. I have a comment about this inline.
  4. I recognize that there is going to need to be some back and forth here. I'll make it clear that the release is delayed.

Review threads (outdated, resolved): core/commands/swarm.go (×2)
},
Encoders: cmds.EncoderMap{
cmds.Text: cmds.MakeTypedEncoder(func(req *cmds.Request, w io.Writer, ris libp2p.ResourceInfos) error {
tw := tabwriter.NewWriter(w, 30, 8, 0, '\t', 0)
Contributor:

Do we emit TSV anywhere else?
If the default is to do JSON output, we can always have folks use jq to convert to TSV if desired.

( .[0] | keys_unsorted), (.[] | [.[]]) | @tsv

Example snippet: https://jqplay.org/s/SsPvm8FG3ER

That said, I agree a TSV table is most user-friendly, and if we just want to do that, that's fine too.

Contributor:

We can always add more encoders later; the framework we use for commands supports both.
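For illustration, here is a minimal, hedged sketch of how a tabwriter-based text encoder can render the TSV-style table shown later in this PR. The tabwriter parameters match the hunk quoted above; the ResourceInfo row type and the sample values are hypothetical stand-ins, not Kubo's actual types:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// ResourceInfo is a hypothetical row type standing in for libp2p.ResourceInfos.
type ResourceInfo struct {
	Scope, LimitName, LimitValue, Usage, Percent string
}

func writeTable(rows []ResourceInfo) error {
	// Same tabwriter parameters as the encoder above: min width 30, tab width 8, '\t' padding.
	tw := tabwriter.NewWriter(os.Stdout, 30, 8, 0, '\t', 0)
	fmt.Fprintln(tw, "Scope\tLimit Name\tLimit Value\tLimit Usage Amount\tLimit Usage Percent")
	for _, r := range rows {
		fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\n", r.Scope, r.LimitName, r.LimitValue, r.Usage, r.Percent)
	}
	return tw.Flush()
}

func main() {
	_ = writeTable([]ResourceInfo{
		{"system", "Memory", "25000000000", "6848512", "0.0%"},
		{"system", "ConnsInbound", "23841", "0", "0%"},
	})
}
```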

Review threads (outdated, resolved): core/commands/swarm.go (×2), core/node/libp2p/rcmgr_defaults.go, repo/fsrepo/fsrepo.go
@@ -437,6 +443,16 @@ func (r *FSRepo) openConfig() error {
 	return nil
 }

+// openUserRessourceOverrides will remove all overrides if the file is not present.
Contributor:

What do you mean by "remove all overrides"?

I assume that if the file isn't present, we are only using computed defaults. Is that right? Can you please clarify?

Also can we add comments about why Kubo even needs to be looking at libp2p's limits.json file here? Why doesn't libp2p just use it automatically? (I think I know the answer, but I think we should be clear.)

@BigLep (Contributor) commented Mar 1, 2023:

We should also add a link to go-libp2p limits.json docs so it's clear where this file path came from.

(That said I can't find a place to link to per https://filecoinproject.slack.com/archives/C03FFEVK30F/p1677704303811539 )

Contributor:

Ok, so per the Slack thread, there is nothing special about limits.json. My bad for not realizing that earlier. Given we get to come up with the name, let's be more self-describing. Maybe something like libp2p-resource-limit-overrides.json. The key thing is to get the libp2p prefix and to have the name match what we agree to call this in code.

Jorropo (Contributor, Author):

Lotus and Boost already name this file limits.json; I like the consistency.

What do you mean by "remove all overrides"?

It gives a partial config that is empty.
When we call .Build, the partial config uses the concrete value wherever a 0 is present; if I give you an object that is all zeros everywhere, all of the values will be replaced by the supplied concrete config.

I'll update the comment.
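For illustration, a minimal sketch (assuming the go-libp2p resource-manager API used in this PR) of how an all-zero PartialLimitConfig falls back to the scaled defaults at Build time; the memory/FD numbers are hypothetical:

```go
package main

import (
	"fmt"

	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Concrete defaults scaled for (hypothetically) 4 GiB of memory and 1024 FDs.
	defaults := rcmgr.DefaultLimits.Scale(4<<30, 1024)

	// An empty partial config: every zero ("use default") field is replaced
	// by the corresponding value from `defaults` when Build is called.
	var overrides rcmgr.PartialLimitConfig
	concrete := overrides.Build(defaults)

	// The concrete config can then back a fixed limiter, as the PR does.
	limiter := rcmgr.NewFixedLimiter(concrete)
	fmt.Printf("%T\n", limiter)
}
```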

@BigLep (Contributor) commented Mar 2, 2023:

We discussed verbally using a name that is clear for Kubo users (most of whom aren't paying attention to conventions in Filecoin projects like Lotus and Boost). "limits.json" is too generic, as there are a lot of potential limits that aren't being defined here.

Jorropo (Contributor, Author):

I updated it, please take a look again (when I push).

Review thread (outdated, resolved): test/cli/harness/node.go
@@ -106,6 +88,8 @@ func ResourceManager(cfg config.SwarmConfig) interface{} {
 	ropts = append(ropts, rcmgr.WithTrace(traceFilePath))
 }

+limiter := rcmgr.NewFixedLimiter(limitConfig)
Contributor:

I would have thought that if there is a "limits.json" file, we would do rcmgr.NewLimiterFromJSON(reader for limits.json, limitConfig).

That was the spirit of #9603. I think this is an important distinction and I have added a top-level comment on that. Getting alignment there will affect some of the implementation I suspect.

Contributor:

We discussed this verbally. @Jorropo understandably wants to have all the IO happen up front rather than sneaking in later down the line. Per #9680 (comment), we are replicating the logic of rcmgr.NewLimiterFromJSON, but doing it upfront to make it clearer to reason about when I/O is happening.

@BigLep (Contributor) left a comment:

There are some logic questions coming up for me while writing docs and checking it against the implementation...

 	}

-	return defaultLimitConfig.Build(orig), nil
+	return partialLimits.Build(rcmgr.DefaultLimits.Scale(int64(maxMemory), maxFD)), msg, nil
Contributor:

There are a few important logic changes here and I'm not sure they're right...

  1. Previously we were doing the autoscaling before the if cfg.ConnMgr.Type.WithDefault(config.DefaultConnMgrType) != "none" code that ensures System.ConnsInbound is high enough. That was giving us the logic of max(computed System.ConnsInbound, connmgrHighWaterTimesTwo). It now seems like we just get connmgrHighWaterTimesTwo.
  2. Previously we were scaling some base limits by maxMemory and maxFD. Now we're scaling rcmgr.DefaultLimits. I'm confused why rcmgr.DefaultLimits is even in the picture; partialLimits already has all the scopes defined. Why do we need DefaultLimits?

Jorropo (Contributor, Author):

  1. This is a mistake that slipped through while rebasing and fixing conflicts; fixed.
  2. The partial limit is not scaled. We used to define both a base and a scaling config; Antonio changed it so that we now only define a base and then use the default scaling (see the sketch below). I don't know what the impacts of not using a custom scaling are.
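For illustration, a hedged sketch of the two styles being contrasted, using go-libp2p's resource-manager types (the numbers are made up): the older approach defined a custom base limit plus a per-GiB increase and scaled them together, while this PR overlays concrete overrides on top of the already-scaled stock defaults.

```go
package main

import (
	"fmt"

	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Old style: a custom base limit and a custom per-GiB increase, scaled together.
	scaling := rcmgr.DefaultLimits // ScalingLimitConfig
	scaling.SystemBaseLimit.ConnsInbound = 64
	scaling.SystemLimitIncrease.ConnsInbound = 64 // added per extra GiB of memory
	customScaled := scaling.Scale(4<<30, 1024)

	// New style (this PR): scale the stock defaults, then overlay concrete overrides.
	var overrides rcmgr.PartialLimitConfig
	overrides.System.ConnsInbound = 123
	built := overrides.Build(rcmgr.DefaultLimits.Scale(4<<30, 1024))

	fmt.Printf("%T %T\n", customScaled, built)
}
```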

@BigLep (Contributor) commented Mar 1, 2023:

I added docs here: #9685 (I didn't know how to get them into your PR @Jorropo given your changes aren't in ipfs/kubo. Feel free to educate me after.)

@Jorropo (Contributor, Author) commented Mar 2, 2023:

I was surprised not to see a call to rcmgr.NewLimiterFromJSON. That was the idea behind #9603. We instead seem to be plumbing through "UserResourceLimitOverrides" in various places. Instead, can we just say that if you provide limits.json we take a hands-off approach and let the user do whatever they like? At that point a user is skiing out of bounds and assumes all liability.

NewLimiterFromJSON creates a new live limiter object; due to the way the code is architected, it's not easily useful for us.
So I copy-pasted the private readLimiterConfigFromJSON into the *FSRepo:

// NewLimiterFromJSON creates a new limiter by parsing a json configuration.
func NewLimiterFromJSON(in io.Reader, defaults ConcreteLimitConfig) (Limiter, error) {
	cfg, err := readLimiterConfigFromJSON(in, defaults)
	if err != nil {
		return nil, err
	}
	return &fixedLimiter{cfg}, nil
}

func readLimiterConfigFromJSON(in io.Reader, defaults ConcreteLimitConfig) (ConcreteLimitConfig, error) {
	var cfg PartialLimitConfig
	if err := json.NewDecoder(in).Decode(&cfg); err != nil {
		return ConcreteLimitConfig{}, err
	}
	return cfg.Build(defaults), nil
}

We have the same:

	var cfg PartialLimitConfig
	if err := json.NewDecoder(in).Decode(&cfg); err != nil {
		return ConcreteLimitConfig{}, err
	}

logic (except that ours ignores the file if it is not present).
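For illustration, a minimal sketch of that missing-file-tolerant variant (the function name and path are hypothetical, not the exact FSRepo code): an absent file yields an empty PartialLimitConfig, i.e. no overrides.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"

	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

// readUserResourceOverrides is a hypothetical helper: it decodes a
// PartialLimitConfig from path, and treats a missing file as "no overrides"
// (an all-zero partial config) instead of an error.
func readUserResourceOverrides(path string) (rcmgr.PartialLimitConfig, error) {
	var overrides rcmgr.PartialLimitConfig
	f, err := os.Open(path)
	if errors.Is(err, os.ErrNotExist) {
		return overrides, nil // no file: every limit falls back to defaults at Build time
	}
	if err != nil {
		return rcmgr.PartialLimitConfig{}, err
	}
	defer f.Close()
	if err := json.NewDecoder(f).Decode(&overrides); err != nil {
		return rcmgr.PartialLimitConfig{}, err
	}
	return overrides, nil
}

func main() {
	overrides, err := readUserResourceOverrides("libp2p-resource-limit-overrides.json")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("%+v\n", overrides)
}
```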

@Jorropo (Contributor, Author) commented Mar 2, 2023:

Rcmgr tests are passing on my machine.

@BigLep (Contributor) left a comment:

Thanks @Jorropo. This is looking great. Thanks for the back and forth. The reasons I didn't give it an "Approve" are:

  1. Missing the docs changes.
  2. I'd ideally like to see the output of ipfs swarm resources to confirm it's what we'd expect
  3. I had one question about the tests and why we're gating some of the asserts with an if check.

That said I'm good with you merging if:

  1. You add the docs
  2. You look through my comments and incorporate them where they make sense
  3. You are highly confident that the ipfs swarm resources output is what you'd expect given a Swarm.ResourceMgr.MaxMemory value of something like ~4GB.

If you aren't feeling confident, I'm good to review tomorrow (2023-03-03). (I realize that means the RC delays to Monday, 2023-03-06, in that case.)

Review threads (outdated, resolved): config/types.go, core/commands/swarm.go (×3), core/node/libp2p/rcmgr.go, core/node/libp2p/rcmgr_defaults.go (×2), repo/repo.go, test/cli/rcmgr_test.go
Review thread (resolved): repo/fsrepo/fsrepo.go
@BigLep BigLep mentioned this pull request Mar 3, 2023
@Jorropo Jorropo force-pushed the rcmgr-last-push branch 2 times, most recently from de74d8e to bb850b3 Compare March 3, 2023 11:33
@Jorropo (Contributor, Author) commented Mar 3, 2023:

Example output:

Scope					Limit Name		Limit Value		Limit Usage Amount	Limit Usage Percent	
system					Memory			25000000000		6848512			0.0%			
system					FD			262144			21			0.0%			
system					ConnsInbound		23841			0			0%			
transient				Memory			6250000000		0			0%			
transient				FD			65536			0			0%			
transient				ConnsInbound		5960			0			0%			
svc:libp2p.relay/v2			Memory			407388160		0			0%			
svc:libp2p.relay/v2			FD			blockAll		0			n/a			
svc:libp2p.relay/v2			Conns			blockAll		0			n/a			
svc:libp2p.relay/v2			ConnsInbound		blockAll		0			n/a			
svc:libp2p.relay/v2			ConnsOutbound		blockAll		0			n/a			
svc:libp2p.relay/v2			Streams			6216			0			0%			
svc:libp2p.relay/v2			StreamsInbound		6216			0			0%			
svc:libp2p.relay/v2			StreamsOutbound		6216			0			0%			
svc:libp2p.autonat			Memory			53020672		294912			0.6%			
svc:libp2p.autonat			FD			blockAll		0			n/a			
svc:libp2p.autonat			Conns			blockAll		0			n/a			
svc:libp2p.autonat			ConnsInbound		blockAll		0			n/a			
svc:libp2p.autonat			ConnsOutbound		blockAll		0			n/a			
svc:libp2p.autonat			Streams			157			72			45.9%			
svc:libp2p.autonat			StreamsInbound		157			0			0%			
svc:libp2p.autonat			StreamsOutbound		157			0			0%			
svc:libp2p.holepunch			Memory			101847040		0			0%			
svc:libp2p.holepunch			FD			blockAll		0			n/a			
svc:libp2p.holepunch			Conns			blockAll		0			n/a			
svc:libp2p.holepunch			ConnsInbound		blockAll		0			n/a			
svc:libp2p.holepunch			ConnsOutbound		blockAll		0			n/a			
svc:libp2p.holepunch			Streams			436			0			0%			
svc:libp2p.holepunch			StreamsInbound		218			0			0%			
svc:libp2p.holepunch			StreamsOutbound		218			0			0%			
svc:libp2p.identify			Memory			101847040		0			0%			
svc:libp2p.identify			FD			blockAll		0			n/a			
svc:libp2p.identify			Conns			blockAll		0			n/a			
svc:libp2p.identify			ConnsInbound		blockAll		0			n/a			
svc:libp2p.identify			ConnsOutbound		blockAll		0			n/a			
svc:libp2p.identify			Streams			3108			0			0%			
svc:libp2p.identify			StreamsInbound		1554			0			0%			
svc:libp2p.identify			StreamsOutbound		1554			0			0%			
svc:libp2p.ping				Memory			101847040		0			0%			
svc:libp2p.ping				FD			blockAll		0			n/a			
svc:libp2p.ping				Conns			blockAll		0			n/a			
svc:libp2p.ping				ConnsInbound		blockAll		0			n/a			
svc:libp2p.ping				ConnsOutbound		blockAll		0			n/a			
svc:libp2p.ping				Streams			1554			0			0%			
svc:libp2p.ping				StreamsInbound		1554			0			0%			
svc:libp2p.ping				StreamsOutbound		1554			0			0%			
proto:/ipfs/id/1.0.0			Memory			101847040		0			0%			
proto:/ipfs/id/1.0.0			FD			blockAll		0			n/a			
proto:/ipfs/id/1.0.0			Conns			blockAll		0			n/a			
proto:/ipfs/id/1.0.0			ConnsInbound		blockAll		0			n/a			
proto:/ipfs/id/1.0.0			ConnsOutbound		blockAll		0			n/a			
proto:/ipfs/id/1.0.0			Streams			3108			0			0%			
proto:/ipfs/id/1.0.0			StreamsInbound		1554			0			0%			
proto:/ipfs/id/1.0.0			StreamsOutbound		1554			0			0%			
proto:/ipfs/id/push/1.0.0		Memory			101847040		0			0%			
proto:/ipfs/id/push/1.0.0		FD			blockAll		0			n/a			
proto:/ipfs/id/push/1.0.0		Conns			blockAll		0			n/a			
proto:/ipfs/id/push/1.0.0		ConnsInbound		blockAll		0			n/a			
proto:/ipfs/id/push/1.0.0		ConnsOutbound		blockAll		0			n/a			
proto:/ipfs/id/push/1.0.0		Streams			3108			0			0%			
proto:/ipfs/id/push/1.0.0		StreamsInbound		1554			0			0%			
proto:/ipfs/id/push/1.0.0		StreamsOutbound		1554			0			0%			
proto:/libp2p/circuit/relay/0.2.0/stop	Memory			407388160		0			0%			
proto:/libp2p/circuit/relay/0.2.0/stop	FD			blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/stop	Conns			blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/stop	ConnsInbound		blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/stop	ConnsOutbound		blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/stop	Streams			15540			0			0%			
proto:/libp2p/circuit/relay/0.2.0/stop	StreamsInbound		15540			0			0%			
proto:/libp2p/circuit/relay/0.2.0/stop	StreamsOutbound		15540			0			0%			
proto:/libp2p/autonat/1.0.0		Memory			53020672		294912			0.6%			
proto:/libp2p/autonat/1.0.0		FD			blockAll		0			n/a			
proto:/libp2p/autonat/1.0.0		Conns			blockAll		0			n/a			
proto:/libp2p/autonat/1.0.0		ConnsInbound		blockAll		0			n/a			
proto:/libp2p/autonat/1.0.0		ConnsOutbound		blockAll		0			n/a			
proto:/libp2p/autonat/1.0.0		Streams			157			72			45.9%			
proto:/libp2p/autonat/1.0.0		StreamsInbound		157			0			0%			
proto:/libp2p/autonat/1.0.0		StreamsOutbound		157			0			0%			
proto:/libp2p/circuit/relay/0.2.0/hop	Memory			407388160		0			0%			
proto:/libp2p/circuit/relay/0.2.0/hop	FD			blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/hop	Conns			blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/hop	ConnsInbound		blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/hop	ConnsOutbound		blockAll		0			n/a			
proto:/libp2p/circuit/relay/0.2.0/hop	Streams			15540			0			0%			
proto:/libp2p/circuit/relay/0.2.0/hop	StreamsInbound		15540			0			0%			
proto:/libp2p/circuit/relay/0.2.0/hop	StreamsOutbound		15540			0			0%			
proto:/libp2p/dcutr			Memory			101847040		0			0%			
proto:/libp2p/dcutr			FD			blockAll		0			n/a			
proto:/libp2p/dcutr			Conns			blockAll		0			n/a			
proto:/libp2p/dcutr			ConnsInbound		blockAll		0			n/a			
proto:/libp2p/dcutr			ConnsOutbound		blockAll		0			n/a			
proto:/libp2p/dcutr			Streams			436			0			0%			
proto:/libp2p/dcutr			StreamsInbound		218			0			0%			
proto:/libp2p/dcutr			StreamsOutbound		218			0			0%			
proto:/ipfs/ping/1.0.0			Memory			101847040		0			0%			
proto:/ipfs/ping/1.0.0			FD			blockAll		0			n/a			
proto:/ipfs/ping/1.0.0			Conns			blockAll		0			n/a			
proto:/ipfs/ping/1.0.0			ConnsInbound		blockAll		0			n/a			
proto:/ipfs/ping/1.0.0			ConnsOutbound		blockAll		0			n/a			
proto:/ipfs/ping/1.0.0			Streams			1554			0			0%			
proto:/ipfs/ping/1.0.0			StreamsInbound		1554			0			0%			
proto:/ipfs/ping/1.0.0			StreamsOutbound		1554			0			0%

The blockAll for protos and svcs is because libp2p uses the same types even when they don't apply (protocols and services do not manage connections; they only work with multiplexed streams).
I would expect a thinner type that does not have this information.

I don't think this is a blocker for the RC.

@Jorropo Jorropo force-pushed the rcmgr-last-push branch 2 times, most recently from c8a8406 to 3a36dcf Compare March 3, 2023 11:58
@BigLep (Contributor) left a comment:

Looks good to me - thanks @Jorropo

Maybe add something like this to the resource manager .md file:

Why do svc and peer scopes list a lot of blockAll values for ipfs swarm resources?

The svc and peer scopes function at the stream (not connection) level. As a result, limits for

  • FD
  • Conns
  • ConnsInbound
  • ConnsOutbound

aren't applicable for these scopes. They use the same internal struct within go-libp2p as other scopes like system, transient, etc., but have a value of "blockAll" internally, which is why they are outputted in this way.

@guseggert (Contributor):

Wouldn't it be trivial to just drop those limits for svc and peer scopes when dumping them out? Is there a scenario where this information is useful, or is it always misleading?

@guseggert (Contributor):

Also in that latest example output you posted, I don't see any "unlimited" limits, so I can't verify that the result is interpretable in that case.

Comment on lines +379 to +384
limit = "unlimited"
percentage = "n/a"
case rcmgr.BlockAllLimit64:
limit = "blockAll"
Contributor:

Suggested change:
-	limit = "unlimited"
+	limit = "Unlimited"
 	percentage = "n/a"
 	case rcmgr.BlockAllLimit64:
-	limit = "blockAll"
+	limit = "BlockAll"

for consistency with the other enum string representations used in the output.
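For illustration, a hedged sketch of the full mapping this suggestion implies, assuming go-libp2p's LimitVal64 sentinel values (Unlimited64, BlockAllLimit64); this is not Kubo's exact encoder code:

```go
package main

import (
	"fmt"
	"strconv"

	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

// limitLabel renders a LimitVal64 the way the suggestion proposes:
// sentinel values get capitalized names, everything else is the plain number.
func limitLabel(v rcmgr.LimitVal64) string {
	switch v {
	case rcmgr.Unlimited64:
		return "Unlimited"
	case rcmgr.BlockAllLimit64:
		return "BlockAll"
	default:
		return strconv.FormatInt(int64(v), 10)
	}
}

func main() {
	fmt.Println(limitLabel(rcmgr.Unlimited64), limitLabel(rcmgr.BlockAllLimit64), limitLabel(1554))
}
```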

Jorropo added a commit to ipfs/go-ipfs-http-client that referenced this pull request Mar 6, 2023
This is required because of the cyclic module dependencies; this will fix CI for ipfs/kubo#9680.
@@ -309,7 +309,7 @@ jobs:
- run:
name: Cloning
command: |
-            git clone https://github.com/ipfs/go-ipfs-http-client.git
+            git clone https://github.com/ipfs/go-ipfs-http-client.git -b bump-for-rcmgr-last-push
Jorropo (Contributor, Author):

I'll remove this once the cycle has been upgraded.

@@ -149,6 +149,7 @@ jobs:
with:
repository: ipfs/go-ipfs-http-client
path: go-ipfs-http-client
+          ref: bump-for-rcmgr-last-push
@Jorropo (Contributor, Author) commented Mar 6, 2023:

I'll remove this once the cycle has been upgraded.

Co-Authored-By: Antonio Navarro Perez <[email protected]>
@Jorropo Jorropo disabled auto-merge March 6, 2023 11:46
@Jorropo Jorropo merged commit 7986196 into ipfs:master Mar 6, 2023
@Jorropo Jorropo deleted the rcmgr-last-push branch March 6, 2023 11:47
Jorropo added a commit to ipfs/go-ipfs-http-client that referenced this pull request Apr 16, 2023
This is required because of the cyclic module dependencies; this will fix CI for ipfs/kubo#9680.
Jorropo added a commit to ipfs/go-ipfs-http-client that referenced this pull request Apr 18, 2023
This is required because of the cyclic module dependencies; this will fix CI for ipfs/kubo#9680.