feat: support v2 backing image #3295
base: master
Conversation
Walkthrough
The pull request introduces significant modifications across multiple files to enhance the handling of backing images and volumes in the Longhorn system.
ref: longhorn/longhorn 6341 Signed-off-by: Jack Lin <[email protected]>
Actionable comments posted: 16
🧹 Outside diff range and nitpick comments (28)
webhook/resources/backupbackingimage/validator.go (1)
38-54
Consider validating backing image state before backup.
The current validation only checks the data engine type. Consider also validating the backing image state to ensure it's in a valid state for backup (e.g., ready, not in an error state).
Example validation to consider:
```go
if backingImage.Status.State != longhorn.BackingImageStateReady {
	return werror.NewInvalidError(
		fmt.Sprintf("cannot backup backing image %q in %v state", backingImageName, backingImage.Status.State), "")
}
```
🧰 Tools
🪛 golangci-lint (1.62.2)
50-50: S1039: unnecessary use of fmt.Sprintf
(gosimple)
47-47: printf: fmt.Sprintf call has arguments but no formatting directives
(govet)
upgrade/v17xto180/upgrade.go (3)
45-50
Consider using more idiomatic Go code.
Note that `for bi.Spec.DiskFileSpecMap != nil { ... }` is not a one-time nil check: a `for` loop with a condition repeats while the condition holds, so this block never terminates once entered. The code should be made correct and more concise.
Consider this refactor:
```diff
-	// before v1.8.0, there should not have any v2 data engine disk in the backing image.
-	for bi.Spec.DiskFileSpecMap != nil {
-		for diskUUID := range bi.Spec.DiskFileSpecMap {
-			bi.Spec.DiskFileSpecMap[diskUUID].DataEngine = longhorn.DataEngineTypeV1
-		}
-	}
+	// before v1.8.0, there should not have any v2 data engine disk in the backing image.
+	for diskUUID := range bi.Spec.DiskFileSpecMap {
+		bi.Spec.DiskFileSpecMap[diskUUID].DataEngine = longhorn.DataEngineTypeV1
+	}
```
The nil check is unnecessary, as ranging over a nil map is safe in Go and yields zero iterations.
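As a quick illustration of the behavior this suggestion relies on, the following standalone sketch (the `countEntries` helper is hypothetical, not Longhorn code) shows that ranging over a nil map yields zero iterations:

```go
package main

import "fmt"

// countEntries ranges over the map without any nil guard.
// Ranging over a nil map is safe in Go and simply runs zero iterations.
func countEntries(m map[string]string) int {
	n := 0
	for range m {
		n++
	}
	return n
}

func main() {
	var nilMap map[string]string // nil, never initialized
	fmt.Println(countEntries(nilMap))                      // 0
	fmt.Println(countEntries(map[string]string{"a": "1"})) // 1
}
```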
52-57
Consider extracting common upgrade logic.
The code for upgrading DiskFileStatusMap is identical to DiskFileSpecMap, suggesting an opportunity for code reuse.
Consider extracting the common logic:
```diff
+	// Helper function to upgrade disk file maps to v1
+	upgradeDiskFileMap := func(m map[string]*longhorn.BackingImageDiskFile) {
+		for diskUUID := range m {
+			m[diskUUID].DataEngine = longhorn.DataEngineTypeV1
+		}
+	}
+
 	// before v1.8.0, there should not have any v2 data engine disk in the backing image.
-	for bi.Status.DiskFileStatusMap != nil {
-		for diskUUID := range bi.Status.DiskFileStatusMap {
-			bi.Status.DiskFileStatusMap[diskUUID].DataEngine = longhorn.DataEngineTypeV1
-		}
-	}
+	upgradeDiskFileMap(bi.Spec.DiskFileSpecMap)
+	upgradeDiskFileMap(bi.Status.DiskFileStatusMap)
```
This refactor:
- Reduces code duplication
- Makes the upgrade logic more maintainable
- Follows Go idioms for map iteration
Line range hint
23-57
Solid upgrade strategy for v2 backing image support.
The upgrade implementation follows a robust pattern:
- Properly initializes DataEngine fields for backward compatibility
- Ensures consistent v1 marking across all backing image components
- Maintains proper error handling and logging
The approach aligns well with the PR objectives for supporting v2 backing images while ensuring a smooth upgrade path from v1.7.x.
Consider documenting the upgrade strategy in the project's upgrade guide to help users understand the implications of the v1 to v2 transition.
k8s/generate_code.sh (2)
68-71
Add error checking for CRD generation
While the explicit paths improve reliability, consider adding error checking to ensure CRDs are generated successfully, which is especially important for the new DataEngine field support.
```diff
-${GOPATH}/bin/controller-gen crd paths=${APIS_DIR}/... output:crd:dir=${CRDS_DIR}
+if ! ${GOPATH}/bin/controller-gen crd paths=${APIS_DIR}/... output:crd:dir=${CRDS_DIR}; then
+  echo "Failed to generate CRDs"
+  exit 1
+fi
```
79-79
Validate generated CRDs
Consider adding validation for the generated CRDs to ensure they are well-formed and include all required fields (especially the new DataEngine field).
```diff
-${GOPATH}/bin/kustomize build ${CRDS_DIR} >> ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/crds.yaml
+if ! ${GOPATH}/bin/kustomize build ${CRDS_DIR} >> ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/crds.yaml; then
+  echo "Failed to build CRDs"
+  exit 1
+fi
+
+# Validate the generated CRDs
+if ! kubectl apply --dry-run=client -f ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/crds.yaml > /dev/null; then
+  echo "Generated CRDs validation failed"
+  exit 1
+fi
```
api/backingimage.go (2)
55-58
Consider documenting DataEngine parameter constraints
Since this is a public API endpoint, it would be helpful to document any constraints or valid values for the DataEngine parameter, especially given its role in v2 backing image support.
Consider adding a comment above the function describing the valid DataEngine values and their implications.
55-58
Ensure error messages include DataEngine context
The error message could be enhanced to include the DataEngine value when reporting failures, as this would aid in debugging issues related to v2 backing images.
Consider updating the error message:
```diff
-return errors.Wrapf(err, "failed to create backing image %v from source type %v with parameters %+v", input.Name, input.SourceType, input.Parameters)
+return errors.Wrapf(err, "failed to create backing image %v from source type %v with parameters %+v and data engine %v", input.Name, input.SourceType, input.Parameters, input.DataEngine)
```
k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go (2)
15-32
Add field documentation and validation tags.
The new `BackingImageV2CopyInfo` struct would benefit from:
- Field documentation to explain the purpose and constraints of each field
- Validation tags for fields that should have constraints (e.g., Size, Progress)
Consider adding documentation and validation like this:
```diff
 type BackingImageV2CopyInfo struct {
 	// +optional
+	// Name is the identifier of the backing image copy
 	Name string `json:"name"`
 	// +optional
+	// UUID is the unique identifier of this specific copy
 	UUID string `json:"uuid"`
 	// +optional
+	// DiskUUID is the identifier of the disk containing this copy
 	DiskUUID string `json:"diskUUID"`
 	// +optional
+	// Size represents the total size of the backing image in bytes
+	// +kubebuilder:validation:Minimum=0
 	Size int64 `json:"size"`
 	// +optional
+	// Progress indicates the copying progress as a percentage
+	// +kubebuilder:validation:Minimum=0
+	// +kubebuilder:validation:Maximum=100
 	Progress int `json:"progress"`
 	// +optional
+	// State represents the current state of the backing image copy
 	State BackingImageState `json:"state"`
 	// +optional
+	// CurrentChecksum is the SHA512 checksum of the backing image data
 	CurrentChecksum string `json:"currentChecksum"`
 	// +optional
+	// Message provides additional information about the current state
 	Message string `json:"message"`
 }
```
15-32
Consider version compatibility strategy.
The introduction of v2-specific types suggests a need for a clear version management strategy. Consider:
- Adding a version field to explicitly track the backing image version
- Implementing conversion webhooks for v1 to v2 migration
Example addition:
```diff
 type BackingImageV2CopyInfo struct {
+	// Version indicates the backing image version (v1 or v2)
+	// +kubebuilder:validation:Enum=v1;v2
+	Version string `json:"version"`
 	// ... existing fields ...
 }
```
manager/backingimage.go (1)
89-89
Follow Go parameter naming conventions
The parameter `DataEngine` should be in camelCase, per Go conventions.

```diff
-func (m *VolumeManager) CreateBackingImage(name, checksum, sourceType string, parameters map[string]string, minNumberOfCopies int, nodeSelector, diskSelector []string, secret, secretNamespace string, DataEngine string) (bi *longhorn.BackingImage, err error) {
+func (m *VolumeManager) CreateBackingImage(name, checksum, sourceType string, parameters map[string]string, minNumberOfCopies int, nodeSelector, diskSelector []string, secret, secretNamespace string, dataEngine string) (bi *longhorn.BackingImage, err error) {
```
webhook/resources/backingimage/validator.go (1)
75-77
Enhance error message for v2 backing image clone restriction.
The validation logic correctly prevents cloning from v2 backing images. However, consider making the error message more descriptive to help users understand the limitation.
```diff
-	return werror.NewInvalidError("clone from a v2 backing image is not supported", "")
+	return werror.NewInvalidError("cloning from a v2 backing image is not supported in the current version. Please use a v1 backing image as the source.", "")
```
engineapi/enginesim.go (1)
290-308
Add documentation for SPDK methods
While the implementation pattern is correct, please add documentation for each method describing:
- Purpose and functionality
- Parameter requirements
- Return value semantics
- Error conditions
Example format:
```go
// SPDKBackingImageCreate creates a new SPDK backing image.
// Parameters:
//   name: unique identifier for the backing image
//   backingImageUUID: UUID of the backing image
//   diskUUID: UUID of the target disk
//   checksum: verification checksum
//   fromAddress: source address
//   srcDiskUUID: UUID of the source disk
//   size: size in bytes
// Returns:
//   *imapi.BackingImage: created backing image info
//   error: implementation or validation error
```
webhook/resources/backingimage/mutator.go (1)
192-195
LGTM! Consider enhancing the error message with migration guidance.
The immutability check for DataEngine is correctly implemented. However, the error message could be more helpful.
Consider enhancing the error message to guide users on the proper migration path:
```diff
-	err := fmt.Errorf("changing data engine for BackingImage %v is not supported", oldBackingImage.Name)
+	err := fmt.Errorf("changing data engine for BackingImage %v is not supported. Please create a new backing image with the desired data engine", oldBackingImage.Name)
```
controller/instance_manager_controller_test.go (2)
66-67
LGTM, but consider adding test cases for the proxy connection counter.
The addition of `proxyConnCounter` is well integrated into the test setup. However, consider adding test cases that:
- Verify counter increments/decrements correctly
- Check counter behavior during instance manager state transitions
- Validate counter state after instance manager failures
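As a sketch of the first suggested case, the standalone example below exercises increment/decrement behavior under concurrency. The `connCounter` type here is a hypothetical stand-in for `util.Counter`; the real counter's API may differ.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// connCounter is a minimal, hypothetical stand-in for the proxy
// connection counter used by the instance manager controller.
type connCounter struct{ n int64 }

func (c *connCounter) IncreaseCount() { atomic.AddInt64(&c.n, 1) }
func (c *connCounter) DecreaseCount() { atomic.AddInt64(&c.n, -1) }
func (c *connCounter) GetCount() int64 { return atomic.LoadInt64(&c.n) }

func main() {
	c := &connCounter{}

	// Simulate many monitors opening and closing proxy connections concurrently.
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.IncreaseCount()
			c.DecreaseCount()
		}()
	}
	wg.Wait()

	// Every open was matched by a close, so the count returns to zero.
	fmt.Println(c.GetCount()) // 0
}
```

A real test would additionally assert the count at each instance manager state transition, not just the final value.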
Line range hint
1-500
Consider adding error scenario test cases.
While the existing test cases cover various state transitions, they don't explicitly test error handling for proxy connections. Consider adding test cases for:
- Network failures during proxy connection establishment
- Resource cleanup when an instance manager terminates unexpectedly
- Race conditions between multiple instance managers accessing the counter
engineapi/types.go (1)
114-118
Add documentation for new SPDK backing image methods
The new methods align well with the PR objectives for v2 backing image support. However, consider adding documentation to describe:
- Purpose and behavior of each method
- Parameter requirements and validation rules
- Expected error conditions
Example documentation format:
```go
// SPDKBackingImageCreate creates a new SPDK backing image.
// Parameters:
//   - name: unique identifier for the backing image
//   - backingImageUUID: UUID of the backing image
//   - diskUUID: UUID of the target disk
//   - checksum: verification checksum
//   - fromAddress: source address
//   - srcDiskUUID: UUID of the source disk
//   - size: size in bytes
// Returns:
//   - *imapi.BackingImage: created backing image info
//   - error: on failure
```
controller/controller_manager.go (1)
96-99
LGTM: Backing image controller properly updated for v2 support
The addition of `proxyConnCounter` to `NewBackingImageController` is well integrated and supports the v2 backing image feature.
Consider documenting the relationship between the proxy connection counter and v2 backing images in the controller's documentation, as this is a significant architectural change that affects multiple controllers.
k8s/crds.yaml (1)
499-503
LGTM! Consider adding field descriptions.
The addition of `dataEngine` fields and v2-specific status fields properly supports v2 backing images. The enum validation ensures only valid values ("v1" or "v2") can be specified.
Consider adding descriptions for the new fields to improve API documentation:

```diff
 dataEngine:
+  description: "The data engine version to use for this backing image. Can be 'v1' or 'v2'."
   enum:
   - v1
   - v2
   type: string
```
Also applies to: 555-556, 585-588
engineapi/proxy_backing_image.go (1)
12-14
Consider using a parameter struct to enhance readability
The `SPDKBackingImageCreate` function accepts multiple parameters of the same type (`string`), which increases the risk of passing arguments in the wrong order. Encapsulating these parameters in a struct improves the readability and maintainability of the code.
You might define a struct like this:
```go
type BackingImageCreateOptions struct {
	Name             string
	BackingImageUUID string
	DiskUUID         string
	Checksum         string
	FromAddress      string
	SrcDiskUUID      string
	Size             uint64
}
```
And update the function signature:
```diff
-func (p *Proxy) SPDKBackingImageCreate(name, backingImageUUID, diskUUID, checksum, fromAddress, srcDiskUUID string, size uint64) (*imapi.BackingImage, error) {
-	return p.grpcClient.SPDKBackingImageCreate(name, backingImageUUID, diskUUID, checksum, fromAddress, srcDiskUUID, size)
+func (p *Proxy) SPDKBackingImageCreate(opts BackingImageCreateOptions) (*imapi.BackingImage, error) {
+	return p.grpcClient.SPDKBackingImageCreate(opts)
 }
```
This change promotes clarity and reduces the likelihood of parameter misplacement.
k8s/pkg/apis/longhorn/v1beta2/backingimage.go (1)
Line range hint
20-21
Remove the obsolete type `BackingImageDownloadState`
The comment indicates that `BackingImageDownloadState` is replaced by `BackingImageState`, but the type declaration for `BackingImageDownloadState` remains in the code. If `BackingImageDownloadState` is no longer used, consider removing it to clean up the codebase and prevent confusion.
110-111
Clarify the error message for volume size validation
The condition checks whether `volume.Spec.Size` is less than `backingImage.Status.VirtualSize`, meaning the volume size must be at least as large as the backing image size. The error message could be more precise in reflecting this.
Consider updating the error message for clarity:
```diff
 if volume.Spec.Size < backingImage.Status.VirtualSize {
-	return werror.NewInvalidError("volume size should be larger than the backing image size", "")
+	return werror.NewInvalidError("volume size must be at least as large as the backing image size", "")
 }
```
controller/instance_manager_controller.go (4)
72-72
Clarify the distinction between `Name` and `imName` in `InstanceManagerMonitor`.
The `InstanceManagerMonitor` struct now includes both `Name` and `imName` fields, which might cause confusion about their respective purposes. Consider renaming them to more descriptive names, such as `monitorName` and `instanceManagerName`, to clearly differentiate the monitor's identifier from the associated instance manager's name.
85-86
Evaluate the necessity of both `client` and `proxyClient` fields.
The struct contains both `client *engineapi.InstanceManagerClient` and `proxyClient engineapi.EngineClientProxy`. If both clients are required due to differences in their usage (e.g., one for standard operations and one for proxy communications), please ensure that their responsibilities are clearly documented. Otherwise, consider unifying them or clarifying their roles to avoid confusion.
Line range hint
111-134
Ensure proper initialization of `proxyConnCounter`.
The `proxyConnCounter` field is added to the `InstanceManagerController` and passed through `NewInstanceManagerController`. Verify that this counter is properly initialized before being used. If it's possible for `proxyConnCounter` to be `nil`, add the necessary checks or initialize it to prevent potential `nil` pointer dereferences.
Apply this diff to initialize `proxyConnCounter` if not already initialized:

```diff
 func NewInstanceManagerController(
 	logger logrus.FieldLogger,
 	ds *datastore.DataStore,
 	scheme *runtime.Scheme,
 	kubeClient clientset.Interface,
 	namespace, controllerID, serviceAccount string,
 	proxyConnCounter util.Counter,
 ) (*InstanceManagerController, error) {
+	if proxyConnCounter == nil {
+		proxyConnCounter = util.NewCounter()
+	}
+
 	eventBroadcaster := record.NewBroadcaster()
```
85-86
Consider consolidating client interfaces.
Both `client` and `proxyClient` are used in `InstanceManagerMonitor`. If possible, consider unifying these interfaces or extending one to handle all required functionality. This could simplify the code and reduce the potential for misusing one client over the other.
Also applies to: 1582-1633
controller/backing_image_controller.go (2)
551-554
Check the condition for early return
In the `deleteInvalidV2Copy` function, consider whether the early return when `bi.Status.V2FirstCopyStatus != longhorn.BackingImageStateReady` is appropriate. There may be scenarios where invalid v2 copies need to be cleaned up even if the first copy is not ready.
Review the logic to ensure that invalid v2 copies are always cleaned up as necessary, regardless of the first copy's status.
915-920
Clarify status update logic for unknown copies
In `syncV2StatusWithInstanceManager`, when updating the state to `Unknown`, ensure that the message provides actionable information for debugging.
Consider enhancing the message:

```diff
-	bi.Status.DiskFileStatusMap[diskUUID].Message = "The copy status is unknown because the status is not found from the instance manager"
+	bi.Status.DiskFileStatusMap[diskUUID].Message = "Copy status is unknown: not found in instance manager; instance manager may be down or copy may have been deleted"
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (113)
go.mod
is excluded by!go.mod
go.sum
is excluded by!**/*.sum
,!go.sum
vendor/github.com/coreos/go-systemd/v22/dbus/dbus.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/methods.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/properties.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/set.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/subscription.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/subscription_set.go
is excluded by!vendor/**
vendor/github.com/gammazero/deque/README.md
is excluded by!vendor/**
vendor/github.com/gammazero/deque/deque.go
is excluded by!vendor/**
vendor/github.com/gammazero/deque/doc.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/CONTRIBUTING.md
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/LICENSE
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/MAINTAINERS
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/README.md
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_anonymous.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_external.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_sha1.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/call.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_darwin.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_other.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_unix.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_windows.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/dbus.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/decoder.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/default_handler.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/doc.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/encoder.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/escape.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/export.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/homedir.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/match.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/message.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/object.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sequence.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sequential_handler.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/server_interfaces.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sig.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_darwin.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_generic.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_nonce_tcp.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_tcp.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unix.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_dragonfly.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_freebsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_linux.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_netbsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_openbsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_zos.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant_lexer.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant_parser.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/api/backing_image.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/client/instance.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/client/proxy_backing_image.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/meta/version.go
is excluded by!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/instance.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/proxy.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/proxy_grpc.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/spdkrpc/spdk.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/spdkrpc/spdk_grpc.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/moby/sys/mountinfo/mounted_linux.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/LICENSE
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_linux.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_linux_fuzzer.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/NOTICE
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/blkio_device.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/config.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/config_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/configs_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/hugepage_limit.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/intelrdt.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/interface_priority_map.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/mount.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/network.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/rdma.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/devices/device.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/devices/device_unix.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/user.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/user_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_deprecated.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_maps.c
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_maps_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/config.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/state.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/version.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/decode.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/encode.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/expfmt.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/openmetrics_create.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/text_create.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/text_parse.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labels.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labelset_string.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labelset_string_go120.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/metric.go
is excluded by!vendor/**
vendor/modules.txt
is excluded by!vendor/**
📒 Files selected for processing (30)
- api/backingimage.go (1 hunks)
- api/model.go (2 hunks)
- client/generated_backing_image.go (2 hunks)
- client/generated_backing_image_disk_file_status.go (1 hunks)
- controller/backing_image_controller.go (18 hunks)
- controller/backing_image_data_source_controller.go (2 hunks)
- controller/controller_manager.go (1 hunks)
- controller/instance_manager_controller.go (9 hunks)
- controller/instance_manager_controller_test.go (1 hunks)
- controller/replica_controller.go (1 hunks)
- datastore/longhorn.go (2 hunks)
- engineapi/enginesim.go (2 hunks)
- engineapi/instance_manager.go (1 hunks)
- engineapi/proxy_backing_image.go (1 hunks)
- engineapi/types.go (2 hunks)
- k8s/crds.yaml (4 hunks)
- k8s/generate_code.sh (2 hunks)
- k8s/pkg/apis/longhorn/v1beta2/backingimage.go (4 hunks)
- k8s/pkg/apis/longhorn/v1beta2/backingimagedatasource.go (1 hunks)
- k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go (1 hunks)
- k8s/pkg/apis/longhorn/v1beta2/instancemanager.go (1 hunks)
- k8s/pkg/apis/longhorn/v1beta2/zz_generated.deepcopy.go (2 hunks)
- manager/backingimage.go (2 hunks)
- scripts/build (1 hunks)
- types/types.go (2 hunks)
- upgrade/v17xto180/upgrade.go (2 hunks)
- webhook/resources/backingimage/mutator.go (2 hunks)
- webhook/resources/backingimage/validator.go (2 hunks)
- webhook/resources/backupbackingimage/validator.go (1 hunks)
- webhook/resources/volume/validator.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- scripts/build
🧰 Additional context used
📓 Learnings (1)
upgrade/v17xto180/upgrade.go (1)
Learnt from: mantissahz
PR: longhorn/longhorn-manager#2182
File: upgrade/v17xto180/upgrade.go:170-177
Timestamp: 2024-11-10T16:45:04.898Z
Learning: In `upgrade/v17xto180/upgrade.go`, when there is only a single caller, prefer to keep parameter management logic within the `upgradeBackingImageDataSources` function rather than extracting it to a helper function, to maintain code simplicity.
🪛 golangci-lint (1.62.2)
controller/backing_image_controller.go
362-362: Error return value of `bic.syncV2StatusWithInstanceManager` is not checked
(errcheck)
696-696: Error return value of `bic.syncV2StatusWithInstanceManager` is not checked
(errcheck)
758-758: printf: (*github.com/sirupsen/logrus.Entry).Infof call has arguments but no formatting directives
(govet)
webhook/resources/backupbackingimage/validator.go
50-50: S1039: unnecessary use of fmt.Sprintf
(gosimple)
47-47: printf: fmt.Sprintf call has arguments but no formatting directives
(govet)
🪛 Shellcheck (0.10.0)
k8s/generate_code.sh
[warning] 74-74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames.
(SC2038)
🪛 GitHub Check: CodeFactor
k8s/generate_code.sh
[warning] 74-74: k8s/generate_code.sh#L74
Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames. (SC2038)
🔇 Additional comments (49)
webhook/resources/backupbackingimage/validator.go (1)
16-23
: LGTM! Clean implementation of the validator struct and constructor.
The struct and constructor follow good practices with proper embedding of the DefaultValidator.
upgrade/v17xto180/upgrade.go (2)
11-11
: LGTM!
The import is necessary for accessing the DataEngineTypeV1
constant used in the upgrade logic.
41-43
: LGTM! Verify handling of all backing images.
The initialization of empty DataEngine fields to v1 is correct for backward compatibility. This ensures existing backing images are properly marked as v1 before v2 support is introduced.
Let's verify that all backing images will be properly handled:
✅ Verification successful
Based on the comprehensive search results, I can now generate the final response since I have enough information to verify the handling of backing images:
Verified: DataEngine initialization for backing images is properly handled
The initialization of empty DataEngine fields to v1 is correctly implemented and all backing images will be properly handled because:
- The upgrade code properly sets DataEngine to v1 for all existing backing images with empty DataEngine field
- The webhook mutator ensures any new backing images with empty DataEngine field get initialized to v1
- The backing image controller has proper handling for both v1 and v2 data engines, including:
- Validation of data engine compatibility
- Proper initialization of disk file specs and statuses
- Separate handling paths for v1 and v2 backing image operations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any backing images that might have unexpected DataEngine values
# or might be missed by the upgrade logic
# Search for backing image related structs and their DataEngine field usage
ast-grep --pattern 'type BackingImage struct {
$$$
DataEngine $_
$$$
}'
# Search for any direct DataEngine assignments or comparisons
rg -A 3 'DataEngine.*=|=.*DataEngine'
Length of output: 108395
k8s/generate_code.sh (2)
40-44
: LGTM! Improved command availability checks
The explicit path checks and installations for controller-gen
and kustomize
improve script reliability by ensuring consistent tool versions and locations.
Also applies to: 47-55
68-79
: Verify CRD generation for v2 backing image support
Since this PR adds support for v2 backing images and introduces the DataEngine field, let's verify the CRD generation includes these new fields.
✅ Verification successful
CRD generation properly includes v2 backing image support
The verification confirms that the CRD generation script correctly includes the new `dataEngine` field with proper v1/v2 enum values and related configurations. Specifically:
- The `dataEngine` field is properly defined with enum values ["v1", "v2"]
- The field appears in multiple relevant contexts, including backing image specifications
- Associated v2-specific configurations are present with proper nesting under `dataEngineSpec.v2.properties`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that generated CRDs include new DataEngine field and v2 backing image support
# Check for DataEngine field in BackingImage CRD
echo "Checking for DataEngine field in BackingImage CRD..."
rg -A 5 "dataEngine:" k8s/crds.yaml
# Check for v2 related fields
echo "Checking for v2 related configurations..."
rg -A 10 "properties:" k8s/crds.yaml | rg -A 5 "BackingImage"
Length of output: 2120
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 74-74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames.
(SC2038)
🪛 GitHub Check: CodeFactor
[warning] 74-74: k8s/generate_code.sh#L74
Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames. (SC2038)
client/generated_backing_image_disk_file_status.go (2)
10-10
: LGTM! Verify the DataEngine field usage across the codebase.
The new DataEngine
field is properly structured with appropriate JSON/YAML tags and follows the existing field patterns.
Let's verify the consistent usage of this field across the codebase:
✅ Verification successful
The DataEngine field is consistently defined and used across the codebase
The verification shows that the `DataEngine` field is properly integrated:
- Defined in core API types (`k8s/pkg/apis/longhorn/v1beta2/`)
- Consistently implemented in generated client code with matching JSON/YAML tags
- Used uniformly across related entities (Volume, Replica, BackingImage, InstanceManager)
- Follows the same pattern with `omitempty` tags in all client-side implementations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent DataEngine field usage in related structs and APIs
# Search for DataEngine field definitions
echo "Checking DataEngine field definitions:"
rg --type go "DataEngine.*\`json:\"dataEngine" -A 2
# Search for DataEngine field usage in controllers and handlers
echo "Checking DataEngine field usage:"
rg --type go "DataEngine.*string.*yaml:\"data_engine" -A 2
Length of output: 6073
10-10
: Consider updating the code generation source instead of the generated file.
This appears to be generated code. Instead of modifying the generated file directly, consider updating the source templates or specifications used for code generation. This ensures the changes persist through regeneration and maintains proper development practices.
Let's check for code generation markers and related files:
✅ Verification successful
Let me try another approach to verify if this is generated code:
Based on the evidence from the shell script outputs, I can now generate a final response:
The file appears to be generated code and should not be modified directly
The file `client/generated_backing_image_disk_file_status.go` is part of a larger set of generated files in the `client/` directory, as evidenced by:
- The file name follows the pattern `generated_*.go`, like other generated files in the same directory
- There are numerous other generated files in the same directory with similar naming patterns
- The file is part of a client package that appears to be auto-generated from API definitions

Instead of modifying this generated file directly, you should:
- Locate and modify the source definition/template used for code generation
- Re-run the code generation process to update this file
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Look for code generation configuration and source files
# Check for generation markers
echo "Checking for generation markers:"
rg --type go "// Code generated .* DO NOT EDIT" "client/"
# Look for potential generator configurations
echo "Looking for generator configurations:"
fd -e yaml -e json "generator|codegen|swagger"
Length of output: 266
Script:
#!/bin/bash
# Look for common code generation patterns and related files
# Check the file header for any generation comments
echo "Checking file header:"
head -n 10 client/generated_backing_image_disk_file_status.go
# Look for potential generator source files
echo -e "\nLooking for potential generator source files:"
fd "swagger|openapi|proto|thrift|graphql"
# Check if the filename itself suggests generation
echo -e "\nFiles with 'generated' in name:"
fd "generated.*\.go$"
# Look for common build/generate tools configuration
echo -e "\nChecking for common generator configs:"
fd "Makefile|go.mod" --type f --exec grep -l "generate"
Length of output: 57954
api/backingimage.go (1)
55-58
: Verify manager layer implementation for DataEngine parameter
The addition of input.DataEngine
parameter aligns with the PR objectives for v2 backing image support. However, we should ensure proper validation and handling of this parameter.
Let's verify the manager implementation and validation:
✅ Verification successful
Let me gather more information about the manager implementation and validation.
Let me check one more aspect to ensure we have a complete picture of the DataEngine validation.
Based on the gathered information, I can now provide the final response:
DataEngine parameter implementation is properly handled
The manager implementation shows appropriate handling of the DataEngine parameter:
- The default value is set to `DataEngineTypeV1` when empty
- Type validation functions exist (`IsDataEngineV1`, `IsDataEngineV2`)
- The parameter is properly passed through the API to the manager layer
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if CreateBackingImage in manager handles DataEngine validation
ast-grep --pattern 'func ($m *Manager) CreateBackingImage($$$, dataEngine $t) ($ret, error) {
$$$
}'
# Check for any validation of DataEngine values
rg -A 5 'DataEngine.*validation|validate.*DataEngine'
Length of output: 466
Script:
#!/bin/bash
# Check the CreateBackingImage implementation in manager
rg -A 10 "func.*CreateBackingImage" manager/
# Check for DataEngine validation or usage in types
rg -A 5 "DataEngine.*struct|type.*DataEngine" types/
# Look for any DataEngine validation in the backing image context
rg -A 5 "validate.*backing.*image|validateBackingImage" manager/
Length of output: 1023
Script:
#!/bin/bash
# Check for DataEngine type definition and valid values
rg -A 5 "DataEngineType|DataEngine.*const" types/
# Look for any validation functions that might handle DataEngine
rg -A 5 "ValidateDataEngine|IsValidDataEngine"
# Check if there are any specific validation functions in the API layer
rg -A 5 "validate.*input.*DataEngine" api/
Length of output: 2004
k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go (1)
15-32
: Verify v2 copy state transitions.
The `State` field uses `BackingImageState`, which is shared with v1. Ensure that all possible state transitions for v2 copies are valid and properly handled.
✅ Verification successful
Let me gather more information about the state transitions and validations.
Based on the search results, I can now provide a final response about the state transitions for v2 copies:
State transitions for v2 copies are properly handled and validated.
The codebase shows a well-defined state management system for v2 copies with the following key validations:
- States are properly defined in the `BackingImageState` type with clear transitions: pending -> starting -> in-progress -> ready/failed
- The `prepareFirstV2Copy` function handles state transitions with proper validation and cleanup mechanisms
- Failed states are properly handled with cleanup and retry logic
- State transitions are atomic and synchronized through the controller
- Unknown states are properly detected and handled with recovery mechanisms
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for state transition handling in the codebase
# Look for state transition handling code
rg -A 5 "BackingImageState.*v2"
# Check for v2-specific state validation
ast-grep --pattern 'func $_(state BackingImageState) $_ {
$$$
}'
Length of output: 522
Script:
#!/bin/bash
# Search for BackingImageState type definition and its possible values
rg -A 10 "type BackingImageState string"
# Search for any validation or state transition logic related to BackingImage
rg -A 10 "func.*BackingImage.*State"
# Look for any v2-specific copy operations that might involve state transitions
rg -A 5 "v2.*copy|copy.*v2" -i
# Search for any state validation in manager
rg -A 5 "validateBackingImage|validate.*State" -i
Length of output: 68815
client/generated_backing_image.go (2)
Line range hint 46-54
: Verify client operations handle the new DataEngine field.
The BackingImageOperations interface and client methods need to properly handle the new DataEngine field, especially in Create and Update operations.
✅ Verification successful
DataEngine field is properly handled through standard client operations
The verification shows that:
- The BackingImage struct properly defines the DataEngine field with JSON/YAML tags
- The client's Create/Update operations use the generic `doCreate` method, which automatically handles all struct fields through JSON serialization
- No special handling is required for the DataEngine field, as it follows the same pattern as other fields in the struct

The client operations will correctly handle the DataEngine field through the standard JSON marshaling/unmarshaling process used by the underlying RancherClient implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify proper handling of DataEngine field in operations
# Expected: Find implementations handling the DataEngine field
# Check for DataEngine field usage in operation implementations
rg -A 5 "Create\(.*BackingImage\)" --type go
rg -A 5 "Update\(.*BackingImage\)" --type go
# Check for any validation logic
ast-grep --pattern 'func $_(bi *BackingImage) {
$$$
dataEngine := bi.DataEngine
$$$
}'
Length of output: 2076
Script:
#!/bin/bash
# Let's check the implementation of doCreate to understand how fields are handled
rg -A 10 "func.*doCreate" --type go
# Also check the BackingImage struct definition to understand the field
rg "type BackingImage struct" -A 15 --type go
Length of output: 12141
12-13
: LGTM! Verify consistent DataEngine field usage across the codebase.
The addition of the DataEngine field with appropriate JSON/YAML tags aligns with the PR objectives for v2 backing image support.
✅ Verification successful
Let me gather more information about the DataEngine type definition to ensure complete consistency.
Based on the gathered information, I can now provide the final response:
The DataEngine field usage is consistent across the codebase
The DataEngine field is correctly implemented with:
- A consistent type definition as the `DataEngineType` string enum in `k8s/pkg/apis/longhorn/v1beta2/volume.go`
- Proper JSON/YAML tag conventions (camelCase/snake_case) across all related structs
- Consistent usage of `omitempty` for optional fields
- Well-defined values: `v1` and `v2` as defined constants
- Proper validation and handling throughout the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent DataEngine field usage across related structs
# Expected: DataEngine field should be present in related backing image structs with consistent tags
# Check for DataEngine field usage in other structs
rg -U "type.*struct.*\{(?s:.)*DataEngine.*string.*\}" --type go
# Check for consistent JSON/YAML tag naming
rg "json:\"dataEngine" --type go
rg "yaml:\"data_engine" --type go
Length of output: 2560
Script:
#!/bin/bash
# Check DataEngineType definition and its values
ast-grep --pattern 'type DataEngineType string'
# Check for DataEngine constant values
rg "DataEngine.*=.*" --type go
# Check if there are any validation functions for DataEngine
rg "func.*DataEngine.*" --type go
Length of output: 28631
k8s/pkg/apis/longhorn/v1beta2/backingimagedatasource.go (1)
33-33
: LGTM! The new parameter aligns with PR objectives.
The addition of DataSourceTypeParameterDataEngine
constant is well-placed and follows the existing naming conventions. This parameter is essential for supporting v2 backing images and enabling the data engine specification during backing image operations.
k8s/pkg/apis/longhorn/v1beta2/instancemanager.go (2)
195-196
: LGTM on the BackingImages field addition
The addition of the `BackingImages` field to `InstanceManagerStatus` aligns well with the PR objectives to support v2 backing images. This change naturally extends the existing data engine support in the struct while maintaining backward compatibility.
195-195
: Verify BackingImageV2CopyInfo type definition
Let's ensure the BackingImageV2CopyInfo
type is properly defined and accessible.
✅ Verification successful
BackingImageV2CopyInfo type is properly defined and integrated
The type `BackingImageV2CopyInfo` is correctly defined in `k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go` and is used appropriately in the codebase. The struct definition includes all necessary fields with proper JSON tags, and the type is properly referenced across multiple files, including the instance manager where it is consumed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for BackingImageV2CopyInfo type definition
# Expected: Find the type definition in backingimagemanager.go
# Search for the type definition
ast-grep --pattern 'type BackingImageV2CopyInfo struct {
$$$
}'
# Search for any imports of the type
rg --type go "BackingImageV2CopyInfo"
Length of output: 3134
manager/backingimage.go (2)
97-99
: LGTM: Appropriate default value handling
The default value handling ensures backward compatibility by defaulting to DataEngineTypeV1 when the parameter is not provided.
117-117
: Verify handling of existing backing images
The addition of DataEngine to BackingImageSpec looks correct. However, we should verify how existing backing images without this field are handled during upgrades.
✅ Verification successful
Let me gather more information about how the DataEngine field is handled in the codebase.
Let me check one more aspect to understand how the DataEngine field is handled during initialization and upgrades.
Based on the gathered information, I can now provide a final response:
DataEngine field addition is safe for existing backing images
The addition of the DataEngine field to BackingImageSpec is safe because:
- The field is marked as optional in the struct definition
- There's a default value set (DataEngineTypeV1) in the API model
- The field follows Kubernetes's standard behavior for optional fields, where missing fields are handled gracefully
Since Kubernetes handles optional fields gracefully and there's a default value implementation, existing backing images without this field will continue to work as expected during upgrades.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for existing backing images and their DataEngine field
# Search for BackingImage CRD usage to understand the impact
rg -A 5 "kind:\s+BackingImage"
# Look for upgrade-related code that might handle the new field
rg -A 5 "DataEngine.*migration|upgrade.*DataEngine"
Length of output: 817
Script:
#!/bin/bash
# Check BackingImage CRD definition and default values
rg -A 20 "type: BackingImage" k8s/crds.yaml
# Look for BackingImage controller code that might handle field initialization
ast-grep --pattern 'type BackingImage struct {
$$$
}'
# Search for BackingImage reconciliation logic
rg -A 10 "func.*Reconcile.*BackingImage"
Length of output: 10057
Script:
#!/bin/bash
# Check for default values and field initialization in BackingImageSpec
rg -A 10 "type BackingImageSpec struct" k8s/pkg/apis/longhorn/v1beta2/
# Look for conversion code between v1beta1 and v1beta2
rg -A 10 "Convert.*BackingImage" k8s/pkg/apis/longhorn/
# Check for any default value settings in the controller
rg -A 5 "Default.*DataEngine|DataEngine.*Default"
Length of output: 2557
webhook/resources/backingimage/validator.go (1)
93-96
: Enhance error message and verify v2-related validations.
The validation logic correctly prevents exporting from v2 volumes. Consider:
- Making the error message more descriptive
- Ensuring consistency of v2-related validations across the codebase
- return werror.NewInvalidError("exported from a v2 volume is not supported", "")
+ return werror.NewInvalidError("exporting from a v2 volume is not supported in the current version. Please use a v1 volume as the source.", "")
Let's verify the consistency of v2-related validations:
✅ Verification successful
The search results show consistent handling of v2-related validations across the codebase. There are three instances where v2 operations are explicitly blocked with clear error messages:
- Backing image export from v2 volumes (the code under review)
- Backing image backup for v2 backing images
- Backing image cloning from v2 backing images
The error messages are consistent in style and clarity, all indicating that v2 operations are not supported. The current validation aligns well with other v2-related checks in the codebase.
Error message improvement suggestion confirmed, no other issues found
The suggested error message improvement is valid and aligns with the style of other v2-related validations in the codebase. The current validation is correctly placed and consistent with other v2-related checks.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistency in v2-related validation messages and coverage
# Search for v2-related validation error messages
echo "Checking v2-related error messages:"
rg -l "v2.*not supported" --type go
# Search for DataEngineTypeV2 checks in validation code
echo -e "\nChecking v2 engine type validations:"
rg "DataEngineTypeV2" --type go -C 3 "webhook/"
# Search for potential missing v2 validations in other resources
echo -e "\nChecking other potential locations needing v2 validation:"
rg "func.*Create.*\(.*admission\.Request" --type go "webhook/"
Length of output: 7769
engineapi/enginesim.go (1)
4-4
: LGTM: Required imports for SPDK functionality
The new imports are necessary for the SPDK backing image operations:
- `context` for watch operation cancellation support
- `imapi` for BackingImage types
Also applies to: 12-12
webhook/resources/backingimage/mutator.go (1)
158-160
: LGTM! Verify default DataEngine handling across the codebase.
The default DataEngine initialization is correctly implemented. The V1 engine default maintains backward compatibility.
Let's verify the DataEngine handling across the codebase:
✅ Verification successful
Default DataEngine handling is consistent across the codebase
The verification confirms that the default DataEngine handling in the webhook mutator aligns with the codebase's practices:
- Upgrade paths properly set V1 as default for backward compatibility
- Webhooks consistently validate DataEngine field presence
- Controllers handle both V1 and V2 engines with appropriate logic
- CSI and manager components default to V1 when needed
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify DataEngine handling consistency
# Check for any direct DataEngine assignments or validations
# Search for DataEngine field usage
rg -A 3 "DataEngine.*=.*V1"
# Search for potential validation logic
ast-grep --pattern 'if $_.Spec.DataEngine == $_ {
$$$
}'
Length of output: 8109
engineapi/types.go (2)
4-4
: LGTM: Required imports added for SPDK backing image support
The new imports are correctly added and necessary for the new SPDK backing image functionality:
- `context` for the context.Context parameter in SPDKBackingImageWatch
- `imapi` for BackingImage types from longhorn-instance-manager
Also applies to: 11-11
114-118
: Verify SPDK backing image implementation details
Let's verify the implementation details of these new methods across the codebase.
✅ Verification successful
Based on the shell script results, I can now provide a final response:
SPDK backing image implementation is consistent and well-structured
The implementation shows consistent patterns across the codebase:
- Parameter validation is properly handled in client implementations with required field checks
- Error handling follows standard Go patterns with proper error wrapping and propagation
- Usage patterns are consistent across proxy, client and controller layers
- Interface methods are properly implemented with consistent signatures
The code demonstrates proper separation of concerns:
- gRPC service definitions in generated code
- Client implementations with validation
- Controller logic for orchestration
- Error types and status handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify SPDK backing image implementation patterns
# Check for parameter validation patterns
echo "Checking parameter validation patterns..."
rg -A 5 "SPDKBackingImage.*\(" --type go
# Check for error handling patterns
echo "Checking error handling patterns..."
rg "return.*error" --type go -C 5 | rg -B 5 "SPDKBackingImage.*\("
# Check for usage patterns
echo "Checking usage patterns..."
rg "SPDKBackingImage" --type go -C 5
Length of output: 132328
controller/controller_manager.go (1)
88-91
: LGTM: Instance manager controller initialization updated correctly
The addition of proxyConnCounter
to NewInstanceManagerController
is consistent with the v2 backing image support implementation.
Let's verify the consistent usage of proxyConnCounter across related controllers:
✅ Verification successful
Consistent usage of proxyConnCounter parameter verified across controllers
The verification shows that proxyConnCounter
is consistently used across all controller initializations in the codebase, including:
- InstanceManagerController
- VolumeController
- EngineController
- BackupTargetController
- BackupVolumeController
- BackupController
- BackupBackingImageController
- BackingImageController
- BackingImageDataSourceController
- SnapshotController
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent proxyConnCounter usage in controller constructors
# Expected: All related controllers should use proxyConnCounter consistently
rg "New.*Controller.*proxyConnCounter" --type go
Length of output: 2258
controller/replica_controller.go (1)
428-430
: LGTM: DataEngine field addition aligns with v2 backing image support
The addition of DataEngine
field to BackingImageDiskFileSpec
is correct and consistent with the PR's objective to support v2 backing images. The value is properly inherited from the replica's spec.
Let's verify the consistency of DataEngine usage across the codebase:
✅ Verification successful
DataEngine field addition is correctly implemented and consistent
The verification confirms that:
- The `DataEngine` field is properly defined in the `BackingImageDiskFileSpec` struct (`k8s/pkg/apis/longhorn/v1beta2/backingimage.go`)
- The field is consistently used across the codebase:
  - In the replica controller (the reviewed code)
  - In the backing image data source controller (with `DataEngineTypeV1`)
  - In the backing image controller (with `DataEngineTypeV2`)
- The implementation aligns with the type system and includes proper JSON tags
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify DataEngine field usage in BackingImageDiskFileSpec
# Test: Search for BackingImageDiskFileSpec struct definition and its usage
# Expect: Consistent DataEngine field usage
# Check the struct definition
ast-grep --pattern 'type BackingImageDiskFileSpec struct {
$$$
DataEngine $_ `$$$`
$$$
}'
# Check other usages of BackingImageDiskFileSpec
rg -A 5 'BackingImageDiskFileSpec\s*{'
Length of output: 258
Script:
#!/bin/bash
# Let's try a different approach to verify the DataEngine field usage
# Check the struct definition
rg "type BackingImageDiskFileSpec struct" -A 10
# Check usages of BackingImageDiskFileSpec
rg "BackingImageDiskFileSpec" -A 3
# Check if DataEngine is consistently used with backing images
rg "DataEngine.*backing.*image" -i
Length of output: 7816
engineapi/instance_manager.go (1)
576-578
: LGTM! Verify backing image usage.
The addition of BackingImageName
to the replica creation request is correct and aligns with the PR objectives for v2 backing image support.
Let's verify the usage of backing image name in the codebase:
✅ Verification successful
Based on the extensive search results, I can now verify the usage of `BackingImageName` in the codebase. The field is used consistently, particularly in:
- The ReplicaSpec struct in both the v1beta1 and v1beta2 APIs, where it is defined as `BackingImage string`
- The instance manager's replica creation request, where it is used as `BackingImageName`
- Various protobuf and API definitions showing consistent usage

LGTM! The backing image name field is correctly implemented.

The addition of `BackingImageName` to the replica creation request is consistent with the codebase's handling of backing images and properly maps from `req.Replica.Spec.BackingImage` to the request field.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of backing image name in replica creation
# Test: Search for backing image usage in replica creation. Expect: Consistent usage of backing image name.
# Check backing image field usage in replica specs
ast-grep --pattern 'type ReplicaSpec struct {
$$$
BackingImage string `$_`
$$$
}'
# Check backing image name usage in instance manager
rg -A 5 'BackingImageName.*Spec\.BackingImage'
Length of output: 417
Script:
#!/bin/bash
# Let's verify more thoroughly by:
# 1. Finding all ReplicaSpec struct definitions
# 2. Checking BackingImage field usage in replica creation contexts
# 3. Verifying consistency of the field name across the codebase
# Find all ReplicaSpec definitions
ast-grep --pattern 'type ReplicaSpec struct {
$$$
}'
# Check all usages of BackingImage field in replica contexts
rg -B 2 -A 2 'BackingImage.*string'
# Find all places where BackingImageName is used
rg -B 2 -A 2 'BackingImageName'
Length of output: 105155
types/types.go (3)
756-758
: LGTM: Error check function follows existing patterns
The function provides a clear way to check for a specific error condition related to logical volumes, which aligns with the v2 backing image support objective.
1270-1272
: LGTM: Consistent naming function
The function follows the established naming patterns in the codebase for generating component names.
1274-1276
: LGTM: V2-specific naming function
The function provides a clear way to generate unique names for v2 backing images by incorporating the disk UUID.
api/model.go (2)
2015-2015
: LGTM!
The DataEngine field mapping in toBackingImageResource is correctly implemented.
269-269
: Consider adding field validation for DataEngine
The DataEngine field addition looks good and aligns with the v2 backing image support. Consider adding validation to ensure only valid engine types are accepted.
Let's verify the allowed values for DataEngine:
k8s/pkg/apis/longhorn/v1beta2/zz_generated.deepcopy.go (3)
488-502
: LGTM! The BackingImageV2CopyInfo deepcopy implementation is correct.
The implementation follows the standard Kubernetes deepcopy patterns with proper nil checks and value copying.
1522-1528
: LGTM! The BackingImages field deepcopy implementation is correct.
The implementation properly handles the map type with string keys and BackingImageV2CopyInfo values, consistent with other map field implementations in the file.
488-502
: Verify integration with v2 backing image controller.
The deepcopy implementations align with the PR objective of supporting v2 backing images. To ensure proper integration, verify that the backing image controller correctly utilizes these deepcopy methods when managing v2 backing image copies.
Also applies to: 1522-1528
✅ Verification successful
The search results show that `BackingImageV2CopyInfo` is properly integrated across the codebase:
- It is defined in `k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go`
- It is used in the engine API interfaces and implementations (`engineapi/types.go`, `engineapi/enginesim.go`, `engineapi/proxy_backing_image.go`)
- It is used in the instance manager status (`k8s/pkg/apis/longhorn/v1beta2/instancemanager.go`)
Let's verify the controller integration:
The search results show comprehensive integration of BackingImageV2CopyInfo
with the v2 backing image controller and SPDK functionality:
- The struct is used in the engine API interface and implementations
- It's properly integrated with the instance manager controller for monitoring and listing SPDK backing images
- The backing image controller uses it through SPDK operations (create, delete, get, list, watch)
- The proxy layer correctly handles the conversion between RPC and internal types
DeepCopy implementation is correctly integrated with v2 backing image functionality
The deepcopy implementation aligns with the codebase usage, where BackingImageV2CopyInfo
is used in maps and requires copying for controller operations, particularly in:
- Instance manager status updates (
BackingImages map[string]BackingImageV2CopyInfo
) - SPDK backing image operations in the controller
- Proxy layer type conversions
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of BackingImageV2CopyInfo in the backing image controller
# Expected: Find references to BackingImageV2CopyInfo in the controller code
# Search for BackingImageV2CopyInfo usage in the codebase
rg "BackingImageV2CopyInfo" -A 5
# Search for v2 backing image related controller logic
ast-grep --pattern 'func $_(BackingImageV2CopyInfo)'
Length of output: 5131
Script:
#!/bin/bash
# Search for backing image controller and its usage of v2 related functionality
rg "BackingImage.*Controller" -l | xargs rg "BackingImageV2" -B 2 -A 2
# Search for SPDK related backing image operations
rg "SPDKBackingImage" -B 2 -A 2
Length of output: 83770
k8s/crds.yaml (1)
Lines 2280-2301: LGTM! Comprehensive status tracking for backing images. The addition of the `backingImages` field to the InstanceManager status provides all necessary information for tracking and managing backing images, including:

- Checksum validation
- Size and progress monitoring
- State tracking
- Disk association
k8s/pkg/apis/longhorn/v1beta2/backingimage.go (3)
Lines 32-33: Addition of `DataEngine` field to `BackingImageDiskFileStatus`. The addition of the `DataEngine` field to `BackingImageDiskFileStatus` enhances the ability to track the data engine type for each disk file status, which is essential for supporting both v1 and v2 backing images.

Lines 47-48: Inclusion of `DataEngine` field in `BackingImageDiskFileSpec`. Adding the `DataEngine` field to `BackingImageDiskFileSpec` aligns the disk file specifications with the data engine types, allowing for more precise control and configuration at the disk level.

Lines 101-105: Addition of `V2FirstCopyStatus` and `V2FirstCopyDisk` in `BackingImageStatus`. Introducing the `V2FirstCopyStatus` and `V2FirstCopyDisk` fields to `BackingImageStatus` provides valuable tracking for the initial copy process of v2 backing images. This enhancement improves the observability and management of backing image states.
webhook/resources/volume/validator.go (1)
Lines 98-112: Validation logic correctly enforces data engine and size constraints. The added checks appropriately ensure that:

- The volume's data engine matches the backing image's data engine.
- The volume size is at least as large as the backing image's virtual size.
controller/backing_image_data_source_controller.go (1)
Lines 696-700: Correctly appending the `DataEngine` parameter for v2 backing images. The code appropriately appends the `DataEngine` parameter to the command when `bi.Spec.DataEngine` is `longhorn.DataEngineTypeV2`, ensuring the backing image data source pod is configured correctly.
controller/instance_manager_controller.go (6)
Lines 813-820: Confirm correct invocation of backing image monitoring functions. The `syncMonitor` method conditionally starts or stops backing image monitoring based on the data engine type. Ensure that this condition correctly reflects when monitoring should occur for V2 data engines. Also, verify that no additional conditions are needed for starting or stopping the monitor to prevent resource leaks or missed updates.

Lines 1259-1261: Properly stop backing image monitoring during cleanup. In the `cleanupInstanceManager` function, you're stopping the backing image monitoring by invoking `imc.stopMonitoring(backingImageMonitorName)`. Ensure that this effectively stops all related goroutines and cleans up resources associated with the monitor to prevent leaks.

Lines 1582-1633: Good implementation of `startBackingImageMonitoring`. The `startBackingImageMonitoring` function correctly initializes the backing image monitor for V2 data engines and handles error cases, such as missing IP addresses. The use of mutexes to manage concurrent access to `instanceManagerMonitorMap` is appropriate.

Lines 1653-1729: Efficient handling in `BackingImageMonitorRun`. The `BackingImageMonitorRun` method effectively manages the monitoring loop, including error handling with `continuousFailureCount` and proper synchronization using mutexes. The use of context cancellation and channels for stopping the monitor is appropriate.

Lines 1730-1763: Robust polling in `pollAndUpdateV2BackingImageMap`. The method `pollAndUpdateV2BackingImageMap` correctly polls the backing image statuses and updates the instance manager's status when changes are detected. Error handling and conditions for stopping the monitor are properly implemented.
Line 504: Avoid resetting `BackingImages` to `nil` unnecessarily. In `syncInstanceStatus`, setting `im.Status.BackingImages = nil` could lead to loss of backing image state information, especially during transient errors or restarts. Consider preserving the backing image statuses to maintain consistency and aid in debugging.

Run the following script to check where `im.Status.BackingImages` is being used and ensure that resetting it to `nil` will not cause unintended side effects:
✅ Verification successful
Setting `BackingImages` to `nil` is correct and intentional in this context. The code is working as intended. The `nil` assignment occurs in `syncInstanceStatus` when the instance manager is in a non-running state (stopped, error, or starting). This is part of a deliberate cleanup pattern where all instance-related status fields are reset to prevent stale information, as indicated by the comment "This step prevents other controllers from being confused by stale information."

The `BackingImages` status is properly managed:

- Reset to `nil` when the instance manager is not running, to avoid stale data.
- Updated with fresh data from `SPDKBackingImageList()` during normal operation.
- Changes are tracked using `DeepEqual` to prevent unnecessary updates.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find usages of im.Status.BackingImages in the codebase.
rg --no-heading --line-number 'im\.Status\.BackingImages' controller/
Length of output: 441
Script:
#!/bin/bash
# Let's check the context around these usages to understand the logic flow
# Check the context around the nil assignment
rg --no-heading -B 10 -A 10 'im\.Status\.BackingImages = nil' controller/
# Check the context around other usages
rg --no-heading -B 10 -A 10 'im\.Status\.BackingImages = resp' controller/
rg --no-heading -B 10 -A 10 'reflect\.DeepEqual\(im\.Status\.BackingImages, resp\)' controller/
Length of output: 5218
controller/backing_image_controller.go (4)
Line 1631: Ensure proper object casting in `enqueueBackingImageForInstanceManagerUpdate`. Inside `enqueueBackingImageForInstanceManagerUpdate`, make sure to correctly cast the provided object to the appropriate type. Missing or incorrect casting may result in runtime panics. Confirm that the object `obj` can be safely used within the function, or adjust the function to handle different object types as necessary.
Lines 925-945: Handle potential nil pointer dereference. In `replenishBackingImageCopies`, when checking `bi.Status.V2FirstCopyStatus`, ensure that `bi.Status.V2FirstCopyDisk` is not empty to avoid potential nil pointer dereferences. Apply this fix:
- if bi.Status.V2FirstCopyDisk == "" || bi.Status.V2FirstCopyStatus != longhorn.BackingImageStateReady {
+ if bi.Status.V2FirstCopyDisk == "" || bi.Status.V2FirstCopyStatus != longhorn.BackingImageStateReady {
+ return nil
+ }
Likely invalid or redundant comment.
Lines 1630-1632: Fix incorrect function parameter in `enqueueBackingImageForInstanceManagerUpdate`. The function `enqueueBackingImageForInstanceManagerUpdate` is called with a single parameter, but its definition seems to expect two parameters (`oldObj`, `newObj`). Update the function definition to accept the correct parameters, or adjust the caller accordingly.

```diff
- func (bic *BackingImageController) enqueueBackingImageForInstanceManagerUpdate(obj interface{}) {
+ func (bic *BackingImageController) enqueueBackingImageForInstanceManagerUpdate(oldObj, newObj interface{}) {
```

Alternatively, if only the current object is needed, ensure consistency between the function definition and its usage.
Likely invalid or redundant comment.
Lines 348-353: Ensure proper handling of V2 backing images. The condition checks for `backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2` before calling `handleV2BackingImage`. Confirm that all paths leading to the handling of V2 backing images are correctly guarded and that `handleV2BackingImage` properly manages the state transitions. Run the following script to verify the guard conditions:
✅ Verification successful
V2 backing image handling is properly guarded and implemented. The verification confirms that the handling of V2 backing images is correctly implemented with proper guards:

- The `handleV2BackingImage` function is only called when `backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2`.
- The implementation includes comprehensive error handling and state management: it prepares the first V2 copy with proper validation, syncs status with the instance manager, and handles state transitions and cleanup.
- All V2-specific operations (creation, monitoring, cleanup) are consistently guarded by the same DataEngine type check.
- The code maintains proper separation between V1 and V2 operations throughout the controller.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that `handleV2BackingImage` is called only when `DataEngine` is `DataEngineTypeV2`
# Search for calls to `handleV2BackingImage` and check the conditions
rg -A 5 "handleV2BackingImage" controller/ | grep -B 2 "DataEngineTypeV2"
Length of output: 71
Script:
#!/bin/bash
# Let's try a different approach to verify the handling of V2 backing images
# First, let's find all occurrences of handleV2BackingImage
rg -A 5 -B 5 "handleV2BackingImage" controller/
# Then, let's also check for DataEngineTypeV2 usage
rg -A 5 -B 5 "DataEngineTypeV2" controller/
# Finally, let's look at the implementation of handleV2BackingImage
ast-grep --pattern 'func $$$handleV2BackingImage($$$) {
$$$
}'
Length of output: 30988
datastore/longhorn.go (1)
Line 2896: Verify all call sites of `GetReadyNodeDiskForBackingImage` are updated with the new `dataEngine` parameter. The function signature has been modified to include the `dataEngine` parameter. Please ensure that all calls to this function are updated accordingly to prevent compilation errors or incorrect behavior. Run the following script to check for call sites that may not have been updated:

✅ Verification successful

All call sites of `GetReadyNodeDiskForBackingImage` are properly updated with the new `dataEngine` parameter. The verification shows that all four call sites of the function have been updated with the required `dataEngine` parameter. In `controller/backing_image_controller.go`:

- Called with `longhorn.DataEngineTypeV2`
- Called with the `biDataEngine` variable
- Called twice with `longhorn.DataEngineTypeV1`
- Called with
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all call sites of GetReadyNodeDiskForBackingImage using the old signature.
# Search for function calls without the new parameter.
ast-grep --pattern $'GetReadyNodeDiskForBackingImage($_)'
Length of output: 58
Script:
#!/bin/bash
# Let's try a more comprehensive search using ripgrep to find all occurrences of the function name
rg "GetReadyNodeDiskForBackingImage" -A 2 -B 2
Length of output: 2364
```go
func (bbi *backupBackingImageValidator) Create(request *admission.Request, newObj runtime.Object) error {
	backupBackingImage, ok := newObj.(*longhorn.BackupBackingImage)
	if !ok {
		return werror.NewInvalidError(fmt.Sprintf("%v is not a *longhorn.BackupBackingImage", newObj), "")
	}
	backingImageName := backupBackingImage.Name

	backingImage, err := bbi.ds.GetBackingImageRO(backingImageName)
	if err != nil {
		return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image for backup", backingImage.Spec.SourceParameters, backingImage.Spec.SourceType), "")
	}
	if backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2 {
		return werror.NewInvalidError(fmt.Sprintf("v2 backing image is not supported for backup"), "")
	}

	return nil
}
```
Fix error messages and improve error handling.
There are several issues in the validation logic:
- The error message in line 47 has incorrect formatting and unused parameters
- Unnecessary use of fmt.Sprintf in line 50
- Error handling could be more specific about why the backing image couldn't be retrieved
Apply these fixes:
- return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image for backup", backingImage.Spec.SourceParameters, backingImage.Spec.SourceType), "")
+ return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image %q for backup: %v", backingImageName, err), "")
- return werror.NewInvalidError(fmt.Sprintf("v2 backing image is not supported for backup"), "")
+ return werror.NewInvalidError("v2 backing image is not supported for backup", "")
Consider adding more context to error messages to help users understand and resolve issues:
- return werror.NewInvalidError("v2 backing image is not supported for backup", "")
+ return werror.NewInvalidError(fmt.Sprintf("backing image %q uses v2 data engine which doesn't support backup operations", backingImageName), "")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (bbi *backupBackingImageValidator) Create(request *admission.Request, newObj runtime.Object) error {
	backupBackingImage, ok := newObj.(*longhorn.BackupBackingImage)
	if !ok {
		return werror.NewInvalidError(fmt.Sprintf("%v is not a *longhorn.BackupBackingImage", newObj), "")
	}
	backingImageName := backupBackingImage.Name

	backingImage, err := bbi.ds.GetBackingImageRO(backingImageName)
	if err != nil {
		return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image %q for backup: %v", backingImageName, err), "")
	}
	if backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2 {
		return werror.NewInvalidError(fmt.Sprintf("backing image %q uses v2 data engine which doesn't support backup operations", backingImageName), "")
	}

	return nil
}
```
🧰 Tools
🪛 golangci-lint (1.62.2)
50-50: S1039: unnecessary use of fmt.Sprintf (gosimple)
47-47: printf: fmt.Sprintf call has arguments but no formatting directives (govet)
```go
func (bbi *backupBackingImageValidator) Resource() admission.Resource {
	return admission.Resource{
		Name:       "backupbackingimages",
		Scope:      admissionregv1.NamespacedScope,
		APIGroup:   longhorn.SchemeGroupVersion.Group,
		APIVersion: longhorn.SchemeGroupVersion.Version,
		ObjectType: &longhorn.BackupBackingImage{},
		OperationTypes: []admissionregv1.OperationType{
			admissionregv1.Create,
		},
	}
}
```
💡 Codebase verification

Update and Delete operations should be validated for backup backing images. The codebase shows extensive usage of Update and Delete operations for BackupBackingImage resources:

- Update operations: `UpdateBackupBackingImage` and `UpdateBackupBackingImageStatus` in the datastore; status updates in backup_backing_image_controller.go; label updates via `AddBackupBackingImageDeleteCustomResourceOnlyLabel`.
- Delete operations: `DeleteBackupBackingImage` used in multiple controllers; cleanup of non-existent backup backing images in backup_target_controller.go; API endpoints for deletion.

The current validator should be extended to include these operations since they are actively used in the codebase and could benefit from validation rules.
🔗 Analysis chain
Verify if other operation types are needed for backup backing images.
The validator currently only handles Create operations. Consider if Update/Delete operations should also be validated for backup backing images.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if there are any Update/Delete operations on backup backing images in the codebase
# Search for any Update/Delete operations on backup backing images
rg -A 5 "Update.*BackupBackingImage|Delete.*BackupBackingImage"
# Check for any CRD definitions that might indicate needed operations
fd -e yaml -e yml | xargs rg "kind: BackupBackingImage" -A 20
Length of output: 15756
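A sketch of how the resource registration could also cover Update and Delete. The types below are minimal local stand-ins for `k8s.io/api/admissionregistration/v1` and the webhook's `admission.Resource`, so this is illustrative rather than a drop-in patch for the validator above.

```go
package main

import "fmt"

// Minimal stand-ins for the admissionregistration/v1 operation types used
// by the real validator (hypothetical local definitions).
type OperationType string

const (
	Create OperationType = "CREATE"
	Update OperationType = "UPDATE"
	Delete OperationType = "DELETE"
)

// Resource mirrors the shape of the webhook framework's resource descriptor.
type Resource struct {
	Name           string
	OperationTypes []OperationType
}

// resourceWithAllOps registers Update and Delete in addition to Create, so
// those actively-used code paths would also pass through validation.
func resourceWithAllOps() Resource {
	return Resource{
		Name: "backupbackingimages",
		OperationTypes: []OperationType{
			Create,
			Update,
			Delete,
		},
	}
}

func main() {
	fmt.Println(len(resourceWithAllOps().OperationTypes)) // 3
}
```

In the real validator this would mean appending `admissionregv1.Update` and `admissionregv1.Delete` to the `OperationTypes` slice and implementing the corresponding `Update`/`Delete` methods.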
```diff
 if [ -e ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/patches/crd ]; then
   cp -a ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/patches/crd patches
-  find patches -type f | xargs -i sh -c 'kustomize edit add patch --path {}'
+  find patches -type f | xargs -i sh -c '${GOPATH}/bin/kustomize edit add patch --path {}'
```
Fix unsafe file handling in patch application
The current implementation doesn't safely handle filenames containing spaces or special characters, which could lead to script failures.
Apply this fix to handle special characters safely:
-find patches -type f | xargs -i sh -c '${GOPATH}/bin/kustomize edit add patch --path {}'
+find patches -type f -print0 | xargs -0 -I {} sh -c '${GOPATH}/bin/kustomize edit add patch --path "{}"'
📝 Committable suggestion
```diff
-find patches -type f | xargs -i sh -c '${GOPATH}/bin/kustomize edit add patch --path {}'
+find patches -type f -print0 | xargs -0 -I {} sh -c '${GOPATH}/bin/kustomize edit add patch --path "{}"'
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 74-74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames.
(SC2038)
🪛 GitHub Check: CodeFactor
[warning] 74-74: k8s/generate_code.sh#L74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames. (SC2038)
```diff
@@ -27,7 +29,7 @@ type BackingImage struct {
 	Secret          string `json:"secret,omitempty" yaml:"secret,omitempty"`
-	SecretNamespace string `json:"secretNamespace,omitempty" yaml:"secretNamespace,omitempty"`
+	SecretNamespace string `json:"secretNamespace,omitempty" yaml:"secret_namespace,omitempty"`
```
💡 Codebase verification
Breaking change detected in YAML tag for SecretNamespace field
The change from secretNamespace
to secret_namespace
in the YAML tag is a breaking change that affects BackingImage CRD configurations. This is confirmed by:
- The CRD definition in
k8s/crds.yaml
still usessecretNamespace
in the spec - Multiple Go code references show active usage of this field for encryption and secret handling
- No migration documentation was found for this breaking change
Recommendations:
- Keep the YAML tag as
secretNamespace
to maintain backward compatibility - If snake_case standardization is required, it should be part of a major version upgrade with proper migration documentation
🔗 Analysis chain
Verify the impact of the YAML tag change for SecretNamespace. The YAML tag change from `secretNamespace` to `secret_namespace` is a breaking change that could affect existing YAML configurations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for YAML configurations that might be affected by the tag change
# Expected: Find any YAML files using the old secretNamespace format
# Search for YAML files using the old format
rg "secretNamespace:" --type yaml
# Check for any migration documentation
fd -e md -e txt | xargs rg -l "secret.?namespace"
Length of output: 175
Script:
#!/bin/bash
# Let's examine the CRD definition to understand the impact
cat k8s/crds.yaml
# Also search for any related test files or examples
fd -e yaml -e yml | xargs rg -l "secret.?namespace"
# Check for any related Go code that might handle this field
rg "SecretNamespace" -t go -C 2
Length of output: 131177
Script:
#!/bin/bash
# Let's check for any existing YAML files using the old format in examples, tests, or docs
rg -t yaml "secretNamespace:" -B 2 -A 2
# Check for any Go code references to understand the usage
rg "SecretNamespace" -t go -B 2 -A 2
# Look for any migration or upgrade documentation
fd -e md | xargs rg -l "secret.?namespace|breaking.?changes"
Length of output: 25559
```diff
@@ -30,6 +30,7 @@ const (
 	DataSourceTypeCloneParameterEncryption      = "encryption"
 	DataSourceTypeCloneParameterSecret          = "secret"
 	DataSourceTypeCloneParameterSecretNamespace = "secret-namespace"
+	DataSourceTypeParameterDataEngine           = "data-engine"
```
💡 Codebase verification

Based on the gathered information, I can now generate the final response:

Documentation needed for the data-engine parameter in BackingImageDataSource. The codebase shows that the `data-engine` parameter is used to specify the data engine type (v1 or v2) for backing image operations, but this is not documented in the API. Please add documentation that describes:

- Valid values: `v1` and `v2` (as seen in the `DataEngineType` definitions)
- Usage: required when using the v2 data engine to prepare backing images
- Limitations: v2 backing images are not supported for backup operations, and cloning from or exporting from v2 backing images/volumes is not supported.
🔗 Analysis chain
Consider updating API documentation.
Since this introduces a new parameter that affects backing image operations, consider adding documentation that describes:
- Valid values for the data-engine parameter
- Use cases and scenarios where this parameter is required
- Any limitations or restrictions when using different data engines
Let's check if there's existing API documentation that needs updating:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for API documentation files
fd -e md -e yaml -e json "api|swagger|openapi" | grep -i "doc\|spec"
Length of output: 70
Script:
#!/bin/bash
# Let's try a broader search for documentation files and API specs
fd -e md -e yaml -e json . | grep -i "api\|doc\|spec"
# Also search for specific references to backing image documentation
rg -i "backingimagedatasource|data-engine" -t md -t yaml -t json
Length of output: 1873
Script:
#!/bin/bash
# Check for API type definitions and comments in Go files
rg -t go "type.*BackingImageDataSource" -A 10
# Look for any existing documentation about data engines
rg -i "data-engine|dataengine" -t md
Length of output: 51580
Script:
#!/bin/bash
# Check for API documentation in well-known locations
cat docs/README.md 2>/dev/null || true
cat README.md 2>/dev/null || true
Length of output: 4685
Script:
#!/bin/bash
# Let's check the CRD definition for BackingImageDataSource which often contains API documentation
cat k8s/crds.yaml | grep -A 50 "kind: BackingImageDataSource" || true
# Also check for any comments in the source code around the data-engine parameter
rg -B 5 -A 5 "DataSourceTypeParameterDataEngine"
Length of output: 4302
Script:
#!/bin/bash
# Let's check for any documentation about data engines in the codebase
rg -B 3 -A 3 "DataEngineType|data-engine" -t go
# Also check for any comments or documentation in the CRD about parameters
cat k8s/crds.yaml | grep -A 100 "parameters:" || true
Length of output: 102498
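As a starting point for such documentation, a hedged example manifest might look like the following. The field names here are assumptions inferred from the discussion above (the `dataEngine` spec field and the `data-engine` source parameter constant) and should be verified against `k8s/crds.yaml` before use.

```yaml
# Illustrative sketch only — not taken from the PR.
apiVersion: longhorn.io/v1beta2
kind: BackingImage
metadata:
  name: example-v2-image
  namespace: longhorn-system
spec:
  dataEngine: v2            # valid values: v1 (default), v2
  sourceType: download
  sourceParameters:
    url: https://example.com/image.qcow2
    data-engine: v2         # mirrors DataSourceTypeParameterDataEngine
```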
```go
// backing image data source always prepare the backing image on the v1 disk
bi.Spec.DiskFileSpecMap[bids.Spec.DiskUUID] = &longhorn.BackingImageDiskFileSpec{
	DataEngine: longhorn.DataEngineTypeV1,
}
```
Potential inconsistency in setting `DataEngine` for `DiskFileSpecMap` entries. The code sets `DataEngine` to `longhorn.DataEngineTypeV1` unconditionally. This may not handle v2 backing images correctly, especially when `bi.Spec.DataEngine` is `longhorn.DataEngineTypeV2`. Consider updating the code to set `DataEngine` based on `bi.Spec.DataEngine`:
- bi.Spec.DiskFileSpecMap[bids.Spec.DiskUUID] = &longhorn.BackingImageDiskFileSpec{
- DataEngine: longhorn.DataEngineTypeV1,
- }
+ bi.Spec.DiskFileSpecMap[bids.Spec.DiskUUID] = &longhorn.BackingImageDiskFileSpec{
+ DataEngine: bi.Spec.DataEngine,
+ }
Committable suggestion skipped: line range outside the PR's diff.
```go
	return errors.Wrapf(err, "failed to prepare the first v2 backing image")
}

if bic.syncV2StatusWithInstanceManager(bi); err != nil {
```
Fix unchecked error return value in the `syncV2StatusWithInstanceManager` call. The error returned by `bic.syncV2StatusWithInstanceManager(bi)` is not assigned to `err`, which may cause potential errors to be ignored. Assign the error to `err` to ensure proper error handling. Apply the following fix:
- if bic.syncV2StatusWithInstanceManager(bi); err != nil {
+ if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
📝 Committable suggestion
```go
if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
```
🧰 Tools
🪛 golangci-lint (1.62.2)
362-362: Error return value of `bic.syncV2StatusWithInstanceManager` is not checked (errcheck)
```go
	readyV2CopyDiskUUIDs = append(readyV2CopyDiskUUIDs, v2DiskUUID)
}
if backingImageInProgress(biStatus.State) {
	log.Infof("There is one v2 backing image copy in progress, prepare the backing image one at a time", bi.Name)
```
Add missing formatting directive in `log.Infof`. The `log.Infof` call includes an argument `bi.Name` but lacks a formatting directive, so the argument will not be used. Include `%v` in the message to properly format the argument. Apply the following fix:
- log.Infof("There is one v2 backing image copy in progress, prepare the backing image one at a time", bi.Name)
+ log.Infof("There is one v2 backing image copy in progress for backing image %v, prepare the backing image one at a time", bi.Name)
📝 Committable suggestion
```go
log.Infof("There is one v2 backing image copy in progress for backing image %v, prepare the backing image one at a time", bi.Name)
```
🧰 Tools
🪛 golangci-lint (1.62.2)
758-758: printf: (*github.com/sirupsen/logrus.Entry).Infof call has arguments but no formatting directives (govet)
```go
func (bic *BackingImageController) cleanupAllV2BackingImageCopies(bi *longhorn.BackingImage) (cleaned bool, err error) {
	log := getLoggerForBackingImage(bic.logger, bi)

	if bic.syncV2StatusWithInstanceManager(bi); err != nil {
```
Fix unchecked error return value in the `syncV2StatusWithInstanceManager` call. Similar to line 362, the error returned by `bic.syncV2StatusWithInstanceManager(bi)` is not assigned to `err`. This can lead to unhandled errors. Assign the error to `err` to ensure proper error handling. Apply the following fix:
- if bic.syncV2StatusWithInstanceManager(bi); err != nil {
+ if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
📝 Committable suggestion
```go
if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
```
🧰 Tools
🪛 golangci-lint (1.62.2)
696-696: Error return value of `bic.syncV2StatusWithInstanceManager` is not checked (errcheck)
```go
if dataEngine == longhorn.DataEngineTypeV2 {
	if diskSpec.Type != longhorn.DiskTypeBlock {
		continue
	}
} else {
	if diskSpec.Type != longhorn.DiskTypeFilesystem {
		continue
	}
}
// TODO: Jack add block type disk for spdk version BackingImage
if diskSpec.Type != longhorn.DiskTypeFilesystem {
	if _, exists := backingImage.Spec.DiskFileSpecMap[diskStatus.DiskUUID]; exists {
```
🛠️ Refactor suggestion

Explicitly handle all `dataEngine` types to prevent unintended behavior. The current logic uses an `else` block to handle all data engine types other than `DataEngineTypeV2`. If new data engine types are introduced in the future, they will fall into the `else` block, which may not be appropriate. Consider explicitly checking for `DataEngineTypeV1` and adding a default case to handle unexpected types. Apply this diff to improve the code:
if dataEngine == longhorn.DataEngineTypeV2 {
if diskSpec.Type != longhorn.DiskTypeBlock {
continue
}
-} else {
+} else if dataEngine == longhorn.DataEngineTypeV1 {
if diskSpec.Type != longhorn.DiskTypeFilesystem {
continue
}
+} else {
+ // Handle unexpected dataEngine types
+ logrus.Warnf("Unknown dataEngine type: %v", dataEngine)
+ continue
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current code:

if dataEngine == longhorn.DataEngineTypeV2 {
	if diskSpec.Type != longhorn.DiskTypeBlock {
		continue
	}
} else {
	if diskSpec.Type != longhorn.DiskTypeFilesystem {
		continue
	}
}
// TODO: Jack add block type disk for spdk version BackingImage
if diskSpec.Type != longhorn.DiskTypeFilesystem {
	if _, exists := backingImage.Spec.DiskFileSpecMap[diskStatus.DiskUUID]; exists {

Suggested change:

if dataEngine == longhorn.DataEngineTypeV2 {
	if diskSpec.Type != longhorn.DiskTypeBlock {
		continue
	}
} else if dataEngine == longhorn.DataEngineTypeV1 {
	if diskSpec.Type != longhorn.DiskTypeFilesystem {
		continue
	}
} else {
	// Handle unexpected dataEngine types
	logrus.Warnf("Unknown dataEngine type: %v", dataEngine)
	continue
}
if _, exists := backingImage.Spec.DiskFileSpecMap[diskStatus.DiskUUID]; exists {
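An alternative to the chained if/else, sketched below with local stand-in types rather than the real longhorn constants, is a switch whose default arm makes unhandled engine types explicit:

```go
package main

import "fmt"

// Local stand-ins for the longhorn constants; the names are illustrative only.
type dataEngineType string

const (
	dataEngineV1 dataEngineType = "v1"
	dataEngineV2 dataEngineType = "v2"
)

type diskType string

const (
	diskFilesystem diskType = "filesystem"
	diskBlock      diskType = "block"
)

// diskMatchesEngine reports whether a disk of the given type should be
// considered for the given data engine; unknown engines match nothing.
func diskMatchesEngine(engine dataEngineType, disk diskType) bool {
	switch engine {
	case dataEngineV2:
		return disk == diskBlock
	case dataEngineV1:
		return disk == diskFilesystem
	default:
		// A new engine type lands here instead of silently taking a v1 branch.
		return false
	}
}

func main() {
	fmt.Println(diskMatchesEngine(dataEngineV2, diskBlock))        // true
	fmt.Println(diskMatchesEngine(dataEngineV1, diskBlock))        // false
	fmt.Println(diskMatchesEngine(dataEngineType("v3"), diskBlock)) // false
}
```

The switch form keeps the "unknown engine" path visible at a glance, which is the same goal the else-if rewrite above is aiming for.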
Actionable comments posted: 13
🧹 Outside diff range and nitpick comments (28)
upgrade/v17xto180/upgrade.go (2)
45-50: Simplify the DiskFileSpecMap iteration logic

While the logic is correct, the nil check and iteration structure can be improved for better readability and maintainability.
Consider this more idiomatic approach:
- for bi.Spec.DiskFileSpecMap != nil {
+ if bi.Spec.DiskFileSpecMap != nil {
      for diskUUID := range bi.Spec.DiskFileSpecMap {
          bi.Spec.DiskFileSpecMap[diskUUID].DataEngine = longhorn.DataEngineTypeV1
      }
- }
  }
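A detail relevant to both iteration suggestions in this file: ranging over a nil map in Go is legal and simply performs zero iterations, so the nil guard mainly documents intent rather than preventing a panic. A small stand-alone demonstration:

```go
package main

import "fmt"

// countEntries ranges over m without any nil guard; ranging over a nil map
// is safe in Go and performs zero iterations.
func countEntries(m map[string]int) int {
	n := 0
	for range m {
		n++
	}
	return n
}

func main() {
	var nilMap map[string]int // nil: never initialized
	fmt.Println(countEntries(nilMap))                 // 0
	fmt.Println(countEntries(map[string]int{"a": 1})) // 1
}
```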
52-57: Similar improvement needed for DiskFileStatusMap iteration

The same simplification can be applied here. While this code is similar to the DiskFileSpecMap section, I note the previous learning about keeping the logic within the function for simplicity when there's only a single caller.
Consider this improvement:
- for bi.Status.DiskFileStatusMap != nil {
+ if bi.Status.DiskFileStatusMap != nil {
      for diskUUID := range bi.Status.DiskFileStatusMap {
          bi.Status.DiskFileStatusMap[diskUUID].DataEngine = longhorn.DataEngineTypeV1
      }
- }
  }
engineapi/proxy_backing_image.go (2)
12-14: Add parameter validation for SPDKBackingImageCreate

Consider adding validation for required parameters before making the RPC call. For example, check for empty strings and validate that size is greater than 0.

 func (p *Proxy) SPDKBackingImageCreate(name, backingImageUUID, diskUUID, checksum, fromAddress, srcDiskUUID string, size uint64) (*imapi.BackingImage, error) {
+    if name == "" || backingImageUUID == "" || diskUUID == "" {
+        return nil, errors.New("required parameters cannot be empty")
+    }
+    if size == 0 {
+        return nil, errors.New("size must be greater than 0")
+    }
     return p.grpcClient.SPDKBackingImageCreate(name, backingImageUUID, diskUUID, checksum, fromAddress, srcDiskUUID, size)
 }
57-75: Add documentation for EngineBinary SPDK methods

Consider adding comments to explain why these methods are not implemented in EngineBinary and when they might be implemented in the future. This will help other developers understand the design decision.

+// SPDKBackingImageCreate is not implemented for EngineBinary as SPDK operations
+// are only supported through the proxy interface
 func (e *EngineBinary) SPDKBackingImageCreate(name, backingImageUUID, diskUUID, checksum, fromAddress, srcDiskUUID string, size uint64) (*imapi.BackingImage, error) {
     return nil, errors.New(ErrNotImplement)
 }
k8s/generate_code.sh (1)
Line range hint 47-56: Consider simplifying kustomize installation

While functional, the kustomize installation could be simplified to match the controller-gen pattern using go install.

-if ! command -v ${GOPATH}/bin/kustomize > /dev/null; then
-  echo "kustomize is missing"
-  echo "Prepare to install kustomize"
-  mkdir -p ${GOPATH}/src/github.com/kubernetes-sigs
-  pushd ${GOPATH}/src/github.com/kubernetes-sigs
-  git clone -b ${KUSTOMIZE_VERSION} https://github.com/kubernetes-sigs/kustomize.git 2>/dev/null || true
-  cd kustomize/kustomize
-  go install .
-  popd
+if ! command -v ${GOPATH}/bin/kustomize > /dev/null; then
+  echo "kustomize is missing"
+  echo "Prepare to install kustomize"
+  go install sigs.k8s.io/kustomize/kustomize/v5@${KUSTOMIZE_VERSION}
 fi
k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go (2)
15-32: Add field descriptions using kubebuilder markers

Each field should have a description to help API consumers understand its purpose and usage. Consider adding // +kubebuilder:validation:Description=<description> markers.

Example for a few fields:

 type BackingImageV2CopyInfo struct {
     // +optional
+    // +kubebuilder:validation:Description=Name of the backing image copy
     Name string `json:"name"`

     // +optional
+    // +kubebuilder:validation:Description=Unique identifier for the backing image copy
     UUID string `json:"uuid"`
23-27: Consider type improvements for better API semantics

- The Size field could benefit from using resource.Quantity instead of int64 for consistent handling of storage sizes in Kubernetes:

- Size int64 `json:"size"`
+ Size resource.Quantity `json:"size"`

- The Progress field could benefit from validation bounds:

 // +optional
+// +kubebuilder:validation:Minimum=0
+// +kubebuilder:validation:Maximum=100
 Progress int `json:"progress"`
k8s/pkg/apis/longhorn/v1beta2/backingimage.go (2)
101-105: Enhance state transition documentation

The current comment "It is pending -> in-progress -> ready/failed" could be more detailed to help users understand:
- What triggers each state transition
- What actions are taken in each state
- What conditions lead to the failed state
Consider expanding the comment to something like:
-// It is pending -> in-progress -> ready/failed
+// State transitions for V2FirstCopyStatus:
+// - pending: Initial state when v2 copy is requested
+// - in-progress: Copy operation is actively running
+// - ready: Copy completed successfully
+// - failed: Copy operation failed (see status.message for details)
Line range hint 32-105: Architecture feedback on v2 backing image support

The API changes for v2 backing image support are well-structured and follow Kubernetes API conventions. The changes maintain backward compatibility while adding necessary fields for v2 support. A few architectural considerations:

- The separation between v1 and v2 is clean and explicit through the DataEngine field.
- The status tracking through V2FirstCopyStatus and V2FirstCopyDisk provides good observability.
- The API structure allows for future versioning beyond v2 if needed.
Consider documenting the following aspects in the API documentation:
- Migration path from v1 to v2
- Compatibility matrix between v1 and v2 features
- Performance implications of v2 vs v1
manager/backingimage.go (1)
89-89: Follow Go parameter naming conventions

The parameter DataEngine should follow Go's camelCase naming convention for parameters. Consider renaming it to dataEngine.

-func (m *VolumeManager) CreateBackingImage(name, checksum, sourceType string, parameters map[string]string, minNumberOfCopies int, nodeSelector, diskSelector []string, secret, secretNamespace string, DataEngine string) (bi *longhorn.BackingImage, err error) {
+func (m *VolumeManager) CreateBackingImage(name, checksum, sourceType string, parameters map[string]string, minNumberOfCopies int, nodeSelector, diskSelector []string, secret, secretNamespace string, dataEngine string) (bi *longhorn.BackingImage, err error) {
webhook/resources/backingimage/validator.go (1)
Line range hint 128-146: Add validation to prevent DataEngine changes

The Update method currently validates changes to Secret and SecretNamespace fields, but it should also prevent changes to the DataEngine field to maintain consistency with the PR objectives.

Add the following validation:

 if oldBackingImage.Spec.SecretNamespace != backingImage.Spec.SecretNamespace {
     err := fmt.Errorf("changing secret namespace for BackingImage %v is not supported", oldBackingImage.Name)
     return werror.NewInvalidError(err.Error(), "")
 }
+
+if oldBackingImage.Spec.DataEngine != backingImage.Spec.DataEngine {
+    err := fmt.Errorf("changing data engine for BackingImage %v is not supported", oldBackingImage.Name)
+    return werror.NewInvalidError(err.Error(), "")
+}
engineapi/enginesim.go (1)
290-308: Add documentation for new SPDK methods

While the implementation returning ErrNotImplement is appropriate for a simulator, please add documentation comments for each method explaining:
- Purpose of each method
- Parameter descriptions
- Expected behavior when implemented
- Relationship to v2 backing image functionality
Example format:
// SPDKBackingImageCreate creates a new SPDK backing image.
// Parameters:
// - name: Name of the backing image
// - backingImageUUID: UUID of the backing image
// - diskUUID: UUID of the disk
// - checksum: Checksum of the backing image
// - fromAddress: Source address
// - srcDiskUUID: Source disk UUID
// - size: Size in bytes
// Returns:
// - BackingImage info or error if operation fails
webhook/resources/backingimage/mutator.go (1)
158-161: Consider comparing the typed DataEngine value directly

While the implementation is functionally correct, the string conversion is unnecessary: a defined string type compares directly against the untyped constant "", so dropping string(...) improves readability.

-if string(backingImage.Spec.DataEngine) == "" {
+if backingImage.Spec.DataEngine == "" {
     patchOps = append(patchOps, fmt.Sprintf(`{"op": "replace", "path": "/spec/dataEngine", "value": "%s"}`, longhorn.DataEngineTypeV1))
 }
controller/instance_manager_controller_test.go (2)
66-67: Add test cases for proxy connection counting

While the addition of proxyConnCounter is appropriate, consider adding test cases that verify the proxy connection counting functionality, especially in scenarios involving v2 backing images.

Example test cases to consider:
- Verify counter increments when proxy connections are established
- Verify counter decrements when proxy connections are closed
- Verify counter behavior during instance manager state transitions
Line range hint 83-377: Add test cases for v2 data engine scenarios

The test suite currently only tests DataEngineTypeV1 scenarios. Consider adding test cases that cover:

- Instance manager behavior with DataEngineTypeV2
- State transitions specific to v2 backing images
- Mixed v1/v2 scenarios
This will ensure the instance manager controller properly handles both v1 and v2 data engines, aligning with the PR's objective of supporting v2 backing images.
engineapi/types.go (2)
114-118: Add documentation for new SPDK methods

The new SPDK methods look well-structured, but would benefit from documentation comments explaining:
- Purpose and behavior of each method
- Expected parameter values and constraints
- Error conditions that callers should handle
- Context cancellation behavior for the watch method
Example documentation format:
// SPDKBackingImageCreate creates a new SPDK backing image.
// Parameters:
// - name: unique identifier for the backing image
// - backingImageUUID: UUID of the backing image
// - diskUUID: UUID of the target disk
// - checksum: verification checksum
// - fromAddress: source address
// - srcDiskUUID: source disk UUID
// - size: size in bytes (must be > 0)
// Returns:
// - *BackingImage: created backing image info
// - error: on creation failure
118-118: Document context usage in SPDKBackingImageWatch

The watch method accepts a context parameter, but its cancellation behavior needs to be documented. Ensure the implementation properly handles context cancellation for cleanup.
controller/controller_manager.go (1)
Line range hint 249-257: Consider implementing node-specific CPU requests for v2 data engine

The current implementation uses a global setting for v2 data engine CPU requests, while v1 supports node-specific configuration. This might limit flexibility in heterogeneous clusters.
Consider:
- Adding node-specific CPU request configuration for v2 volumes
- Implementing a similar fallback mechanism as v1 for better resource management
- Adding validation to ensure the CPU request is appropriate for the node's capacity
Would you like me to create a GitHub issue to track this enhancement?
controller/replica_controller.go (1)
428-430: Consider documenting DataEngine field usage

While the implementation is correct, consider adding a comment explaining how the DataEngine field affects backing image handling for different engine versions. This would help future maintainers understand the implications of this field.

Add a comment like:

 bi.Spec.DiskFileSpecMap[r.Spec.DiskID] = &longhorn.BackingImageDiskFileSpec{
+    // DataEngine field determines how the backing image is handled
+    // for different engine versions (v1/v2)
     DataEngine: r.Spec.DataEngine,
 }
k8s/crds.yaml (1)
585-588: Add descriptions for v2-specific status fields

The new v2-specific status fields lack descriptions explaining their purpose and usage.

Consider adding descriptions like this:

 v2FirstCopyDisk:
+  description: The disk UUID where the first v2 copy is being prepared
   type: string
 v2FirstCopyStatus:
+  description: The status of the first v2 copy preparation
   type: string
webhook/resources/volume/validator.go (1)
110-112: Clarify error message for volume size validation

The error message "volume size should be larger than the backing image size" might be misleading, as the check allows the volume size to be equal to the backing image size. Consider updating the error message to "volume size should be larger than or equal to the backing image size" for clarity.

Apply this diff to improve the error message:

 if volume.Spec.Size < backingImage.Status.VirtualSize {
-    return werror.NewInvalidError("volume size should be larger than the backing image size", "")
+    return werror.NewInvalidError("volume size should be larger than or equal to the backing image size", "")
 }
controller/backing_image_data_source_controller.go (1)
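Reduced to a pure function (names hypothetical), the semantics of the size check above make the wording issue concrete: with a strict < comparison, a volume exactly the size of the backing image is accepted, so the message should say "larger than or equal to".

```go
package main

import "fmt"

// volumeSizeValid mirrors the guard `volume.Spec.Size < backingImage.Status.VirtualSize`:
// only volumes strictly smaller than the backing image are rejected.
func volumeSizeValid(volumeSize, backingImageVirtualSize int64) bool {
	return volumeSize >= backingImageVirtualSize
}

func main() {
	fmt.Println(volumeSizeValid(10, 10)) // true: equal sizes pass the check
	fmt.Println(volumeSizeValid(9, 10))  // false: strictly smaller is rejected
	fmt.Println(volumeSizeValid(11, 10)) // true
}
```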
696-700
: Ensure consistent inclusion ofDataEngine
parameterInstead of appending the
DataEngine
parameter only when it isDataEngineTypeV2
, consider always including it. This ensures that the backing image data source pod is aware of theDataEngine
type being used, whether it's v1 or v2, enhancing consistency and future-proofing the code.Apply this diff to include the
DataEngine
parameter regardless of its value:- if bi.Spec.DataEngine == longhorn.DataEngineTypeV2 { cmd = append(cmd, "--parameters", fmt.Sprintf("%s=%s", longhorn.DataSourceTypeParameterDataEngine, longhorn.DataEngineTypeV2)) - } + cmd = append(cmd, "--parameters", fmt.Sprintf("%s=%s", longhorn.DataSourceTypeParameterDataEngine, bi.Spec.DataEngine))controller/instance_manager_controller.go (4)
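The unconditional form keeps the pod command deterministic for both engines. A sketch of the argument construction, with the flag and key treated as illustrative stand-ins for the real constants:

```go
package main

import "fmt"

// buildParams appends a --parameters key=value pair for the data engine
// unconditionally, so both "v1" and "v2" pods receive the same flag shape.
// The flag name and "data-engine" key here are illustrative, not the real constants.
func buildParams(cmd []string, dataEngine string) []string {
	return append(cmd, "--parameters", fmt.Sprintf("%s=%s", "data-engine", dataEngine))
}

func main() {
	cmd := buildParams([]string{"backing-image-data-source"}, "v1")
	fmt.Println(cmd) // [backing-image-data-source --parameters data-engine=v1]
}
```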
72-72
: Consider renamingimName
for clarityThe
InstanceManagerMonitor
struct now contains bothName
andimName
fields, which might cause confusion. Consider renamingimName
to something likeinstanceManagerName
to clearly distinguish its purpose.
86-86
: Clarify the usage ofclient
andproxyClient
fieldsHaving both
client
andproxyClient
inInstanceManagerMonitor
may lead to confusion about their roles. Consider adding comments or renaming variables to differentiate their usage clearly.
1582-1633
: Handle errors when starting backing image monitoringIn
startBackingImageMonitoring
, consider handling the error whereengineClientProxy
creation fails more gracefully. Currently, it logs the error and returns, but additional actions like retry mechanisms or alerts might be beneficial.
1730-1763
: Error handling when updating backing image mapIn
pollAndUpdateV2BackingImageMap
, errors during instance manager retrieval or status update are logged but not acted upon. Consider adding retry mechanisms or additional error handling to ensure reliability.controller/backing_image_controller.go (2)
450-450
: Fix typo in comment: "crahsh" should be "crash"In the comment at line 450, there's a typo:
// If the first v2 copy is in unknown, that means the instance manager may crahsh when creating the first v2 copy.
Please correct the typo:
// If the first v2 copy is in unknown, that means the instance manager may crash when creating the first v2 copy.
619-619
: Fix grammatical error in commentAt line 619, the comment reads:
// It will keeps in unknown state forever because spdk server has no snapshot lvol to pick up to reconstruct the record.
This should be corrected to improve grammar:
// It will remain in an unknown state forever because the SPDK server has no snapshot lvol to pick up to reconstruct the record.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (113)
go.mod
is excluded by!go.mod
go.sum
is excluded by!**/*.sum
,!go.sum
vendor/github.com/coreos/go-systemd/v22/dbus/dbus.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/methods.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/properties.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/set.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/subscription.go
is excluded by!vendor/**
vendor/github.com/coreos/go-systemd/v22/dbus/subscription_set.go
is excluded by!vendor/**
vendor/github.com/gammazero/deque/README.md
is excluded by!vendor/**
vendor/github.com/gammazero/deque/deque.go
is excluded by!vendor/**
vendor/github.com/gammazero/deque/doc.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/CONTRIBUTING.md
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/LICENSE
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/MAINTAINERS
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/README.md
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_anonymous.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_external.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/auth_sha1.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/call.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_darwin.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_other.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_unix.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/conn_windows.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/dbus.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/decoder.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/default_handler.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/doc.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/encoder.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/escape.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/export.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/homedir.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/match.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/message.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/object.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sequence.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sequential_handler.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/server_interfaces.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/sig.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_darwin.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_generic.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_nonce_tcp.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_tcp.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unix.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_dragonfly.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_freebsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_linux.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_netbsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_unixcred_openbsd.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/transport_zos.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant_lexer.go
is excluded by!vendor/**
vendor/github.com/godbus/dbus/v5/variant_parser.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/api/backing_image.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/client/instance.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/client/proxy_backing_image.go
is excluded by!vendor/**
vendor/github.com/longhorn/longhorn-instance-manager/pkg/meta/version.go
is excluded by!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/instance.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/proxy.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/imrpc/proxy_grpc.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/spdkrpc/spdk.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/longhorn/types/pkg/generated/spdkrpc/spdk_grpc.pb.go
is excluded by!**/*.pb.go
,!**/generated/**
,!vendor/**
vendor/github.com/moby/sys/mountinfo/mounted_linux.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/LICENSE
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_linux.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_linux_fuzzer.go
is excluded by!vendor/**
vendor/github.com/moby/sys/userns/userns_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/NOTICE
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/blkio_device.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/config.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/config_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/configs_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/hugepage_limit.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/intelrdt.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/interface_priority_map.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/mount.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/network.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/configs/rdma.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/devices/device.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/devices/device_unix.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/user.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/user/user_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_deprecated.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_fuzzer.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_maps.c
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_maps_linux.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runc/libcontainer/userns/userns_unsupported.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/config.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/state.go
is excluded by!vendor/**
vendor/github.com/opencontainers/runtime-spec/specs-go/version.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/decode.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/encode.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/expfmt.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/openmetrics_create.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/text_create.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/expfmt/text_parse.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labels.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labelset_string.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/labelset_string_go120.go
is excluded by!vendor/**
vendor/github.com/prometheus/common/model/metric.go
is excluded by!vendor/**
vendor/modules.txt
is excluded by!vendor/**
📒 Files selected for processing (30)
api/backingimage.go
(1 hunks)api/model.go
(2 hunks)client/generated_backing_image.go
(2 hunks)client/generated_backing_image_disk_file_status.go
(1 hunks)controller/backing_image_controller.go
(18 hunks)controller/backing_image_data_source_controller.go
(2 hunks)controller/controller_manager.go
(1 hunks)controller/instance_manager_controller.go
(9 hunks)controller/instance_manager_controller_test.go
(1 hunks)controller/replica_controller.go
(1 hunks)datastore/longhorn.go
(2 hunks)engineapi/enginesim.go
(2 hunks)engineapi/instance_manager.go
(1 hunks)engineapi/proxy_backing_image.go
(1 hunks)engineapi/types.go
(2 hunks)k8s/crds.yaml
(4 hunks)k8s/generate_code.sh
(2 hunks)k8s/pkg/apis/longhorn/v1beta2/backingimage.go
(4 hunks)k8s/pkg/apis/longhorn/v1beta2/backingimagedatasource.go
(1 hunks)k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go
(1 hunks)k8s/pkg/apis/longhorn/v1beta2/instancemanager.go
(1 hunks)k8s/pkg/apis/longhorn/v1beta2/zz_generated.deepcopy.go
(2 hunks)manager/backingimage.go
(2 hunks)scripts/build
(1 hunks)types/types.go
(2 hunks)upgrade/v17xto180/upgrade.go
(2 hunks)webhook/resources/backingimage/mutator.go
(2 hunks)webhook/resources/backingimage/validator.go
(2 hunks)webhook/resources/backupbackingimage/validator.go
(1 hunks)webhook/resources/volume/validator.go
(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- scripts/build
🧰 Additional context used
📓 Learnings (1)
upgrade/v17xto180/upgrade.go (1)
Learnt from: mantissahz
PR: longhorn/longhorn-manager#2182
File: upgrade/v17xto180/upgrade.go:170-177
Timestamp: 2024-11-10T16:45:04.898Z
Learning: In `upgrade/v17xto180/upgrade.go`, when there is only a single caller, prefer to keep parameter management logic within the `upgradeBackingImageDataSources` function rather than extracting it to a helper function, to maintain code simplicity.
🪛 golangci-lint (1.62.2)
controller/backing_image_controller.go
362-362: Error return value of bic.syncV2StatusWithInstanceManager
is not checked
(errcheck)
696-696: Error return value of bic.syncV2StatusWithInstanceManager
is not checked
(errcheck)
758-758: printf: (*github.com/sirupsen/logrus.Entry).Infof call has arguments but no formatting directives
(govet)
webhook/resources/backupbackingimage/validator.go
50-50: S1039: unnecessary use of fmt.Sprintf
(gosimple)
47-47: printf: fmt.Sprintf call has arguments but no formatting directives
(govet)
🪛 Shellcheck (0.10.0)
k8s/generate_code.sh
[warning] 74-74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames.
(SC2038)
🪛 GitHub Check: CodeFactor
k8s/generate_code.sh
[warning] 74-74: k8s/generate_code.sh#L74
Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames. (SC2038)
🔇 Additional comments (53)
webhook/resources/backupbackingimage/validator.go (2)
16-23
: LGTM! Clean validator implementation.
The validator struct and constructor follow good practices with proper dependency injection.
25-36
: Verify operation types coverage.
The validator is correctly configured for Create operations. This aligns with the PR objective to prevent backup operations for v2 backing images.
✅ Verification successful
Let me analyze the results and generate additional verification scripts to understand the validation requirements better.
The first script will help us understand the validation logic, while the second one will help us verify if there are any update/delete operations that might need validation. This will give us a complete picture of the validation requirements.
Let me try a different approach to verify the validation requirements.
This will help us understand both the validation logic and the controller operations to determine if additional operation types need validation.
Based on the analysis of the validator and controller implementations, I can now provide the final response:
Operation types are correctly configured for BackupBackingImage validation
The current validation scope limited to Create operations is appropriate because:
- The BackupBackingImage resource follows an immutable pattern where:
- Create operation requires validation to prevent v2 backing image backups
- Update operations only modify status subresource via the controller
- Delete operations handle cleanup through finalizers
- The controller implementation confirms this pattern, showing that updates are only performed on the status subresource and deletions are handled through finalizers.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify if other operations (Update/Delete) need validation
# by checking for related code patterns in the codebase
# Check for any backup operations in other handlers
rg -g '!vendor/*' -A 5 'backup.*backingimage'
Length of output: 25780
Script:
#!/bin/bash
# Check for validation logic in the validator implementation
ast-grep --pattern 'func (bbi *backupBackingImageValidator) Validate($$$) $$$'
# Check for any update/delete operations in the controller
rg -g '!vendor/*' -A 5 'func.*Update.*BackupBackingImage|func.*Delete.*BackupBackingImage' controller/
Length of output: 185
Script:
#!/bin/bash
# Check the validator implementation file for validation logic
cat webhook/resources/backupbackingimage/validator.go
# Check for any update/delete operations in the backup backing image controller
cat controller/backup_backing_image_controller.go
Length of output: 17556
upgrade/v17xto180/upgrade.go (3)
11-11: LGTM: Import addition is appropriate
The addition of the longhorn package import is necessary for accessing the DataEngineTypeV1 constant used in the upgrade logic.
41-43: LGTM: Proper initialization of DataEngine field
The code appropriately handles backward compatibility by setting DataEngineTypeV1 as the default when the DataEngine field is empty. This aligns with the PR objective of supporting v2 backing images while maintaining compatibility with existing v1 resources.
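The defaulting rule can be sketched as follows. This is an illustrative reduction of the upgrade logic, with BackingImageSpec trimmed to the single relevant field; the real code iterates over all existing resources during upgrade.

```go
package main

import "fmt"

const DataEngineTypeV1 = "v1"

// BackingImageSpec is a trimmed stand-in for the real v1beta2 spec.
type BackingImageSpec struct {
	DataEngine string
}

// defaultDataEngine reproduces the backward-compatibility rule: resources
// created before the field existed carry an empty DataEngine and are
// treated as v1.
func defaultDataEngine(spec *BackingImageSpec) {
	if spec.DataEngine == "" {
		spec.DataEngine = DataEngineTypeV1
	}
}

func main() {
	spec := &BackingImageSpec{}
	defaultDataEngine(spec)
	fmt.Println(spec.DataEngine) // pre-existing resources default to v1
}
```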
41-57: Verify no v2 resources exist before upgrade
While the code correctly sets v1 as the default engine type, it would be good to verify that no v2 resources exist before the upgrade.
✅ Verification successful
Based on the search results, I can now provide a final response:
No v2 resources can exist before upgrade
The code is correct and safe. The search results confirm that:
- The v2 data engine is a preview feature that must be explicitly enabled via the v2-data-engine setting
- V2 resources cannot exist before v1.8.0 because:
  - The v2 engine requires explicit configuration of multiple settings
  - The v2 engine is guarded by feature flags and validations
  - The webhook prevents creation of v2 resources when the feature is disabled

The upgrade code's assumption that all existing resources are v1 is valid, as v2 resources could not have been created in versions prior to v1.8.0.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any potential v2 engine references in the codebase that might affect the upgrade
rg -g '!upgrade/*' "DataEngineTypeV2" .
# Search for any v2-specific configurations or resources
rg -g '!upgrade/*' -i "v2.*(backing|image|engine)"
Length of output: 35768
engineapi/proxy_backing_image.go (1)
1-11: LGTM: Clean package structure and imports
The package declaration and imports are well-organized and contain only the necessary dependencies.
k8s/generate_code.sh (2)
40-44: LGTM! Improved tool version management.
The explicit path check and versioned installation of controller-gen ensures consistent behavior across different environments.
68-71: LGTM! Consistent use of explicit paths.
The changes ensure reliable CRD generation by using explicit paths from GOPATH/bin, which aligns with the PR's objective to support v2 backing images.
Also applies to: 74-74, 79-79
client/generated_backing_image_disk_file_status.go (2)
10-10: LGTM! Field addition follows the established patterns.
The new DataEngine field is properly structured with appropriate JSON/YAML tags and follows the same pattern as other fields in the struct.
10-10: Verify the source of this generated code.
Since this is a generated file, ensure that the corresponding changes have been made in the source templates or generation configuration.
✅ Verification successful
Let me try a different search strategy to find the source of this generated code.
Let me check the API definitions since we found that this is a Kubernetes-based project with code generation.
Let me check one more thing to understand how the client code is generated.
Based on the investigation, I can now provide a final response:
The source of this generated code has been verified
The file generated_backing_image_disk_file_status.go is part of the Kubernetes client code generation pipeline. The source definition is in k8s/pkg/apis/longhorn/v1beta2/backingimage.go, where the BackingImageDiskFileStatus struct is defined with the corresponding field DataEngine DataEngineType. The code generation is handled by the Kubernetes code-generator tools as configured in k8s/generate_code.sh.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Look for potential generator configurations or source templates
# Search for generator configuration files
echo "Searching for generator configuration files..."
fd -t f -e yaml -e json -e xml "generator|codegen|swagger"
# Search for potential template files
echo "Searching for template files that might generate this code..."
fd -t f "backing.*template|template.*backing"
# Search for generator scripts
echo "Searching for code generation scripts..."
fd -t f -e sh -e py "generate|codegen"
Length of output: 552
Script:
#!/bin/bash
# Look for references to the generated file and potential generation sources
# Search for references to the file
echo "Searching for references to the generated file..."
rg -l "generated_backing_image_disk_file_status"
# Search for swagger/OpenAPI specs that might be used for generation
echo "Searching for swagger/OpenAPI specs..."
fd -t f -e yaml -e json "swagger|api|openapi"
# Look inside k8s/generate_code.sh since it was found
echo "Contents of k8s/generate_code.sh:"
cat k8s/generate_code.sh
# Search for any files with 'backing' in their name that might be related
echo "Files containing 'backing' in their name:"
fd "backing"
Length of output: 7198
Script:
#!/bin/bash
# Check the API definitions and related files
# Check the backing image API definition
echo "Checking backing image API definitions..."
cat k8s/pkg/apis/longhorn/v1beta2/backingimage.go
# Check if there's a corresponding API file
echo "Checking main API file..."
cat api/backingimage.go
# Look for any OpenAPI/Swagger specs in the api directory
echo "Looking for API specs..."
fd -t f -e yaml -e json . api/
Length of output: 9106
Script:
#!/bin/bash
# Check for client code generation configuration or scripts
# Look for any generator scripts in the client directory
echo "Looking for generator scripts in client directory..."
fd -t f "generate" client/
# Check if there's a generator config
echo "Checking for generator config..."
cat client/generator_config.go 2>/dev/null || true
# Check the main client package
echo "Checking client package..."
cat client/generated.go 2>/dev/null || true
Length of output: 3290
api/backingimage.go (1)
55-58: LGTM! The changes align well with v2 backing image support.
The addition of the DataEngine parameter to CreateBackingImage is implemented correctly and maintains consistent error handling patterns.
Let's verify the API documentation and backward compatibility:
✅ Verification successful
Let me gather more information about the API documentation and backward compatibility.
Let me gather more specific information about the API validation and backward compatibility.
Based on the gathered information, I can now generate the final response. The validator code shows proper validation for DataEngine, and the API struct definition shows the field is properly declared in the types.
The DataEngine parameter addition is properly implemented with validation checks
The changes are well-implemented with:
- Proper API type definition in the BackingImage struct
- Validation checks in the webhook validator preventing unsupported operations with v2 data engine
- Consistent error handling in the API handler
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for API documentation updates and potential breaking changes
# Check if API documentation exists and contains DataEngine
fd -e md -e yaml -e json "api|swagger|openapi" | xargs rg -l "DataEngine"
# Look for existing API clients to verify backward compatibility
rg -A 5 "CreateBackingImage.*\(" --type go
Length of output: 5818
Script:
#!/bin/bash
# Check for API types and potential breaking changes
# Look for BackingImage type definition and its DataEngine field
ast-grep --pattern 'type BackingImage struct {
$$$
}'
# Check for API version definitions
fd -e go | xargs rg -l "v1beta|apiVersion"
# Look for API validation related to DataEngine
rg -B 3 -A 3 "DataEngine.*validate" --type go
Length of output: 100148
Script:
# Check for API validation related to DataEngine
ast-grep --pattern 'func $_(BackingImage $_, $_) {
$$$
DataEngine $$$
$$$
}'
# Check for API usage in webhook validators
rg -B 2 -A 2 "DataEngine" webhook/resources/backingimage/validator.go
# Check for API version conversion code
rg -B 2 -A 2 "DataEngine" k8s/pkg/apis/longhorn/v1beta2/conversion.go
Length of output: 783
k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go (1)
15-32: Verify version compatibility handling.
Since this introduces v2 functionality alongside existing v1 features, ensure there's a clear strategy for:
- Version detection and routing
- Migration of existing data
- API version compatibility
Consider adding version-specific validation rules to prevent v1 operations on v2 backing images as mentioned in the PR objectives (no cloning, exporting, or backup support for v2).
✅ Verification successful
Let's search for validation and version handling code to verify the compatibility concerns.
Let's search for validation, version-specific handling, and conversion logic to verify the version compatibility concerns.
Let's try to find the core type definitions and version handling code to understand the compatibility mechanisms.
Based on the search results, I can now provide a final response. The code shows proper version handling between v1beta1 and v1beta2 with conversion logic, type definitions, and client implementations in place. The BackingImageV2CopyInfo structure is part of a well-structured API evolution.
Version compatibility is properly handled
The codebase demonstrates robust version compatibility handling through:
- Clear separation between v1beta1 and v1beta2 packages
- Proper type definitions and conversion logic between versions
- Structured client implementations for both versions
- Dedicated manager services for handling version-specific operations
The BackingImageV2CopyInfo structure is appropriately integrated into the v2 API surface, with proper state tracking and version-specific handling in place.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for version-specific validation rules
ast-grep --pattern 'webhook.Validator$$$'
Length of output: 43
Script:
#!/bin/bash
# Search for validation related code for backing images
rg -A 5 "func.*Validate.*BackingImage" --type go
# Look for version-specific handling in backing image code
rg -A 5 "v[12].*BackingImage" --type go
# Search for webhook validation setup
rg -A 5 "webhook.*Register" --type go
# Look for version conversion code
rg -A 5 "Convert.*BackingImage" --type go
Length of output: 141323
Script:
#!/bin/bash
# Look for validation and webhook code specific to backing images
rg -A 10 "type.*Validator.*BackingImage|func.*Validate.*BackingImage" --type go
# Search for version-specific handling in backing image code
rg -A 10 "v2.*BackingImage.*State|v2.*BackingImage.*Type" --type go
# Look for conversion logic between v1 and v2
rg -A 10 "Convert.*v1.*v2.*BackingImage|Convert.*v2.*v1.*BackingImage" --type go
Length of output: 234
Script:
#!/bin/bash
# Look for backing image type definitions and state handling
rg -A 5 "type.*BackingImage.*struct" --type go
# Search for version-specific imports and package declarations
rg "package.*v[12]beta[12].*" k8s/pkg/apis/longhorn/
# Look for validation or conversion code in webhook directory
fd -t f . webhook/ -x rg -l "BackingImage"
# Check for version handling in manager code
rg -A 5 "BackingImage.*Manager" --type go
Length of output: 118192
client/generated_backing_image.go (2)
12-13: LGTM! Verify consistent DataEngine field usage.
The addition of the optional DataEngine field aligns with the PR objectives for v2 backing image support and follows consistent naming patterns.
Let's verify consistent field usage across the codebase:
✅ Verification successful
The previous results showed the field exists in multiple places, but we need to verify type consistency and usage patterns to ensure the client's string type aligns with the core API's DataEngineType.
DataEngine field usage is consistent and properly validated
The verification shows that:
- The field is consistently used across the codebase with proper type handling:
  - The core API uses DataEngineType (defined in k8s/pkg/apis/longhorn/v1beta2/volume.go)
  - The client code correctly uses the string type with proper JSON/YAML tags
- The field is properly validated:
  - The webhook validator prevents cloning from v2 backing images
  - The mutation webhook prevents changing the data engine after creation
- The field is correctly used in backing image operations:
  - Creation through the API
  - Node/disk selection based on data engine type
  - Status updates with proper type handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent DataEngine field usage in related structs and APIs
# Search for DataEngine field declarations
echo "Checking DataEngine field declarations:"
rg --type go "DataEngine.*\`json:\"dataEngine" -A 2
# Search for DataEngine field usage in API handlers
echo "Checking DataEngine field usage in handlers:"
rg --type go "DataEngine.*string.*backing.*image" -A 5
Length of output: 4722
Script:
#!/bin/bash
# Let's verify the DataEngine type consistency and its usage pattern
# Check the DataEngineType definition
echo "Checking DataEngineType definition:"
rg --type go "type DataEngineType string"
# Check DataEngine type constants/values
echo -e "\nChecking DataEngine type values:"
rg --type go "const.*DataEngine.*="
# Check if there are any validation functions for DataEngine
echo -e "\nChecking DataEngine validation:"
rg --type go "func.*validate.*DataEngine"
# Check the usage pattern in backing image related code
echo -e "\nChecking backing image related DataEngine usage:"
rg --type go -C 3 "BackingImage.*DataEngine|DataEngine.*BackingImage"
Length of output: 7224
32-32: Verify impact of YAML tag change on existing configurations.
The YAML tag format change from secretNamespace to secret_namespace improves consistency but may affect existing configurations.
Let's check for existing usage and documentation:
✅ Verification successful
YAML tag change is safe and consistent with codebase conventions
The change from secretNamespace to secret_namespace in the YAML tag follows the established snake_case convention used throughout the codebase. The verification shows:
- No existing YAML configurations use the old format outside of the CRDs
- All other fields in the same file use snake_case for YAML tags (e.g., current_checksum, disk_selector)
- The JSON tag remains secretNamespace, maintaining backward compatibility for API consumers
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for existing usage of the old YAML tag format and documentation references
# Search for existing YAML configurations using the old format
echo "Checking existing YAML configurations:"
rg --type yaml "secretNamespace:"
# Search for documentation references
echo "Checking documentation references:"
fd -e md -e rst | xargs rg "secretNamespace"
Length of output: 334
Script:
#!/bin/bash
# Let's examine the actual content of the CRD file to understand the context and impact
echo "Checking CRD content:"
cat k8s/crds.yaml
# Also search for any Go code that might be using this field
echo -e "\nChecking Go code references:"
rg "SecretNamespace" --type go -B 2 -A 2
Length of output: 131207
Script:
#!/bin/bash
# Let's check for any existing YAML files that might be using the old format
echo "Checking for YAML files using the old format:"
rg -l --type yaml "secretNamespace:" --glob '!k8s/crds.yaml'
# Let's also check the Go code for any references to this field
echo -e "\nChecking Go code references to SecretNamespace:"
rg "SecretNamespace" --type go -B 2 -A 2
# Let's check if there are any other similar YAML tag changes in the same file
echo -e "\nChecking the specific file for other YAML tags:"
cat client/generated_backing_image.go
Length of output: 29076
k8s/pkg/apis/longhorn/v1beta2/backingimagedatasource.go (1)
33-33: LGTM! Verify the constant usage across the codebase.
The new constant follows the established naming convention and aligns with the PR's objective of supporting v2 backing images through DataEngine integration.
Let's verify how this constant is being used:
✅ Verification successful
The search results show that the constant DataSourceTypeParameterDataEngine is used appropriately in backing_image_data_source_controller.go, where it sets a parameter carrying the data engine type. Let's verify the complete context of this usage and ensure it's properly integrated.
The constant is properly integrated and used as intended
The verification shows that DataSourceTypeParameterDataEngine is correctly used in the backing image data source controller to pass the data engine type (v2) as a parameter when handling v2 backing images. The usage is consistent with the codebase's v2 data engine implementation:
- It's used in backing_image_data_source_controller.go to configure data source parameters
- The parameter is set only when dealing with v2 backing images (bi.Spec.DataEngine == longhorn.DataEngineTypeV2)
- It integrates with the broader v2 backing image support system
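A rough sketch of that parameter handling, with the hypothetical helper buildDataSourceParameters standing in for the controller logic described above:

```go
package main

import "fmt"

const (
	DataEngineTypeV2                  = "v2"
	DataSourceTypeParameterDataEngine = "data-engine"
)

// buildDataSourceParameters sketches the controller behavior: the
// data-engine parameter is added only when the backing image uses the
// v2 data engine, so v1 images keep their original parameter set.
func buildDataSourceParameters(dataEngine string) map[string]string {
	params := map[string]string{}
	if dataEngine == DataEngineTypeV2 {
		params[DataSourceTypeParameterDataEngine] = dataEngine
	}
	return params
}

func main() {
	fmt.Println(buildDataSourceParameters("v2"))
	fmt.Println(buildDataSourceParameters("v1"))
}
```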
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the usage of DataSourceTypeParameterDataEngine constant
# Expected: Should be used in backing image operations, particularly in data source handling
# Search for direct constant usage
rg "DataSourceTypeParameterDataEngine" -A 3
# Search for the string value to catch any hardcoded usage
rg '"data-engine"' -A 3
# Look for backing image data source parameter handling
ast-grep --pattern 'Parameters[^{]*{[^}]*"data-engine"[^}]*}'
Length of output: 1618
Script:
#!/bin/bash
# Get more context around the usage in the controller
rg "DataSourceTypeParameterDataEngine" -B 5 -A 5 controller/backing_image_data_source_controller.go
# Check for DataEngineTypeV2 constant usage since it's used together
rg "DataEngineTypeV2" -A 2
# Look for any related parameter handling in the backing image data source
ast-grep --pattern 'func ($x *BackingImageDataSource) $$$Parameters$$$'
Length of output: 13734
k8s/pkg/apis/longhorn/v1beta2/backingimage.go (2)
74-76: LGTM! Well-defined validation constraint.
The DataEngine field is properly constrained to only accept "v1" or "v2" values, which aligns with the PR objectives and prevents invalid values.
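The enum constraint can be mirrored in plain Go for illustration. The real enforcement happens in the API server via the kubebuilder annotation (+kubebuilder:validation:Enum=v1;v2); validateDataEngineValue below is only a hypothetical stand-in showing the equivalent check.

```go
package main

import "fmt"

// validateDataEngineValue mirrors the kubebuilder enum constraint in
// ordinary Go: only the literal values "v1" and "v2" are accepted.
func validateDataEngineValue(v string) error {
	switch v {
	case "v1", "v2":
		return nil
	default:
		return fmt.Errorf("invalid data engine %q: must be v1 or v2", v)
	}
}

func main() {
	fmt.Println(validateDataEngineValue("v1"))
	fmt.Println(validateDataEngineValue("v3"))
}
```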
32-33: Verify DataEngineType definition
The DataEngine field uses DataEngineType, but this type is not defined in the current file. Please ensure that DataEngineType is properly defined and imported.
✅ Verification successful
DataEngineType is properly defined and accessible
The DataEngineType is defined in k8s/pkg/apis/longhorn/v1beta2/volume.go and is accessible because both files are in the same v1beta2 package. No additional imports are needed, since types defined in the same package are automatically available. The usage of DataEngineType in backingimage.go is correct and follows Kubernetes API conventions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for DataEngineType definition
rg "type DataEngineType"
Length of output: 93
Script:
#!/bin/bash
# Let's verify if DataEngineType is imported in backingimage.go
rg "import \(" -A 10 k8s/pkg/apis/longhorn/v1beta2/backingimage.go
Length of output: 68
Script:
#!/bin/bash
# Let's check the entire content of backingimage.go to see the imports section
cat k8s/pkg/apis/longhorn/v1beta2/backingimage.go
Length of output: 5405
k8s/pkg/apis/longhorn/v1beta2/instancemanager.go (2)
195-195: LGTM on field placement and structure.
The new BackingImages field is well-placed within the InstanceManagerStatus struct and follows consistent patterns with other map fields in terms of tags and structure.
195-195: Verify BackingImageV2CopyInfo type definition.
Let's ensure the BackingImageV2CopyInfo type is properly defined and accessible.
✅ Verification successful
BackingImageV2CopyInfo type is properly defined and integrated
The type BackingImageV2CopyInfo is:
- Defined in k8s/pkg/apis/longhorn/v1beta2/backingimagemanager.go
- Used consistently across the codebase
- Covered by generated deepcopy implementations
- Integrated with the engine API implementations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for BackingImageV2CopyInfo type definition
rg --type go "type BackingImageV2CopyInfo struct"
# Check for any imports of this type
rg --type go "BackingImageV2CopyInfo"
Length of output: 1780
manager/backingimage.go (2)
97-99: LGTM: Proper handling of DataEngine with backward compatibility
The implementation correctly:
- Sets v1 as the default data engine, maintaining backward compatibility
- Uses proper type conversion when setting the DataEngine field
- Integrates well with the existing BackingImageSpec structure
Also applies to: 117-117
97-99: Verify DataEngine validation
While the code sets a default value for DataEngine, it might be beneficial to validate that the provided value is one of the allowed DataEngine types.
Let's check if there's validation elsewhere in the codebase:
✅ Verification successful
DataEngine validation is properly implemented
The DataEngine field is well-validated through multiple mechanisms:
- It has Kubernetes-level validation via the +kubebuilder:validation:Enum=v1;v2 annotation
- There's a common validation function, ValidateRequiredDataEngineEnabled, that checks whether the specified engine type is enabled
- The constants DataEngineTypeV1 and DataEngineTypeV2 are properly defined
- The default value of DataEngineTypeV1 is appropriate
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for DataEngine type validation
# Look for validation of DataEngine type in webhook validators or type definitions
# Search for DataEngine type definition and allowed values
ast-grep --pattern 'type DataEngineType string'
# Search for validation logic
rg -A 5 'DataEngine.*validation|validate.*DataEngine'
Length of output: 482
Script:
#!/bin/bash
# Let's check the DataEngineType constants and any validation logic in the volume.go file
rg -A 5 "const.*DataEngine" k8s/pkg/apis/longhorn/v1beta2/volume.go
# Let's also check for any webhook validation
fd "webhook" --exec rg -l "DataEngine"
# Check for validation in the manager code
rg -A 5 "ValidateDataEngine|validateDataEngine"
Length of output: 782
Script:
#!/bin/bash
# Let's check the webhook validators for DataEngine validation
rg -A 10 "DataEngine.*" webhook/resources/backingimage/validator.go
# Also check the common validation functions
rg -A 10 "DataEngine.*" webhook/common/common.go
# Check the DataEngineType constants
rg -B 2 -A 2 "DataEngineType" k8s/pkg/apis/longhorn/v1beta2/volume.go
Length of output: 2482
webhook/resources/backingimage/validator.go (3)
75-77: LGTM: Appropriate validation for v2 backing image cloning
The validation correctly prevents cloning from v2 backing images, which is an unsupported operation as per the PR objectives.
93-97: LGTM: Appropriate validation for v2 volume exports
The validation correctly prevents exporting backing images from v2 volumes, which is an unsupported operation per the PR objectives. The placement and spacing of the check improve code readability.
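The two restrictions can be sketched as small guard functions. checkClone and checkExport are hypothetical names, not the actual validator methods; they only show the shape of the checks described above.

```go
package main

import "fmt"

// checkClone rejects cloning when the source backing image uses the v2
// data engine, which is unsupported per the PR objectives.
func checkClone(sourceDataEngine string) error {
	if sourceDataEngine == "v2" {
		return fmt.Errorf("cloning from a v2 backing image is not supported")
	}
	return nil
}

// checkExport rejects exporting a backing image from a v2 volume.
func checkExport(volumeDataEngine string) error {
	if volumeDataEngine == "v2" {
		return fmt.Errorf("exporting a backing image from a v2 volume is not supported")
	}
	return nil
}

func main() {
	fmt.Println(checkClone("v2"))
	fmt.Println(checkExport("v1"))
}
```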
Line range hint 1-199: Verify complete coverage of v2 backing image validations
Let's verify that all operations related to v2 backing images are properly validated across the codebase.
✅ Verification successful
Let me analyze the validation coverage for v2 backing images with one more verification script:
Based on the verification results, I can now provide a final response:
V2 backing image validations are properly implemented with comprehensive coverage
The verification shows that v2 backing image operations are properly validated across the codebase with consistent restrictions:
- Clone operations from v2 backing images are blocked
- Export operations from v2 volumes are prevented
- Backup operations for v2 backing images are restricted
- Data engine changes post-creation are prevented
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential missing v2 backing image validations
# Search for backing image related operations
echo "Searching for backing image operations..."
rg -l "BackingImage" --type go
# Search for DataEngine usage in validators
echo "Checking DataEngine validation coverage..."
rg "DataEngine.*not supported" --type go
# Search for v2 related constants and types
echo "Checking v2 related definitions..."
rg "DataEngineTypeV2" --type go
Length of output: 14852
Script:
#!/bin/bash
# Check for potential gaps in v2 backing image validation
# Search for v2 backing image validation in other webhook validators
echo "Checking v2 validation in other webhooks..."
rg -A 5 -B 5 "BackingImage.*Spec\.DataEngine" --type go
# Check for any v2 specific error messages or validations
echo "Checking v2 specific validations..."
rg "v2.*not supported|not supported.*v2" --type go
# Look for backing image data engine checks
echo "Checking data engine validations..."
rg "validateDataEngine|ValidateDataEngine" --type go
Length of output: 3458
engineapi/enginesim.go (3)
4-4: LGTM: Import additions are appropriate
The new imports are correctly added and align with the new SPDK backing image functionality.
Also applies to: 12-12
290-308: LGTM: Implementation approach is correct
The implementation of these methods:
- Follows consistent naming patterns
- Has appropriate parameter types
- Returns ErrNotImplement, which is correct for simulator functionality
- Properly uses context for the watch operation
- Aligns with PR objectives for v2 backing image support
290-308: Verify interface implementation completeness
Let's verify that all required SPDK methods from the interface are implemented in the simulator.
✅ Verification successful
Interface implementation is complete and correctly structured
All SPDK methods defined in the interface (engineapi/types.go) are properly implemented in the simulator (engineapi/enginesim.go) with matching signatures:
- SPDKBackingImageCreate
- SPDKBackingImageDelete
- SPDKBackingImageGet
- SPDKBackingImageList
- SPDKBackingImageWatch
Each method correctly returns ErrNotImplement, as expected for a simulator implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if all SPDK methods in the interface are implemented in the simulator
# Find all SPDK-related method signatures in the interface
echo "=== Interface SPDK Methods ==="
rg "SPDK.*\(" "engineapi/types.go" -A 1
# Find all SPDK-related method implementations in the simulator
echo -e "\n=== Simulator SPDK Implementations ==="
rg "func.*SPDK.*\(" "engineapi/enginesim.go" -A 1
Length of output: 1449
webhook/resources/backingimage/mutator.go (1)
192-196: LGTM: Proper validation of DataEngine immutability
The implementation correctly prevents modification of the DataEngine field after creation, which is crucial for maintaining data consistency. The error message is clear and follows the same pattern as other immutable field checks.
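The immutability rule can be sketched as a comparison between the old and new specs. checkDataEngineImmutable is a hypothetical reduction of the mutator check, not the actual webhook code.

```go
package main

import "fmt"

// BackingImageSpec is a trimmed stand-in for the real spec type.
type BackingImageSpec struct {
	DataEngine string
}

// checkDataEngineImmutable sketches the mutator rule: once a backing
// image is created, its DataEngine may not change on update.
func checkDataEngineImmutable(oldSpec, newSpec *BackingImageSpec) error {
	if oldSpec.DataEngine != newSpec.DataEngine {
		return fmt.Errorf("changing data engine from %q to %q is not allowed",
			oldSpec.DataEngine, newSpec.DataEngine)
	}
	return nil
}

func main() {
	oldSpec := &BackingImageSpec{DataEngine: "v1"}
	fmt.Println(checkDataEngineImmutable(oldSpec, &BackingImageSpec{DataEngine: "v2"}))
	fmt.Println(checkDataEngineImmutable(oldSpec, &BackingImageSpec{DataEngine: "v1"}))
}
```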
engineapi/types.go (2)
4-4: LGTM: Context import properly placed
The context import is correctly placed with the standard library imports and is required for the new watch functionality.
114-114: Consider adding size validation in SPDKBackingImageCreate
The size parameter of type uint64 should be validated to ensure it is greater than 0 and within acceptable bounds for SPDK operations.
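A guard of the suggested shape could look like this. The maximum bound is an assumed illustrative value, not one taken from the Longhorn codebase; the function name validateSize is hypothetical.

```go
package main

import "fmt"

// maxBackingImageSize is an assumed illustrative upper bound (1 TiB),
// not a constant from the real codebase.
const maxBackingImageSize = uint64(1) << 40

// validateSize shows the suggested guard for SPDKBackingImageCreate:
// reject zero and out-of-range sizes before touching SPDK.
func validateSize(size uint64) error {
	if size == 0 {
		return fmt.Errorf("size must be greater than 0")
	}
	if size > maxBackingImageSize {
		return fmt.Errorf("size %d exceeds maximum %d", size, maxBackingImageSize)
	}
	return nil
}

func main() {
	fmt.Println(validateSize(0))
	fmt.Println(validateSize(1 << 30)) // 1 GiB: within bounds
}
```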
controller/controller_manager.go (1)
88-89: Verify proxy connection counter implementation in controllers
The addition of proxyConnCounter to both the instance manager and backing image controllers aligns with the v2 backing image support. However, we should verify that these controllers properly handle the connection counting to prevent resource leaks.
Also applies to: 96-97
✅ Verification successful
Connection handling is properly implemented
The verification shows that:
- Both controllers properly initialize proxyConnCounter through constructor injection
- All engineClientProxy instances are properly closed using defer engineClientProxy.Close()
- The Counter interface is properly defined with increment/decrement operations
- The NewEngineClientProxy implementation in the engineapi package accepts the counter for connection tracking
The implementation follows proper resource management patterns with consistent connection counting and cleanup.
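The balanced increment/decrement lifecycle can be sketched as follows. Counter and proxy here are simplified stand-ins for the util counter and engine client proxy; the key point is that pairing the increment in the constructor with a decrement in Close, plus a defer at every call site, keeps the count from leaking.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Counter is a minimal stand-in for the shared connection counter.
type Counter struct{ n int64 }

func (c *Counter) Inc()       { atomic.AddInt64(&c.n, 1) }
func (c *Counter) Dec()       { atomic.AddInt64(&c.n, -1) }
func (c *Counter) Get() int64 { return atomic.LoadInt64(&c.n) }

// proxy sketches the proxy lifecycle: the counter is incremented on
// creation and decremented on Close.
type proxy struct{ counter *Counter }

func newProxy(c *Counter) *proxy {
	c.Inc()
	return &proxy{counter: c}
}

func (p *proxy) Close() { p.counter.Dec() }

// useProxy models a controller call site: defer Close guarantees the
// decrement even if the proxied calls return early or panic.
func useProxy(c *Counter) {
	p := newProxy(c)
	defer p.Close()
	// ... proxied engine calls would happen here ...
}

func main() {
	c := &Counter{}
	useProxy(c)
	fmt.Println(c.Get()) // 0: every increment was matched by a Close
}
```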
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how proxyConnCounter is used in the controllers
# Check for connection counter usage in instance manager
rg -A 5 "proxyConnCounter" "controller/instance_manager_controller.go"
# Check for connection counter usage in backing image controller
rg -A 5 "proxyConnCounter" "controller/backing_image_controller.go"
# Look for connection cleanup patterns
rg "defer.*proxyConnCounter" "controller/"
Length of output: 3776
Script:
#!/bin/bash
# Let's check the engineClientProxy implementation to verify proper connection handling
# Check the engineClientProxy implementation
ast-grep --pattern 'type EngineClientProxy struct {
$$$
}'
# Check Close method implementation
ast-grep --pattern 'func ($_ *EngineClientProxy) Close() {
$$$
}'
# Look for deferred Close() calls
rg "defer.*Close\(\)" "controller/"
Length of output: 2411
Script:
#!/bin/bash
# Let's check the engineapi package for EngineClientProxy implementation
# Check NewEngineClientProxy implementation
rg -A 10 "func NewEngineClientProxy" "engineapi/"
# Check Counter interface implementation
rg -A 5 "type Counter interface" "util/"
Length of output: 973
controller/replica_controller.go (2)
428-430: LGTM: DataEngine field correctly added to BackingImageDiskFileSpec
The addition of the DataEngine field to BackingImageDiskFileSpec is correct and aligns with the PR objective of supporting v2 backing images. The field is properly propagated from the replica's spec.
428-430: Verify DataEngine field usage in related code paths
Let's verify that the DataEngine field is consistently handled in related code paths.
✅ Verification successful
Let me gather more information to verify the DataEngine field usage comprehensively.
Let me check one more aspect to ensure DataEngine field handling is complete.
Based on the comprehensive analysis of the codebase, I can now provide the final response:
DataEngine field is consistently handled in backing image related code paths
The DataEngine field assignment in the replica controller is consistent with the rest of the codebase. The code correctly:
- Sets DataEngine in BackingImageDiskFileSpec based on replica's DataEngine type
- Maintains consistent handling between V1 and V2 data engines
- Properly integrates with backing image controller's data engine-specific logic
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent DataEngine field usage in backing image related code
# Check backing image controller handling of DataEngine
rg -A 5 "BackingImageDiskFileSpec.*DataEngine"
# Check for potential missing DataEngine field assignments
ast-grep --pattern 'BackingImageDiskFileSpec {
$$$
}'
Length of output: 586
Script:
#!/bin/bash
# Search for all BackingImageDiskFileSpec struct instantiations
rg -B 2 -A 2 "BackingImageDiskFileSpec\{"
# Check for DataEngine field references in backing image related files
rg -B 2 -A 2 "DataEngine.*=|=.*DataEngine" "controller/*backing*" "controller/*replica*"
# Look for BackingImageDiskFileSpec type definition
rg -A 5 "type BackingImageDiskFileSpec struct"
Length of output: 5541
Script:
#!/bin/bash
# Check DataEngine field handling in replica and backing image controllers
rg -B 2 -A 2 "DataEngine.*=|=.*DataEngine" controller/replica_controller.go controller/backing_image_controller.go
# Look for DataEngineType enum values and usage
rg -A 2 "DataEngineType(V1|V2)"
# Check if there are any other files handling backing image DataEngine
rg -l "BackingImageDiskFileSpec.*DataEngine|DataEngine.*BackingImageDiskFileSpec"
Length of output: 40666
engineapi/instance_manager.go (1)
576-578
: LGTM! Verify field usage across the codebase.
The addition of the DiskName, DiskUUID, and BackingImageName fields to the ReplicaCreateRequest aligns with the PR objectives for v2 backing image support. The field mappings are correct and follow consistent naming conventions.
Let's verify the usage of these fields across the codebase:
✅ Verification successful
Based on the search results, I can now generate the final response:
The fields DiskName, DiskUUID, and BackingImageName are correctly used in the ReplicaCreateRequest
The fields are properly integrated into the codebase:
- DiskName and DiskUUID are used consistently for disk identification across the codebase, particularly in disk management and replica operations
- BackingImageName is used throughout the backing image management system, including backup operations and data source handling
- The field mappings in the ReplicaCreateRequest struct align with the instance manager client's expectations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of the new fields in ReplicaCreateRequest
# Check for DiskName field usage
echo "Checking DiskName field usage..."
rg "DiskName.*ReplicaCreateRequest" -A 2
# Check for DiskUUID field usage
echo "Checking DiskUUID field usage..."
rg "DiskUUID.*ReplicaCreateRequest" -A 2
# Check for BackingImageName field usage
echo "Checking BackingImageName field usage..."
rg "BackingImageName.*ReplicaCreateRequest" -A 2
Length of output: 377
Script:
#!/bin/bash
# Let's try a different approach to find the ReplicaCreateRequest struct and its usage
# First, find the ReplicaCreateRequest struct definition
echo "Finding ReplicaCreateRequest struct definition..."
ast-grep --pattern 'type ReplicaCreateRequest struct {
$$$
}'
# Then search for any usage of ReplicaCreateRequest
echo "Finding ReplicaCreateRequest usage..."
rg "ReplicaCreateRequest" -B 2 -A 2
# Also search for the specific field assignments
echo "Finding field assignments..."
rg "DiskName\s*:|DiskUUID\s*:|BackingImageName\s*:" -B 2 -A 2
Length of output: 82539
types/types.go (3)
756-758
: LGTM: Error check function follows consistent pattern
The function follows the same pattern as other error check functions in the file and correctly identifies logical volume related errors.
1270-1272
: LGTM: Consistent naming function
The function follows the project's naming conventions and provides a clear way to generate backing image monitor names.
1274-1276
: LGTM: Clear v2 backing image name generation
The function provides a clear way to generate v2 backing image names by combining the backing image name with disk UUID.
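As a minimal, self-contained sketch of the naming scheme described above (the helper name and separator here are assumptions for illustration, not necessarily the exact Longhorn implementation):

```go
package main

import "fmt"

// getV2BackingImageName is a hypothetical sketch: it derives a per-disk
// v2 copy name by joining the backing image name with the disk UUID.
func getV2BackingImageName(backingImageName, diskUUID string) string {
	return fmt.Sprintf("%s-%s", backingImageName, diskUUID)
}

func main() {
	fmt.Println(getV2BackingImageName("bi-example", "d7c02e9a"))
	// → bi-example-d7c02e9a
}
```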
api/model.go (2)
269-269
: LGTM: DataEngine field addition to BackingImage struct
The addition of the DataEngine field with proper JSON tagging is well-structured and aligns with the PR's objective of supporting v2 backing images.
2015-2015
: LGTM: DataEngine field mapping in toBackingImageResource
The conversion of the DataEngine field from the domain model to the API model is implemented correctly.
Let's verify the validation handling for the DataEngine field:
✅ Verification successful
Let me gather more information about the DataEngine validation by searching for its type definition and usage.
Based on the extensive validation code found in the codebase, I can now generate the final response:
DataEngine field validation is properly handled in the codebase
The DataEngine field has comprehensive validation across multiple layers:
- Webhook validation ensures valid DataEngine values and prevents unsupported operations for V2 engine
- Type validation through the ValidateUnmapMarkSnapChainRemoved function
- Feature compatibility checks through ValidateV2DataEngineEnabled
- Node compatibility validation via IsV2DataEngineDisabledForNode
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for DataEngine validation in the codebase
# Look for validation logic related to DataEngine
rg -A 5 "DataEngine.*validate|validate.*DataEngine"
# Check for related conversion functions that might need similar changes
ast-grep --pattern 'func to$$$Resource($$$) {
$$$
DataEngine: $$$
$$$
}'
Length of output: 440
Script:
#!/bin/bash
# Search for DataEngine type definition and related constants
rg -A 5 "type.*DataEngine"
# Search for DataEngine validation in API validation files
fd "validate.*go" --exec rg -A 5 "DataEngine"
# Search for DataEngine usage in conversion functions
rg -A 5 "DataEngine.*Spec|Spec.*DataEngine"
Length of output: 101528
k8s/pkg/apis/longhorn/v1beta2/zz_generated.deepcopy.go (3)
488-502
: LGTM: BackingImageV2CopyInfo deepcopy implementation is correct
The implementation follows the standard Kubernetes deepcopy patterns and correctly handles value copying for the struct.
1522-1528
: LGTM: InstanceManagerStatus deepcopy implementation properly handles the new BackingImages field
The implementation correctly handles deep copying of the BackingImages map, following the same pattern as other maps in the struct.
Line range hint 1-1
: Verify that the deepcopy code is up to date
Since this is an auto-generated file, let's verify that it's in sync with the type definitions.
✅ Verification successful
The deepcopy code is properly managed by the code generation tooling
The verification shows that:
- The deepcopy code is auto-generated by the official Kubernetes code-generator tool (v0.18.0)
- The generation is handled by a well-structured generate_code.sh script that:
  - Generates deepcopy, client, lister, and informer code
  - Uses controller-gen to generate CRDs
  - Applies necessary patches
- The file is marked with //go:build !ignore_autogenerated, which is the standard marker for Kubernetes generated code
The code generation setup follows Kubernetes best practices and there's no need for manual verification of the generated code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the deepcopy code is up to date with the types
# Expected: No differences in the generated code
# Find the deepcopy-gen tool configuration
fd -t f -e yaml -e json . | xargs grep -l "deepcopy-gen"
# Find all the types that should have deepcopy
fd -t f "types\.go$" | xargs grep -l "genclient"
Length of output: 3006
Script:
#!/bin/bash
# Let's find the Longhorn API types and their deepcopy generation configuration
# Find Longhorn API types
fd -t f "types\.go$" k8s/pkg/apis/longhorn
# Check for deepcopy-gen markers in the types file
fd -t f "types\.go$" k8s/pkg/apis/longhorn | xargs grep -l "+k8s:deepcopy-gen"
# Check for any Makefile or script that handles code generation
fd Makefile . | xargs grep -l "deepcopy-gen"
fd -t f -e sh . | xargs grep -l "deepcopy-gen"
Length of output: 215
Script:
#!/bin/bash
# Let's try a different approach to find the code generation configuration
# Look for the types file directly
ls -l k8s/pkg/apis/longhorn/v1beta2/types.go
# Check the content of types.go if it exists
cat k8s/pkg/apis/longhorn/v1beta2/types.go
# Look for any go:generate directives
find k8s -type f -name "*.go" -exec grep -l "//go:generate" {} \;
# Look for hack/generate scripts which are common in k8s projects
ls -l hack/
Length of output: 400
Script:
#!/bin/bash
# Let's check the repository structure and find all Go files in the k8s directory
# List the k8s directory structure
ls -R k8s/
# Find all Go files in k8s directory
find k8s -type f -name "*.go"
# Check if there are any other API-related directories
find . -type f -name "zz_generated.deepcopy.go"
Length of output: 26868
Script:
#!/bin/bash
# Let's check the code generation script and its configuration
# Check the generate_code.sh script
cat k8s/generate_code.sh
# Check if there are any code-gen markers in the source files
find k8s/pkg/apis/longhorn/v1beta2/ -type f -name "*.go" ! -name "zz_generated*" -exec grep -l "//+k8s:deepcopy-gen" {} \;
Length of output: 2942
k8s/crds.yaml (2)
499-503
: LGTM: DataEngine enum field properly defined
The addition of the dataEngine field with enum values v1 and v2 aligns with the PR objectives for v2 backing image support. The enum validation ensures only valid values can be used.
2280-2301
: LGTM: BackingImages status field properly structured
The addition of the backingImages field in the InstanceManager status, with comprehensive sub-fields for tracking backing image states, is well-structured and includes all necessary information such as checksum, size, and progress.
datastore/longhorn.go (1)
Line range hint 2896-2943
: LGTM! Verify disk type compatibility with data engines
The changes correctly enforce disk type requirements based on data engine version:
- V2 data engine requires block disks
- V1 data engine requires filesystem disks
Run the following script to verify disk type compatibility:
✅ Verification successful
Let me try a different approach to verify the disk type compatibility.
Based on the search results, I can now provide a final verification of the disk type compatibility changes:
LGTM! Disk type compatibility is correctly enforced
The code changes correctly enforce disk type requirements based on data engine version with proper validation:
- V2 data engine requires block disks (DiskTypeBlock)
- V1 data engine requires filesystem disks (DiskTypeFilesystem)
The implementation is consistent across:
- Node validation in webhook
- Disk scheduling logic
- Disk monitoring and management
- Orphan handling
- Data store operations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify disk type compatibility for backing images across data engines
# Check for any backing images using incompatible disk types
rg -A 5 'DiskType: ' | grep -E 'block|filesystem'
# Check for any instance managers with incompatible disk types
ast-grep --pattern $'spec:
$$$
diskType: $_
$$$'
Length of output: 204
Script:
#!/bin/bash
# Search for disk type constants and their usage
rg "DiskType" -B 2 -A 2
# Search for specific data engine related disk type checks
rg "diskSpec.Type.*DiskType" -B 2 -A 2
Length of output: 57008
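The enforced pairing can be sketched as a standalone check (the type and constant names mirror the review summary, but the helper itself is hypothetical, not the actual Longhorn code):

```go
package main

import (
	"errors"
	"fmt"
)

type DataEngineType string
type DiskType string

const (
	DataEngineTypeV1 DataEngineType = "v1"
	DataEngineTypeV2 DataEngineType = "v2"

	DiskTypeFilesystem DiskType = "filesystem"
	DiskTypeBlock      DiskType = "block"
)

// checkDiskCompatibility mirrors the rule above: the v2 data engine
// requires block disks, while the v1 data engine requires filesystem disks.
func checkDiskCompatibility(engine DataEngineType, disk DiskType) error {
	switch engine {
	case DataEngineTypeV2:
		if disk != DiskTypeBlock {
			return errors.New("v2 data engine requires a block disk")
		}
	case DataEngineTypeV1:
		if disk != DiskTypeFilesystem {
			return errors.New("v1 data engine requires a filesystem disk")
		}
	}
	return nil
}

func main() {
	fmt.Println(checkDiskCompatibility(DataEngineTypeV2, DiskTypeBlock))      // <nil>
	fmt.Println(checkDiskCompatibility(DataEngineTypeV2, DiskTypeFilesystem)) // v2 data engine requires a block disk
}
```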
controller/instance_manager_controller.go (8)
62-62
: Proper addition of the proxyConnCounter field
The proxyConnCounter field is correctly added to track proxy connection counts.
111-111
: Update constructor to include proxyConnCounter
The NewInstanceManagerController function signature is correctly updated to include the proxyConnCounter parameter.
133-133
: Initialize proxyConnCounter in the controller
The proxyConnCounter is properly initialized in the InstanceManagerController.
504-504
: Verify resetting im.Status.BackingImages to nil
When syncing instance status, im.Status.BackingImages is set to nil. Ensure this doesn't inadvertently remove data needed for backing image monitoring.
813-815
: Start backing image monitoring for V2 data engine
Starting backing image monitoring when the data engine is V2 is appropriate.
818-820
: Stop backing image monitoring when not using V2 data engine
Correctly stopping backing image monitoring when the data engine is not V2.
1259-1261
: Ensure backing image monitor is stopped during cleanup
In cleanupInstanceManager, ensure that the backing image monitor is properly stopped to prevent resource leaks.
1635-1651
: Properly stop backing image monitoring
The stopBackingImageMonitoring function correctly handles stopping the monitor, ensuring channels are closed appropriately.
func (bbi *backupBackingImageValidator) Create(request *admission.Request, newObj runtime.Object) error {
	backupBackingImage, ok := newObj.(*longhorn.BackupBackingImage)
	if !ok {
		return werror.NewInvalidError(fmt.Sprintf("%v is not a *longhorn.BackupBackingImage", newObj), "")
	}
	backingImageName := backupBackingImage.Name

	backingImage, err := bbi.ds.GetBackingImageRO(backingImageName)
	if err != nil {
		return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image for backup", backingImage.Spec.SourceParameters, backingImage.Spec.SourceType), "")
	}
	if backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2 {
		return werror.NewInvalidError(fmt.Sprintf("v2 backing image is not supported for backup"), "")
	}

	return nil
}
Fix error handling and string formatting issues.
There are several issues in the validation logic:
- The error message in line 47 incorrectly combines unrelated fields and has a malformed format string
- Unnecessary use of fmt.Sprintf in line 50
- Error messages could be more descriptive for better debugging
Apply these fixes:
func (bbi *backupBackingImageValidator) Create(request *admission.Request, newObj runtime.Object) error {
backupBackingImage, ok := newObj.(*longhorn.BackupBackingImage)
if !ok {
- return werror.NewInvalidError(fmt.Sprintf("%v is not a *longhorn.BackupBackingImage", newObj), "")
+ return werror.NewInvalidError(fmt.Sprintf("expected *longhorn.BackupBackingImage, got %T", newObj), "")
}
backingImageName := backupBackingImage.Name
backingImage, err := bbi.ds.GetBackingImageRO(backingImageName)
if err != nil {
- return werror.NewInvalidError(fmt.Sprintf("failed to get the backing image for backup", backingImage.Spec.SourceParameters, backingImage.Spec.SourceType), "")
+ return werror.NewInvalidError(fmt.Sprintf("failed to get backing image %q: %v", backingImageName, err), "")
}
if backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2 {
- return werror.NewInvalidError(fmt.Sprintf("v2 backing image is not supported for backup"), "")
+ return werror.NewInvalidError("backup operation is not supported for v2 backing images", "")
}
return nil
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (bbi *backupBackingImageValidator) Create(request *admission.Request, newObj runtime.Object) error {
	backupBackingImage, ok := newObj.(*longhorn.BackupBackingImage)
	if !ok {
		return werror.NewInvalidError(fmt.Sprintf("expected *longhorn.BackupBackingImage, got %T", newObj), "")
	}
	backingImageName := backupBackingImage.Name
	backingImage, err := bbi.ds.GetBackingImageRO(backingImageName)
	if err != nil {
		return werror.NewInvalidError(fmt.Sprintf("failed to get backing image %q: %v", backingImageName, err), "")
	}
	if backingImage.Spec.DataEngine == longhorn.DataEngineTypeV2 {
		return werror.NewInvalidError("backup operation is not supported for v2 backing images", "")
	}
	return nil
}
🧰 Tools
🪛 golangci-lint (1.62.2)
50-50: S1039: unnecessary use of fmt.Sprintf
(gosimple)
47-47: printf: fmt.Sprintf call has arguments but no formatting directives
(govet)
func parseBackingImage(bi *imapi.BackingImage) *longhorn.BackingImageV2CopyInfo {
	return &longhorn.BackingImageV2CopyInfo{
		Name:            bi.Name,
		UUID:            bi.BackingImageUUID,
		DiskUUID:        bi.DiskUUID,
		Size:            int64(bi.Size),

		Progress:        bi.Status.Progress,
		State:           longhorn.BackingImageState(bi.Status.State),
		CurrentChecksum: bi.Status.CurrentChecksum,
		Message:         bi.Status.ErrorMsg,
	}
}
🛠️ Refactor suggestion
Add safety checks in parseBackingImage
The function should include input validation and handle potential integer overflow during size conversion.
func parseBackingImage(bi *imapi.BackingImage) *longhorn.BackingImageV2CopyInfo {
+	if bi == nil {
+		return nil
+	}
+	// Check for potential overflow when converting uint64 to int64
+	// (note: compare against math.MaxInt64; uint64(^int64(0)) would be
+	// math.MaxUint64, which bi.Size can never exceed)
+	if bi.Size > uint64(math.MaxInt64) {
+		return &longhorn.BackingImageV2CopyInfo{
+			Name:     bi.Name,
+			UUID:     bi.BackingImageUUID,
+			DiskUUID: bi.DiskUUID,
+			Message:  "size exceeds maximum int64 value",
+			State:    longhorn.BackingImageStateError,
+		}
+	}
	return &longhorn.BackingImageV2CopyInfo{
		Name: bi.Name,
		UUID: bi.BackingImageUUID,
return &longhorn.BackingImageV2CopyInfo{
Name: bi.Name,
UUID: bi.BackingImageUUID,
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func parseBackingImage(bi *imapi.BackingImage) *longhorn.BackingImageV2CopyInfo {
	if bi == nil {
		return nil
	}
	// Check for potential overflow when converting uint64 to int64
	if bi.Size > uint64(math.MaxInt64) {
		return &longhorn.BackingImageV2CopyInfo{
			Name:     bi.Name,
			UUID:     bi.BackingImageUUID,
			DiskUUID: bi.DiskUUID,
			Message:  "size exceeds maximum int64 value",
			State:    longhorn.BackingImageStateError,
		}
	}
	return &longhorn.BackingImageV2CopyInfo{
		Name:            bi.Name,
		UUID:            bi.BackingImageUUID,
		DiskUUID:        bi.DiskUUID,
		Size:            int64(bi.Size),
		Progress:        bi.Status.Progress,
		State:           longhorn.BackingImageState(bi.Status.State),
		CurrentChecksum: bi.Status.CurrentChecksum,
		Message:         bi.Status.ErrorMsg,
	}
}
if [ -e ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/patches/crd ]; then
	cp -a ${GOPATH}/src/${LH_MANAGER_DIR}/k8s/patches/crd patches
-	find patches -type f | xargs -i sh -c 'kustomize edit add patch --path {}'
+	find patches -type f | xargs -i sh -c '${GOPATH}/bin/kustomize edit add patch --path {}'
Fix unsafe file handling in find | xargs pipeline.
The current implementation may break with filenames containing spaces or special characters.
Apply this fix to handle filenames safely:
- find patches -type f | xargs -i sh -c '${GOPATH}/bin/kustomize edit add patch --path {}'
+ find patches -type f -print0 | xargs -0 -I{} sh -c '${GOPATH}/bin/kustomize edit add patch --path "{}"'
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
find patches -type f -print0 | xargs -0 -I{} sh -c '${GOPATH}/bin/kustomize edit add patch --path "{}"'
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 74-74: Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames.
(SC2038)
🪛 GitHub Check: CodeFactor
[warning] 74-74: k8s/generate_code.sh#L74
Use 'find .. -print0 | xargs -0 ..' or 'find .. -exec .. +' to allow non-alphanumeric filenames. (SC2038)
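The failure mode is easy to reproduce in isolation (the temporary directory and file names below are purely illustrative):

```shell
set -eu
demo=$(mktemp -d)
touch "$demo/patch one.yaml" "$demo/patch two.yaml"

# Unsafe: newline-delimited output is word-split, so "patch one.yaml"
# arrives at echo as two separate arguments (4 invocations total).
unsafe_count=$(find "$demo" -type f | xargs -n1 echo | wc -l)

# Safe: NUL-delimited names pass through intact, one argument per file.
safe_count=$(find "$demo" -type f -print0 | xargs -0 -n1 echo | wc -l)

echo "unsafe: $unsafe_count args, safe: $safe_count args"
rm -rf "$demo"
```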
@@ -192,6 +192,8 @@ type InstanceManagerStatus struct {
	// +nullable
	InstanceReplicas map[string]InstanceProcess `json:"instanceReplicas,omitempty"`
	// +optional
	BackingImages map[string]BackingImageV2CopyInfo `json:"backingImages"`
🛠️ Refactor suggestion
Add field documentation and Kubernetes validation tags.
The new BackingImages field should include:
- A documentation comment explaining its purpose
- The +nullable tag, since it's a map field (Kubernetes best practice)
Apply this diff:
// +optional
+ // BackingImages contains the status of v2 backing image copies managed by this instance
+ // +nullable
BackingImages map[string]BackingImageV2CopyInfo `json:"backingImages"`
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
	// +optional
	// BackingImages contains the status of v2 backing image copies managed by this instance
	// +nullable
	BackingImages map[string]BackingImageV2CopyInfo `json:"backingImages"`
dataEngine:
  type: string
🛠️ Refactor suggestion
Consider adding validation for dataEngine values in diskFileSpecMap
The dataEngine
field in diskFileSpecMap
properties should have the same enum validation as the top-level dataEngine
field to maintain consistency.
Add enum validation:
dataEngine:
+ enum:
+ - v1
+ - v2
type: string
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
dataEngine:
  enum:
  - v1
  - v2
  type: string
	defer func() {
		m.logger.Infof("Stop monitoring spdk backing image %v", m.Name)
		cancel()
		m.StopMonitorWithLock()
		close(m.monitorVoluntaryStopCh)
	}()

	go func() {
		continuousFailureCount := 0
		for {
			if continuousFailureCount >= engineapi.MaxMonitorRetryCount {
				m.logger.Errorf("Instance manager SPDK backing image monitor streaming continuously errors receiving items for %v times, will stop the monitor itself", engineapi.MaxMonitorRetryCount)
				m.StopMonitorWithLock()
			}

			if m.CheckMonitorStoppedWithLock() {
				return
			}

			_, err = notifier.Recv()
			if err != nil {
				m.logger.WithError(err).Error("Failed to receive next item in spdk backing image watch")
				continuousFailureCount++
				time.Sleep(engineapi.MinPollCount * engineapi.PollInterval)
			} else {
				m.lock.Lock()
				m.updateNotification = true
				m.lock.Unlock()
			}
		}
	}()

	timer := 0
	ticker := time.NewTicker(engineapi.MinPollCount * engineapi.PollInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if m.CheckMonitorStoppedWithLock() {
				return
			}

			needUpdate := false

			m.lock.Lock()
			timer++
			if timer >= engineapi.MaxPollCount || m.updateNotification {
				needUpdate = true
				m.updateNotification = false
				timer = 0
			}
			m.lock.Unlock()

			if !needUpdate {
				continue
			}
			if needStop := m.pollAndUpdateV2BackingImageMap(); needStop {
				return
			}
		case <-m.stopCh:
			return
		}
	}
}

func (m *InstanceManagerMonitor) pollAndUpdateV2BackingImageMap() (needStop bool) {
	im, err := m.ds.GetInstanceManager(m.imName)
	if err != nil {
		if datastore.ErrorIsNotFound(err) {
			m.logger.Warn("stop monitoring because the instance manager no longer exists")
			return true
		}
		utilruntime.HandleError(errors.Wrapf(err, "failed to get instance manager %v for monitoring", m.Name))
		return false
	}

	if im.Status.OwnerID != m.controllerID {
		m.logger.Warnf("stop monitoring the instance manager on this node (%v) because the instance manager has new ownerID %v", m.controllerID, im.Status.OwnerID)
		return true
	}

	// the key in the resp is in the form of "bi-%s-disk-%s" so we can distinguish the different disks in the same instance manager
	resp, err := m.proxyClient.SPDKBackingImageList()
	if err != nil {
		utilruntime.HandleError(errors.Wrapf(err, "failed to poll spdk backing image info to update instance manager %v", m.Name))
		return false
	}

	if reflect.DeepEqual(im.Status.BackingImages, resp) {
		return false
	}

	im.Status.BackingImages = resp
	if _, err := m.ds.UpdateInstanceManagerStatus(im); err != nil {
		utilruntime.HandleError(errors.Wrapf(err, "failed to update v2 backing image map for instance manager %v", m.Name))
		return false
	}
	return false
}
Potential premature monitor termination in BackingImageMonitorRun
In BackingImageMonitorRun, once continuousFailureCount reaches engineapi.MaxMonitorRetryCount, the monitor stops itself. However, continuousFailureCount is never reset on a successful receive, so unrelated transient errors accumulate over the monitor's lifetime. Consider resetting continuousFailureCount after each successful receive to prevent unnecessary termination of the monitor.
Apply this diff to reset continuousFailureCount on success:
--- a/controller/instance_manager_controller.go
+++ b/controller/instance_manager_controller.go
@@ func (m *InstanceManagerMonitor) BackingImageMonitorRun() {
 			_, err = notifier.Recv()
 			if err != nil {
 				m.logger.WithError(err).Error("Failed to receive next item in spdk backing image watch")
 				continuousFailureCount++
 				time.Sleep(engineapi.MinPollCount * engineapi.PollInterval)
 			} else {
+				continuousFailureCount = 0
 				m.lock.Lock()
 				m.updateNotification = true
 				m.lock.Unlock()
 			}
Committable suggestion skipped: line range outside the PR's diff.
		return errors.Wrapf(err, "failed to prepare the first v2 backing image")
	}

	if bic.syncV2StatusWithInstanceManager(bi); err != nil {
Fix error handling: incorrect use of error assignment in if statement
At line 362, the error return value of bic.syncV2StatusWithInstanceManager(bi) is not properly assigned or checked. The current code:
if bic.syncV2StatusWithInstanceManager(bi); err != nil {
return errors.Wrapf(err, "failed to sync v2 backing image status from instance manager")
}
This does not capture the error from syncV2StatusWithInstanceManager; the err being tested is whatever value was already in scope. Assign the error to a variable within the if statement:
if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
return errors.Wrapf(err, "failed to sync v2 backing image status from instance manager")
}
🧰 Tools
🪛 golangci-lint (1.62.2)
362-362: Error return value of bic.syncV2StatusWithInstanceManager
is not checked
(errcheck)
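The difference between the two forms can be demonstrated outside Longhorn with a toy syncStatus (the function names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

func syncStatus() error { return errors.New("sync failed") }

// buggyCheck mirrors the flagged pattern: syncStatus() runs, but its
// error is discarded; the stale outer err (nil here) is what gets tested.
func buggyCheck() bool {
	var err error
	if syncStatus(); err != nil {
		return true // never reached
	}
	return false
}

// correctCheck scopes the error to the if statement and actually tests it.
func correctCheck() bool {
	if err := syncStatus(); err != nil {
		return true
	}
	return false
}

func main() {
	fmt.Println(buggyCheck(), correctCheck()) // → false true
}
```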
func (bic *BackingImageController) cleanupAllV2BackingImageCopies(bi *longhorn.BackingImage) (cleaned bool, err error) {
	log := getLoggerForBackingImage(bic.logger, bi)

	if bic.syncV2StatusWithInstanceManager(bi); err != nil {
Fix error handling: incorrect use of error assignment in if statement
Similar to the issue at line 362, the error return value of bic.syncV2StatusWithInstanceManager(bi) at line 696 is not properly assigned or checked. The code should be corrected as follows:
if err := bic.syncV2StatusWithInstanceManager(bi); err != nil {
return errors.Wrapf(err, "failed to sync v2 backing image status from instance manager")
}
🧰 Tools
🪛 golangci-lint (1.62.2)
696-696: Error return value of bic.syncV2StatusWithInstanceManager
is not checked
(errcheck)
		readyV2CopyDiskUUIDs = append(readyV2CopyDiskUUIDs, v2DiskUUID)
	}
	if backingImageInProgress(biStatus.State) {
		log.Infof("There is one v2 backing image copy in progress, prepare the backing image one at a time", bi.Name)
Correct improper use of Infof without formatting directives
At line 758, the log.Infof call includes arguments, but the format string has no formatting directives:
log.Infof("There is one v2 backing image copy in progress, prepare the backing image one at a time", bi.Name)
Since there is no format specifier in the string, the argument bi.Name will not be used. If you intend to include bi.Name in the log message, add a formatting directive:
log.Infof("There is one v2 backing image copy in progress for backing image %v, prepare the backing image one at a time", bi.Name)
If bi.Name is not needed in the message, remove the extra argument and use Info instead:
log.Info("There is one v2 backing image copy in progress, prepare the backing image one at a time")
🧰 Tools
🪛 golangci-lint (1.62.2)
758-758: printf: (*github.com/sirupsen/logrus.Entry).Infof call has arguments but no formatting directives
(govet)
	engineClientProxy, err := engineapi.NewEngineClientProxy(instanceManager, log, bic.proxyConnCounter)
	if err != nil {
		return errors.Wrapf(err, "failed to get the engine client proxy for instance manager %v", instanceManager.Name)
	}
	defer engineClientProxy.Close()
🛠️ Refactor suggestion
Refactor repetitive code for creating engine client proxies into a helper function
There are multiple instances where the code for retrieving the instance manager and creating an engine client proxy is repeated. For example:
instanceManager, err := bic.ds.GetRunningInstanceManagerByNodeRO(node.Name, longhorn.DataEngineTypeV2)
if err != nil {
return errors.Wrapf(err, "failed to get the default instance manager for node %v", node.Name)
}
engineClientProxy, err := engineapi.NewEngineClientProxy(instanceManager, log, bic.proxyConnCounter)
if err != nil {
return errors.Wrapf(err, "failed to get the engine client proxy for instance manager %v", instanceManager.Name)
}
defer engineClientProxy.Close()
Consider refactoring this pattern into a helper function to reduce code duplication and improve maintainability. For example:
func (bic *BackingImageController) getEngineClientProxy(nodeName string, dataEngineType longhorn.DataEngineType, log logrus.FieldLogger) (*engineapi.EngineClientProxy, error) {
instanceManager, err := bic.ds.GetRunningInstanceManagerByNodeRO(nodeName, dataEngineType)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the default instance manager for node %v", nodeName)
}
engineClientProxy, err := engineapi.NewEngineClientProxy(instanceManager, log, bic.proxyConnCounter)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the engine client proxy for instance manager %v", instanceManager.Name)
}
return engineClientProxy, nil
}
You can then simplify the code in several places:
engineClientProxy, err := bic.getEngineClientProxy(node.Name, longhorn.DataEngineTypeV2, log)
if err != nil {
return err
}
defer engineClientProxy.Close()
// Use engineClientProxy...
Also applies to: 503-507, 639-642, 723-726, 791-795
This pull request is now in conflict. Could you fix it @ChanYiLin? 🙏 |
ref: longhorn/longhorn#6341