Fix JENKINS-56740 - AWS CLI Option for artifacts over 5Gb with S3 Storage Class select option (STANDARD, STANDARD_IA). #467

Open: wants to merge 6 commits into base: master
69 changes: 68 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -7,6 +7,15 @@ Artifact Manager.
Artifact manager implementation for Amazon S3, currently using the jClouds library.
[wiki](https://wiki.jenkins.io/display/JENKINS/Artifact+Manager+S3+Plugin)

The plugin's artifact upload method depends on the Jenkins setting at
Jenkins > Settings > AWS, checkbox 'Use AWS CLI for files upload to S3'.
To upload artifacts to Amazon S3, the plugin uses either the [AWS API](https://docs.aws.amazon.com/apigateway/)
(default) or the [AWS CLI](https://aws.amazon.com/cli/).
The AWS API is the default method, but it fails to upload artifacts bigger than 5 GB
(see [JENKINS-56740](https://issues.jenkins.io/browse/JENKINS-56740)).
The AWS CLI supports uploading artifacts bigger than 5 GB.
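The CLI mode avoids the limit because `aws s3 cp` automatically performs multipart uploads for large files, whereas a single S3 PUT request is capped at 5 GB. As a rough sketch (the bucket name and paths below are placeholders, not the plugin's real layout; the flags mirror those in this PR's `JCloudsArtifactManager` change), the command the plugin shells out to looks like:

```shell
# Placeholder artifact and destination; the real values come from the build
# and the configured bucket/prefix.
ARTIFACT="target/artifact.tar.gz"
DEST="s3://my-jenkins-bucket/prefix/artifacts/artifact.tar.gz"
echo aws s3 cp --quiet --no-guess-mime-type "$ARTIFACT" "$DEST" --storage-class STANDARD
```

For files above the multipart threshold, the CLI splits the upload automatically; no extra flags are needed.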


# Prerequisites

First of all, you will need an Amazon account; this Amazon account should have permissions over the S3 Bucket that
Expand Down Expand Up @@ -76,6 +85,8 @@ the same configuration page.
* Use Insecure HTTP: Use URLs with the http protocol instead of the https protocol.
* Use Transfer Acceleration: Use [S3 Transfer Acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html)
* Disable Session Token: When this option is enabled the plugin won't contact AWS for a session token and will just use the access key and secret key as configured by the Amazon Credentials plugin.
* Use [AWS CLI](https://aws.amazon.com/cli/) for files upload to S3: When this option is enabled, the plugin will use the [AWS CLI](https://aws.amazon.com/cli/) to upload artifacts to S3. It allows uploading files bigger than 5 GB (fix for [JENKINS-56740](https://issues.jenkins.io/browse/JENKINS-56740)).
* Storage Class: When the AWS CLI option is enabled, the plugin will upload artifacts with the selected [S3 Storage Class](https://aws.amazon.com/s3/storage-classes/). In the default AWS API mode, this selection is ignored and the plugin uploads artifacts with the STANDARD storage class.

![](images/bucket-settings.png)

Expand All @@ -98,14 +109,30 @@ If you're using a non AWS S3 service, you will need to use a custom endpoint, us

![](images/custom-s3-service-configuration.png)

For artifacts larger than 5 GB, enable the [AWS CLI](https://aws.amazon.com/cli/) option for uploads to S3.
Otherwise, the upload will fail in the default AWS API mode.

In addition, in [AWS CLI](https://aws.amazon.com/cli/) mode you may choose an
[S3 Storage Class](https://aws.amazon.com/s3/storage-classes/) for uploaded artifacts:
- **STANDARD** - [Amazon S3 Standard (S3 Standard)](https://aws.amazon.com/s3/storage-classes/#General_purpose) is the default Storage Class.
- **STANDARD_IA** - [Amazon S3 Standard-Infrequent Access (S3 Standard-IA)](https://aws.amazon.com/s3/storage-classes/#Infrequent_access)

Beware: only the _STANDARD_ S3 Storage Class is applied to artifacts uploaded in the default AWS API mode.
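The interaction between the two options can be summarized as a tiny decision rule (a sketch of the behavior described above, not the plugin's actual code):

```shell
# Returns the storage class that actually takes effect: the selection only
# applies in AWS CLI mode; API mode always uses STANDARD.
effective_storage_class() {
  local use_awscli="$1" selected="$2"
  if [ "$use_awscli" = "true" ]; then
    echo "$selected"
  else
    echo "STANDARD"
  fi
}
effective_storage_class true STANDARD_IA    # CLI mode honors the selection
effective_storage_class false STANDARD_IA   # API mode falls back to STANDARD
```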

![](images/upload-configugation.png)


For Google Cloud Storage:

* the AWS Credentials need to correspond to a Google Service Account HMAC key (Access ID / Secret) - See [this documentation](https://cloud.google.com/storage/docs/authentication/hmackeys)
* the custom endpoint is `storage.googleapis.com`
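For any custom endpoint (MinIO, Google Cloud Storage, etc.), the plugin prepends the protocol based on the "Use Insecure HTTP" option (cf. `getResolvedCustomEndpoint` in `S3BlobStoreConfig` later in this diff); the resolution logic sketched in shell:

```shell
# Mirrors getResolvedCustomEndpoint: http when "Use Insecure HTTP" is checked,
# https otherwise.
resolve_endpoint() {
  local use_http="$1" custom_endpoint="$2"
  if [ "$use_http" = "true" ]; then
    echo "http://$custom_endpoint"
  else
    echo "https://$custom_endpoint"
  fi
}
resolve_endpoint false storage.googleapis.com   # → https://storage.googleapis.com
resolve_endpoint true 127.0.0.1:9000            # → http://127.0.0.1:9000
```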

Finally, the "Create S3 Bucket from configuration" button allows you to create the bucket if it does not exist
and the AWS credentials configured have permission to create an S3 Bucket.

![](images/custom-s3-service-configuration.png)


# How to use Artifact Manager on S3 plugin

Artifact Manager on S3 plugin is transparently used by the Jenkins Artifact system, so as other Artifacts Managers,
Expand Down Expand Up @@ -191,6 +218,7 @@ In order to delete stashes on the S3 Bucket, you have to add the property
`-Dio.jenkins.plugins.artifact_manager_jclouds.s3.S3BlobStoreConfig.deleteStashes=true` to your Jenkins JVM properties;
if it is not set, the stash will not be deleted from S3 when the corresponding build is deleted.
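For example, assuming Jenkins is started directly with `java -jar` (adapt this for your service wrapper or init script), the property goes on the controller's JVM command line; a sketch:

```shell
# Hypothetical launch line; only the -D property itself comes from the README above.
DELETE_STASHES_OPT="-Dio.jenkins.plugins.artifact_manager_jclouds.s3.S3BlobStoreConfig.deleteStashes=true"
echo java "$DELETE_STASHES_OPT" -jar jenkins.war
```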


# AWS Credentials

Artifact Manager on S3 plugin needs AWS credentials in order to access the S3 Bucket; you can select one on the
Expand Down Expand Up @@ -255,6 +283,8 @@ mvn hpi:run
Alternatively, you can test against MinIO:

```bash
docker run --rm -e MINIO_ROOT_USER=dummy -e MINIO_ROOT_PASSWORD=dummydummy -p 127.0.0.1:9000:9000 minio/minio server /data
# WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated:
docker run --rm -e MINIO_ACCESS_KEY=dummy -e MINIO_SECRET_KEY=dummydummy -p 127.0.0.1:9000:9000 minio/minio server /data
```

Expand Down Expand Up @@ -772,6 +802,43 @@ java.lang.NullPointerException
at java.lang.Thread.run(Thread.java:748)
```


# Build Plugin Package
## Prerequisites
* RequireMavenVersion: Maven 3.8.1 or newer is required, so that dependencies are no longer downloaded via HTTP (HTTPS is used instead)

## Build Plugin Package
In order to build the plugin, run the following command in the plugin source code folder:
```
mvn clean package
```
or
```
mvn clean package -Dchangelist=desired-version
```
or
```
mvn clean package -Dchangelist=$(git tag -l --sort=creatordate | tail -n 1).patch2.0.$(date +%Y%m%d-%H%M%S)
```
After any changes, it is possible to run just:
```
mvn hpi:hpi
```
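The `-Dchangelist` value is an arbitrary version string that replaces the `999999-SNAPSHOT` placeholder in pom.xml (see the `<changelist>` property in this diff); for instance, a timestamped version can be precomputed without git tags (the `1.0` prefix here is an arbitrary assumption, unlike the tag-based example above):

```shell
# Build a version string of the same shape as the tag-based example,
# but derived only from the current timestamp.
CHANGELIST="1.0.patch2.0.$(date +%Y%m%d-%H%M%S)"
echo mvn clean package -Dchangelist="$CHANGELIST"
```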
### Known Build Issues
#### Failed Build Target
On a clean project, the build command `mvn package` is required before `mvn hpi:hpi`.
Otherwise, `mvn hpi:hpi` causes the following error to appear despite the absence
of a <description> in pom.xml and the existence of the file src/main/resources/index.jelly:
```
[ERROR] Failed to execute goal org.jenkins-ci.tools:maven-hpi-plugin:3.32:hpi (default-cli) on project artifact-manager-s3:
Missing target/classes/index.jelly. Delete any <description> from pom.xml and create src/main/resources/index.jelly:
[ERROR] <?jelly escape-by-default='true'?>
[ERROR] <div>
[ERROR] The description here…
[ERROR] </div>
```


# Changelog

## 1.7 and newer
Expand Down
Binary file modified images/bucket-settings.png
Binary file modified images/custom-s3-service-configuration.png
Binary file added images/upload-configugation.png
2 changes: 2 additions & 0 deletions pom.xml
Original file line number Diff line number Diff line change
Expand Up @@ -16,9 +16,11 @@
<properties>
<changelist>999999-SNAPSHOT</changelist>
<jenkins.version>2.443</jenkins.version>
<patch.version>2.2</patch.version>
<useBeta>true</useBeta>
<spotbugs.effort>Max</spotbugs.effort>
<spotbugs.threshold>Low</spotbugs.threshold>
<!-- <maven.test.skip>true</maven.test.skip>-->
</properties>

<name>Artifact Manager on S3 plugin</name>
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -27,6 +27,7 @@
import hudson.AbortException;
import hudson.EnvVars;
import hudson.FilePath;
import hudson.Functions;
import hudson.Launcher;
import hudson.Util;
import hudson.model.BuildListener;
Expand All @@ -37,8 +38,8 @@
import hudson.util.DirScanner;
import hudson.util.io.ArchiverFactory;
import edu.umd.cs.findbugs.annotations.NonNull;
import hudson.Functions;
import io.jenkins.plugins.artifact_manager_jclouds.BlobStoreProvider.HttpMethod;
import io.jenkins.plugins.artifact_manager_jclouds.s3.S3BlobStoreConfig;
import io.jenkins.plugins.httpclient.RobustHTTPClient;
import java.io.File;
import java.io.IOException;
Expand Down Expand Up @@ -121,23 +122,70 @@
public void archive(FilePath workspace, Launcher launcher, BuildListener listener, Map<String, String> artifacts)
throws IOException, InterruptedException {
LOGGER.log(Level.FINE, "Archiving from {0}: {1}", new Object[] { workspace, artifacts });
Map<String, String> contentTypes = workspace.act(new ContentTypeGuesser(new ArrayList<>(artifacts.values()), listener));
LOGGER.fine(() -> "guessing content types: " + contentTypes);
Map<String, URL> artifactUrls = new HashMap<>();
BlobStore blobStore = getContext().getBlobStore();

// Map artifacts to urls for upload
for (Map.Entry<String, String> entry : artifacts.entrySet()) {
String path = "artifacts/" + entry.getKey();
String blobPath = getBlobPath(path);
Blob blob = blobStore.blobBuilder(blobPath).build();
blob.getMetadata().setContainer(provider.getContainer());
blob.getMetadata().getContentMetadata().setContentType(contentTypes.get(entry.getValue()));
artifactUrls.put(entry.getValue(), provider.toExternalURL(blob, HttpMethod.PUT));
// Choose between AWS API and AWS CLI upload mode according to the 'Use AWS CLI' config option
S3BlobStoreConfig config = S3BlobStoreConfig.get();
String customStorageClass = config.getCustomStorageClass();
listener.getLogger().printf("AWS Storage Class selected option: %s%n", customStorageClass);
int artifactUrlsSize = 0;
if (config.getUseAWSCLI()){

// ci.jenkins.io code coverage: line 131 is only partially covered (one branch missing)
listener.getLogger().printf("AWS option selected: Use AWS CLI...%n");
LOGGER.fine(() -> "ignore guessing content types");
Map<String, String> artifactUrls = new HashMap<>();

// Map artifacts to urls for upload
for (Map.Entry<String, String> entry : artifacts.entrySet()) {
String path = "artifacts/" + entry.getKey();
String blobPath = getBlobPath(path);
String remotePath = "s3://" + provider.getContainer() + "/" + blobPath;
listener.getLogger().printf("Copy %s to %s%n", entry.getValue(), remotePath);

// Apply customStorageClass according to the Storage Class option selected (STANDARD or STANDARD_IA)
String[] cmd = {"aws",
"s3",
"cp",
"--quiet",
"--no-guess-mime-type",
entry.getValue(), remotePath,
"--storage-class", customStorageClass}; // (STANDARD or STANDARD_IA)
int cmdResult = launcher.launch(cmd, new String[0], null, listener.getLogger(), workspace).join();
if (cmdResult == 0) {
artifactUrls.put(entry.getValue(), remotePath);
}
else {
listener.getLogger().printf("Copy FAILED! exit code %d%n", cmdResult);
}
}
artifactUrlsSize = artifactUrls.size();
int failedUploads = artifacts.size() - artifactUrlsSize;
if ( failedUploads > 0 ) {
String msg = String.format("FAILED AWS S3 CLI upload for %d artifact(s) of %d%n", failedUploads, artifacts.size());
listener.getLogger().print(msg);
throw new InterruptedException(msg);
}
} else {

// ci.jenkins.io code coverage: lines 132-166 are not covered by tests
listener.getLogger().printf("AWS option selected: Use AWS API...%n");
Map<String, String> contentTypes = workspace.act(new ContentTypeGuesser(new ArrayList<>(artifacts.values()), listener));
LOGGER.fine(() -> "guessing content types: " + contentTypes);
Map<String, URL> artifactUrls = new HashMap<>();
BlobStore blobStore = getContext().getBlobStore();

// Map artifacts to urls for upload
for (Map.Entry<String, String> entry : artifacts.entrySet()) {
String path = "artifacts/" + entry.getKey();
String blobPath = getBlobPath(path);
Blob blob = blobStore.blobBuilder(blobPath).build();
blob.getMetadata().setContainer(provider.getContainer());
blob.getMetadata().getContentMetadata().setContentType(contentTypes.get(entry.getValue()));
artifactUrls.put(entry.getValue(), provider.toExternalURL(blob, HttpMethod.PUT));
}

workspace.act(new UploadToBlobStorage(artifactUrls, contentTypes, listener));
artifactUrlsSize = artifactUrls.size();
}

workspace.act(new UploadToBlobStorage(artifactUrls, contentTypes, listener));
listener.getLogger().printf("Uploaded %s artifact(s) to %s%n", artifactUrls.size(), provider.toURI(provider.getContainer(), getBlobPath("artifacts/")));
// Analyse and log upload results
listener.getLogger().printf("Uploaded %d artifact(s) to %s%n", artifactUrlsSize, provider.toURI(provider.getContainer(), getBlobPath("artifacts/")));
}

private static class ContentTypeGuesser extends MasterToSlaveFileCallable<Map<String, String>> {
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -27,6 +27,8 @@
import java.io.IOException;
import java.util.function.Supplier;
import java.util.regex.Pattern;
import java.util.Collection;
import java.util.ArrayList;

import org.apache.commons.lang.StringUtils;
import org.kohsuke.stapler.DataBoundSetter;
Expand Down Expand Up @@ -95,6 +97,10 @@

private boolean useTransferAcceleration;

private boolean useAWSCLI;

private String customStorageClass;

private boolean disableSessionToken;

private String customEndpoint;
Expand All @@ -115,7 +121,7 @@
private transient S3BlobStoreConfig config;

S3BlobStoreTester(String container, String prefix, boolean useHttp,
boolean useTransferAcceleration, boolean usePathStyleUrl,
boolean useTransferAcceleration, boolean useAWSCLI, String customStorageClass, boolean usePathStyleUrl,
boolean disableSessionToken, String customEndpoint,
String customSigningRegion) {
config = new S3BlobStoreConfig();
Expand All @@ -125,6 +131,8 @@
config.setCustomSigningRegion(customSigningRegion);
config.setUseHttp(useHttp);
config.setUseTransferAcceleration(useTransferAcceleration);
config.setUseAWSCLI(useAWSCLI);
config.setCustomStorageClass(customStorageClass);

// ci.jenkins.io code coverage: lines 134-135 are not covered by tests
config.setUsePathStyleUrl(usePathStyleUrl);
config.setDisableSessionToken(disableSessionToken);
}
Expand Down Expand Up @@ -214,220 +222,292 @@
save();
}

public boolean getUseAWSCLI() {
return useAWSCLI;
}

@DataBoundSetter
public void setUseAWSCLI(boolean useAWSCLI){
this.useAWSCLI = useAWSCLI;
save();
}

// ci.jenkins.io code coverage: lines 231-233 are not covered by tests

public String getCustomStorageClass() {
return customStorageClass;
}

@DataBoundSetter
public void setCustomStorageClass(String customStorageClass){
checkValue(doCheckCustomStorageClass(customStorageClass));
this.customStorageClass = customStorageClass;
save();
}

public boolean getDisableSessionToken() {
return disableSessionToken;
}

@DataBoundSetter
public void setDisableSessionToken(boolean disableSessionToken){
this.disableSessionToken = disableSessionToken;
save();
}

public String getCustomEndpoint() {
return customEndpoint;
}

@DataBoundSetter
public void setCustomEndpoint(String customEndpoint){
checkValue(doCheckCustomEndpoint(customEndpoint));
this.customEndpoint = customEndpoint;
save();
}

public String getResolvedCustomEndpoint() {
if(StringUtils.isNotBlank(customEndpoint)) {
String protocol;
if(getUseHttp()) {
protocol = "http";
} else {
protocol = "https";
}
return protocol + "://" + customEndpoint;
}
return null;
}

public String getCustomSigningRegion() {
return customSigningRegion;
}

@DataBoundSetter
public void setCustomSigningRegion(String customSigningRegion){
this.customSigningRegion = customSigningRegion;
checkValue(doCheckCustomSigningRegion(this.customSigningRegion));
save();
}

@NonNull
@Override
public String getDisplayName() {
return "Artifact Manager Amazon S3 Bucket";
}

@NonNull
public static S3BlobStoreConfig get() {
return ExtensionList.lookupSingleton(S3BlobStoreConfig.class);
}

@VisibleForTesting
static Supplier<AmazonS3ClientBuilder> clientBuilder = AmazonS3ClientBuilder::standard;

/**
*
* @return an AmazonS3ClientBuilder using the region or not, it depends if a region is configured or not.
*/
AmazonS3ClientBuilder getAmazonS3ClientBuilder() {
AmazonS3ClientBuilder ret = clientBuilder.get();

if (StringUtils.isNotBlank(customEndpoint)) {
String resolvedCustomSigningRegion = customSigningRegion;
if (StringUtils.isBlank(resolvedCustomSigningRegion)) {
resolvedCustomSigningRegion = "us-east-1";
}
ret = ret.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(getResolvedCustomEndpoint(), resolvedCustomSigningRegion));
} else if (StringUtils.isNotBlank(CredentialsAwsGlobalConfiguration.get().getRegion())) {
ret = ret.withRegion(CredentialsAwsGlobalConfiguration.get().getRegion());
} else {
ret = ret.withForceGlobalBucketAccessEnabled(true);
}
ret = ret.withAccelerateModeEnabled(useTransferAcceleration);

// TODO the client would automatically use path-style URLs under certain conditions; is it really necessary to override?
ret = ret.withPathStyleAccessEnabled(getUsePathStyleUrl());

return ret;
}

@VisibleForTesting
public AmazonS3ClientBuilder getAmazonS3ClientBuilderWithCredentials() throws IOException {
return getAmazonS3ClientBuilderWithCredentials(getDisableSessionToken());
}

private AmazonS3ClientBuilder getAmazonS3ClientBuilderWithCredentials(boolean disableSessionToken) throws IOException {
AmazonS3ClientBuilder builder = getAmazonS3ClientBuilder();
if (disableSessionToken) {
builder = builder.withCredentials(CredentialsAwsGlobalConfiguration.get().getCredentials());
} else {
AWSStaticCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider(
CredentialsAwsGlobalConfiguration.get().sessionCredentials(builder));
builder = builder.withCredentials(credentialsProvider);
}
return builder;
}

public FormValidation doCheckContainer(@QueryParameter String container){
FormValidation ret = FormValidation.ok();
if (StringUtils.isBlank(container)){
ret = FormValidation.warning("The container name cannot be empty");
} else if (!bucketPattern.matcher(container).matches()){
ret = FormValidation.error("The S3 Bucket name does not match S3 bucket rules");
}
return ret;
}

public FormValidation doCheckPrefix(@QueryParameter String prefix){
FormValidation ret;
if (StringUtils.isBlank(prefix)) {
ret = FormValidation.ok("Artifacts will be stored in the root folder of the S3 Bucket.");
} else if (prefix.endsWith("/")) {
ret = FormValidation.ok();
} else {
ret = FormValidation.error("A prefix must end with a slash.");
}
return ret;
}

public FormValidation doCheckCustomSigningRegion(@QueryParameter String customSigningRegion) {
FormValidation ret;
if (StringUtils.isBlank(customSigningRegion) && StringUtils.isNotBlank(customEndpoint)) {
ret = FormValidation.ok("'us-east-1' will be used when a custom endpoint is configured and custom signing region is blank.");
} else {
ret = FormValidation.ok();
}
return ret;
}

public FormValidation doCheckCustomEndpoint(@QueryParameter String customEndpoint) {
FormValidation ret = FormValidation.ok();
if (!StringUtils.isBlank(customEndpoint) && !endPointPattern.matcher(customEndpoint).matches()) {
ret = FormValidation.error("Custom Endpoint may not be valid.");
}
return ret;
}

public FormValidation doCheckStorageClass(@QueryParameter String customStorageClass,
@QueryParameter boolean useAWSCLI) {
String values = "INFO: Upload mode: " + (useAWSCLI?"AWS CLI":"AWS API")
+ ", S3 Storage Class: " + customStorageClass;
FormValidation ret = FormValidation.ok(values);
if (StringUtils.isNotBlank(customStorageClass)) {
if (StringUtils.equals(
StringUtils.upperCase(customStorageClass),
StringUtils.upperCase("STANDARD_IA")
)) {
if (!useAWSCLI) {
ret = FormValidation.warning("WARNING: " + customStorageClass
+ " is not a supported Storage Class in AWS API mode! "
+ "It is supported only in AWS CLI mode. Check 'Use AWS CLI for files upload to S3'. "
+ "CAUTION: the STANDARD Storage Class will be used regardless of the option selected!"
);
}
}
}
return ret;
}

public FormValidation doCheckCustomStorageClass(@QueryParameter String customStorageClass) {
return this.doCheckStorageClass(customStorageClass, this.useAWSCLI);
}


/**
* create an S3 Bucket.
* @param name name of the S3 Bucket.
* @return the Bucket created.
* @throws IOException in case of error obtaining the credentials; other kinds of errors surface as the
* runtime exceptions thrown by the createBucket method.
*/
public Bucket createS3Bucket(String name) throws IOException {
return createS3Bucket(name, getDisableSessionToken());
}

private Bucket createS3Bucket(String name, boolean disableSessionToken) throws IOException {
AmazonS3ClientBuilder builder = getAmazonS3ClientBuilderWithCredentials(disableSessionToken);
//Accelerated mode must be off in order to apply it to a bucket
AmazonS3 client = builder.withAccelerateModeEnabled(false).build();
Bucket bucket = client.createBucket(name);
if(useTransferAcceleration) {
client.setBucketAccelerateConfiguration(new SetBucketAccelerateConfigurationRequest(name,
new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));
}
return bucket;
}

@RequirePOST
public FormValidation doCreateS3Bucket(@QueryParameter String container, @QueryParameter boolean disableSessionToken) {
Jenkins.get().checkPermission(Jenkins.ADMINISTER);
FormValidation ret = FormValidation.ok("success");
try {
createS3Bucket(container, disableSessionToken);
} catch (Throwable t){
String msg = processExceptionMessage(t);
ret = FormValidation.error(StringUtils.abbreviate(msg, 200));
}
return ret;
}

void checkGetBucketLocation(String container, boolean disableSessionToken) throws IOException {
AmazonS3ClientBuilder builder = getAmazonS3ClientBuilderWithCredentials(disableSessionToken);
AmazonS3 client = builder.build();
client.getBucketLocation(container);
}

@RequirePOST
public FormValidation doValidateS3BucketConfig(
@QueryParameter String container,
@QueryParameter String prefix,
@QueryParameter boolean useHttp,
@QueryParameter boolean useTransferAcceleration,
@QueryParameter boolean useAWSCLI,
@QueryParameter String customStorageClass,
@QueryParameter boolean usePathStyleUrl,
@QueryParameter boolean disableSessionToken,
@QueryParameter String customEndpoint,
@QueryParameter String customSigningRegion) {
Jenkins.get().checkPermission(Jenkins.ADMINISTER);
FormValidation ret = FormValidation.ok("success");

Collection<FormValidation> validations = new ArrayList<FormValidation>();


S3BlobStore provider = new S3BlobStoreTester(container, prefix,
useHttp, useTransferAcceleration,usePathStyleUrl,
useHttp, useTransferAcceleration, useAWSCLI, customStorageClass, usePathStyleUrl,
disableSessionToken, customEndpoint, customSigningRegion);


String step = "Clouds Virtual File Validation ";
try {
JCloudsVirtualFile jc = new JCloudsVirtualFile(provider, container, prefix.replaceFirst("/$", ""));
jc.list();
validations.add(FormValidation.ok("OK: " + step));
} catch (Throwable t){
String msg = processExceptionMessage(t);
ret = FormValidation.error(t, StringUtils.abbreviate(msg, 200));
validations.add(FormValidation.error(t, "FAILED: " + step
+ StringUtils.abbreviate(processExceptionMessage(t), 200)));
}

step = "Get Bucket Location Validation ";
try {
provider.getConfiguration().checkGetBucketLocation(container, disableSessionToken);
validations.add(FormValidation.ok("OK: " + step));
} catch (Throwable t){
ret = FormValidation.warning(t, "GetBucketLocation failed");
validations.add(FormValidation.warning(t, "FAILED: " + step
+ StringUtils.abbreviate(processExceptionMessage(t), 200)));
}
return ret;
}

step = "Storage Class Validation ";
// " (" + customStorageClass + " in AWS " + (useAWSCLI?"CLI":"API") + " upload mode) ";
try {
FormValidation fv = provider.getConfiguration().doCheckStorageClass(customStorageClass,useAWSCLI);
validations.add(FormValidation.ok("OK: " + step));
validations.add(fv);
} catch (Throwable t){
validations.add(FormValidation.warning(t, "FAILED: " + step
+ StringUtils.abbreviate(processExceptionMessage(t), 200)));
}

if ( FormValidation.aggregate(validations).kind != FormValidation.Kind.ERROR) {
validations.add(FormValidation.ok("\nSUCCESS."));
}

return FormValidation.aggregate(validations);

// ci.jenkins.io code coverage: lines 241-511 are not covered by tests
}
}
Loading