Add feature to ignore Iceberg tables #185

Merged: 67 commits, Nov 28, 2024
Changes from 1 commit
8a00255
Updated to BK-core
Nov 20, 2024
6baeee4
Updated to path-cleanup
Nov 20, 2024
7090dc4
Update PagingCleanupServiceTest.java
Nov 20, 2024
0f99aa6
cleanup
javsanbel2 Nov 20, 2024
9bbd52d
Merge branch 'feature/prevent-actions-on-iceberg-tables' of github.co…
javsanbel2 Nov 20, 2024
1e339c9
cleanup 2
javsanbel2 Nov 20, 2024
62c68d5
main business logic
javsanbel2 Nov 20, 2024
b6c718f
adding exception
javsanbel2 Nov 20, 2024
c5ad343
Add DB & table name to exception message
Nov 20, 2024
fd454a8
Update IcebergValidator.java
Nov 20, 2024
e41ac6c
Create IcebergValidatorTest.java
Nov 20, 2024
a2939d5
Update HiveMetadataCleanerTest.java
Nov 20, 2024
223e086
Updating and adding S3PathCleaner tests
Nov 20, 2024
1f6e360
Adding IcebergValidator to constructors
Nov 20, 2024
1905b0a
Updating Junit imports
Nov 20, 2024
efca2c9
Update SchedulerApiaryTest.java
Nov 20, 2024
d16bc0a
Update CommonBeans
Nov 20, 2024
9bf7248
clean-up add comment
Nov 21, 2024
4c45de2
Remove extra deletion
Nov 21, 2024
61c2f88
adding beans
javsanbel2 Nov 21, 2024
4e0b82b
fix tests
Nov 21, 2024
e548873
fixing it tests for metadata cleanup
javsanbel2 Nov 21, 2024
8b1ca85
fix path cleanup
javsanbel2 Nov 21, 2024
631502b
fix main problem with tests
javsanbel2 Nov 21, 2024
c1a7c96
Fix BeekeeperDryRunPathCleanupIntegrationTest
Nov 21, 2024
90c2871
revert changes to fix BeekeeperExpiredMetadataSchedulerApiaryIntegrat…
Nov 21, 2024
45fcc26
Added missing properties to fix BeekeeperUnreferencedPathSchedulerApi…
Nov 21, 2024
06914d1
Add integration test for metadatacleanup
Nov 22, 2024
a09e9f1
Update metadataHandler to catch beekeeperException
Nov 24, 2024
8c1ce38
cleanup
Nov 24, 2024
33a22c1
Update path-cleanup housekeeping status
Nov 25, 2024
a4b896a
cleanup
Nov 25, 2024
1be81b8
cleanup
Nov 25, 2024
66ad261
cleanup
Nov 25, 2024
0948aea
Update beekeeper to runtime exception
Nov 25, 2024
ed8745f
bump versions for testing
Nov 25, 2024
1047c57
Add Hadoop dependencies
Nov 25, 2024
eb13799
Update pom.xml
Nov 25, 2024
812565e
Revert changes to beekeeper-path
Nov 26, 2024
26b404c
revert more path-cleanup
Nov 26, 2024
101ab88
Revert path-cleanup
Nov 26, 2024
c646650
cleanup
Nov 26, 2024
e71a5ae
Added logging for table params
Nov 26, 2024
eea8403
add logging
Nov 26, 2024
95e6c64
remove logs to check filters
Nov 26, 2024
804be2f
cleaning up
javsanbel2 Nov 27, 2024
ae26519
fix validator tests
javsanbel2 Nov 27, 2024
c2e0b3f
clean up it tests
javsanbel2 Nov 27, 2024
07174b2
change expired metadata handler
javsanbel2 Nov 27, 2024
58c6e65
fix leninet
javsanbel2 Nov 27, 2024
a32e9d0
Add IcebergTableListenerEventFilter
Nov 27, 2024
9a93bd7
add event
javsanbel2 Nov 27, 2024
db1352a
Add integration test for scheduler
Nov 27, 2024
f300e60
Revert versions used for testing & changelog
Nov 27, 2024
bacd477
Revert testing version
Nov 27, 2024
04bb806
Update beekeeper-scheduler-apiary/src/main/java/com/expediagroup/beek…
HamzaJugon Nov 27, 2024
f94ba5d
Updating asserts and remove unused logging
Nov 27, 2024
1206eb8
Merge branch 'feature/prevent-actions-on-iceberg-tables' of https://g…
Nov 27, 2024
5517c7f
Implement IsIcebergTablePredicate
Nov 27, 2024
e66982e
revert changes to schedulerApiary
Nov 27, 2024
5e67a64
Update SchedulerApiary.java
Nov 27, 2024
a65f066
Updating logging so we only see stack trace on debug level
Nov 27, 2024
fd6bd88
Update logging in ExpiredMetadataHandler
Nov 27, 2024
026e769
Updating for minor comments
Nov 27, 2024
070b34d
Update logging
Nov 28, 2024
1418f1b
Update CHANGELOG.md
HamzaJugon Nov 28, 2024
b80e71d
Update CHANGELOG.md
HamzaJugon Nov 28, 2024
cleaning up
javsanbel2 committed Nov 27, 2024
commit 804be2f17021f1cdd113c91683853aa765a92059
Original file line number Diff line number Diff line change
@@ -126,23 +126,6 @@ public Map<String, String> getTableProperties(String databaseName, String tableN
}
}

@Override
public String getOutputFormat(String databaseName, String tableName) {
String result = null;
try {
Table table = client.getTable(databaseName, tableName);
if (table.getSd() != null) {
result = table.getSd().getOutputFormat();
}
} catch (NoSuchObjectException e) {
log.warn("Table {}.{} does not exist", databaseName, tableName);
} catch (TException e) {
throw new BeekeeperException(
"Unexpected exception when getting output format for \"" + databaseName + "." + tableName + ".", e);
}
return result;
}

@Override
public void close() {
client.close();
@@ -27,6 +27,4 @@ public interface CleanerClient extends Closeable {
boolean tableExists(String databaseName, String tableName);

Map<String, String> getTableProperties(String databaseName, String tableName);

String getOutputFormat(String databaseName, String tableName);
}
@@ -49,9 +49,8 @@ public void throwExceptionIfIceberg(String databaseName, String tableName) {
Map<String, String> parameters = client.getTableProperties(databaseName, tableName);
String tableType = parameters.getOrDefault("table_type", "").toLowerCase();
String format = parameters.getOrDefault("format", "").toLowerCase();
String outputFormat = client.getOutputFormat(databaseName, tableName);
if (tableType.contains("iceberg") || format.contains("iceberg") || (outputFormat != null
&& outputFormat.toLowerCase().contains("iceberg"))) {
String metadataLocation = parameters.getOrDefault("metadata_location", "").toLowerCase();
if (tableType.contains("iceberg") || format.contains("iceberg") || !metadataLocation.isEmpty()) {
throw new BeekeeperIcebergException(
format("Iceberg table %s.%s is not currently supported in Beekeeper.", databaseName, tableName));
}

nit: Maybe using IllegalStateException would have been clearer.

Contributor Author: We are purposely using a custom exception to handle the exception in the main class later on.
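The new check relies only on the table-parameters map: `table_type`, `format`, or a non-empty `metadata_location` each signals an Iceberg table. A minimal standalone sketch of the same rule (the `IcebergCheck` class and the sample maps are illustrative, not code from this PR):

```java
import java.util.Map;

// Illustrative sketch (not PR code): the Iceberg detection rule applied
// to a Hive table-parameters map, as introduced in this commit.
public class IcebergCheck {

    static boolean isIceberg(Map<String, String> parameters) {
        String tableType = parameters.getOrDefault("table_type", "").toLowerCase();
        String format = parameters.getOrDefault("format", "").toLowerCase();
        String metadataLocation = parameters.getOrDefault("metadata_location", "").toLowerCase();
        // Any one of the three signals marks the table as Iceberg.
        return tableType.contains("iceberg") || format.contains("iceberg") || !metadataLocation.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isIceberg(Map.of("table_type", "ICEBERG")));                        // true
        System.out.println(isIceberg(Map.of("metadata_location", "s3://b/metadata/v1.json"))); // true
        System.out.println(isIceberg(Map.of("format", "parquet")));                            // false
    }
}
```

Dropping the `getOutputFormat` call means one fewer metastore round trip per table, since all three signals come from the already-fetched properties map.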
@@ -36,7 +36,6 @@
import java.util.Set;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
@@ -46,7 +45,6 @@
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
@@ -56,7 +54,6 @@
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.CreateQueueResult;
import com.amazonaws.services.sqs.model.PurgeQueueRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

@@ -67,72 +64,50 @@
import com.expediagroup.beekeeper.integration.model.AlterTableSqsMessage;
import com.expediagroup.beekeeper.integration.model.CreateTableSqsMessage;
import com.expediagroup.beekeeper.integration.utils.ContainerTestUtils;
import com.expediagroup.beekeeper.integration.utils.HiveTestUtils;
import com.expediagroup.beekeeper.scheduler.apiary.BeekeeperSchedulerApiary;

import com.hotels.beeju.extensions.ThriftHiveMetaStoreJUnitExtension;

@Testcontainers
public class BeekeeperExpiredMetadataSchedulerApiaryIntegrationTest extends BeekeeperIntegrationTestBase {

private static final int TIMEOUT = 30;
private static final String DRY_RUN_ENABLED_PROPERTY = "properties.dry-run-enabled";
private static final String APIARY_QUEUE_URL_PROPERTY = "properties.apiary.queue-url";
private static final String METASTORE_URI_PROPERTY = "properties.metastore-uri";

private static final String QUEUE = "apiary-receiver-queue";
private static final String SCHEDULED_EXPIRED_METRIC = "metadata-scheduled";
private static final String HEALTHCHECK_URI = "http://localhost:8080/actuator/health";
private static final String PROMETHEUS_URI = "http://localhost:8080/actuator/prometheus";

private static final String S3_ACCESS_KEY = "access";
private static final String S3_SECRET_KEY = "secret";

private static final String PARTITION_KEYS = "{ \"event_date\": \"date\", \"event_hour\": \"smallint\"}";
private static final String PARTITION_A_VALUES = "[ \"2020-01-01\", \"0\" ]";
private static final String PARTITION_B_VALUES = "[ \"2020-01-01\", \"1\" ]";
private static final String PARTITION_A_NAME = "event_date=2020-01-01/event_hour=0";
private static final String PARTITION_B_NAME = "event_date=2020-01-01/event_hour=1";
private static final String LOCATION_A = "s3://bucket/table1/partition";
private static final String LOCATION_B = "s3://bucket/table2/partition";
private static final String TABLE_PATH = "/tmp/bucket/" + DATABASE_NAME_VALUE + "/" + TABLE_NAME_VALUE + "/";

@Container
private static final LocalStackContainer SQS_CONTAINER = ContainerTestUtils.awsContainer(SQS);
private static AmazonSQS amazonSQS;
private static String queueUrl;

@RegisterExtension
public ThriftHiveMetaStoreJUnitExtension thriftHiveMetaStore = new ThriftHiveMetaStoreJUnitExtension(
DATABASE_NAME_VALUE);

private HiveTestUtils hiveTestUtils;
private HiveMetaStoreClient metastoreClient;

@BeforeAll
public static void init() {
System.setProperty(DRY_RUN_ENABLED_PROPERTY, "false");
amazonSQS = ContainerTestUtils.sqsClient(SQS_CONTAINER, AWS_REGION);
CreateQueueResult queue = amazonSQS.createQueue(QUEUE);
queueUrl = queue.getQueueUrl();
String queueUrl = ContainerTestUtils.queueUrl(SQS_CONTAINER, QUEUE);
System.setProperty(APIARY_QUEUE_URL_PROPERTY, queueUrl);

amazonSQS = ContainerTestUtils.sqsClient(SQS_CONTAINER, AWS_REGION);
amazonSQS.createQueue(QUEUE);
}

@AfterAll
public static void teardown() {
System.clearProperty(APIARY_QUEUE_URL_PROPERTY);
System.clearProperty(DRY_RUN_ENABLED_PROPERTY);

amazonSQS.shutdown();
}

@BeforeEach
public void setup() {
System.setProperty(METASTORE_URI_PROPERTY, thriftHiveMetaStore.getThriftConnectionUri());
metastoreClient = thriftHiveMetaStore.client();
hiveTestUtils = new HiveTestUtils(metastoreClient);

amazonSQS.purgeQueue(new PurgeQueueRequest(queueUrl));
amazonSQS.purgeQueue(new PurgeQueueRequest(ContainerTestUtils.queueUrl(SQS_CONTAINER, QUEUE)));
executorService.execute(() -> BeekeeperSchedulerApiary.main(new String[] {}));
await().atMost(Duration.ONE_MINUTE).until(BeekeeperSchedulerApiary::isRunning);
}
@@ -255,7 +230,7 @@ public void prometheus() {
}

private SendMessageRequest sendMessageRequest(String payload) {
return new SendMessageRequest(queueUrl, payload);
return new SendMessageRequest(ContainerTestUtils.queueUrl(SQS_CONTAINER, QUEUE), payload);
}

private void assertExpiredMetadata(HousekeepingMetadata actual, String expectedPath, String partitionName) {
@@ -45,7 +45,6 @@
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
@@ -67,14 +66,11 @@
import com.expediagroup.beekeeper.integration.utils.ContainerTestUtils;
import com.expediagroup.beekeeper.scheduler.apiary.BeekeeperSchedulerApiary;

import com.hotels.beeju.extensions.ThriftHiveMetaStoreJUnitExtension;

@Testcontainers
public class BeekeeperUnreferencedPathSchedulerApiaryIntegrationTest extends BeekeeperIntegrationTestBase {

private static final int TIMEOUT = 5;
private static final String APIARY_QUEUE_URL_PROPERTY = "properties.apiary.queue-url";
private static final String DRY_RUN_ENABLED_PROPERTY = "properties.dry-run-enabled";

private static final String QUEUE = "apiary-receiver-queue";
private static final String SCHEDULED_ORPHANED_METRIC = "paths-scheduled";
@@ -85,10 +81,6 @@ public class BeekeeperUnreferencedPathSchedulerApiaryIntegrationTest extends Bee
private static final LocalStackContainer SQS_CONTAINER = ContainerTestUtils.awsContainer(SQS);
private static AmazonSQS amazonSQS;

@RegisterExtension
public ThriftHiveMetaStoreJUnitExtension thriftHiveMetaStore = new ThriftHiveMetaStoreJUnitExtension(
DATABASE_NAME_VALUE);

@BeforeAll
public static void init() {
String queueUrl = ContainerTestUtils.queueUrl(SQS_CONTAINER, QUEUE);
@@ -101,17 +93,12 @@ public static void init() {
@AfterAll
public static void teardown() {
System.clearProperty(APIARY_QUEUE_URL_PROPERTY);
System.clearProperty("properties.metastore-uri");
System.clearProperty("properties.dry-run-enabled");

amazonSQS.shutdown();
}

@BeforeEach
public void setup() {
System.setProperty("properties.metastore-uri", thriftHiveMetaStore.getThriftConnectionUri());
System.setProperty("properties.dry-run-enabled", "false");

amazonSQS.purgeQueue(new PurgeQueueRequest(ContainerTestUtils.queueUrl(SQS_CONTAINER, QUEUE)));
executorService.execute(() -> BeekeeperSchedulerApiary.main(new String[] {}));
await().atMost(Duration.ONE_MINUTE).until(BeekeeperSchedulerApiary::isRunning);
@@ -121,9 +108,6 @@ public void setup() {
public void stop() throws InterruptedException {
BeekeeperSchedulerApiary.stop();
executorService.awaitTermination(5, TimeUnit.SECONDS);

System.clearProperty("properties.metastore-uri");
System.clearProperty("properties.dry-run-enabled");
}

@Test
@@ -173,7 +157,7 @@ public void unreferencedAlterPartitionEvent() throws SQLException, IOException,
public void unreferencedMultipleAlterPartitionEvent() throws IOException, SQLException, URISyntaxException {
List
.of(new AlterPartitionSqsMessage("s3://bucket/table/expiredTableLocation",
"s3://bucket/table/partitionLocation", "s3://bucket/table/unreferencedPartitionLocation", true, true),
new AlterPartitionSqsMessage("s3://bucket/table/expiredTableLocation2",
"s3://bucket/table/partitionLocation2", "s3://bucket/table/partitionLocation", true, true))
.forEach(msg -> amazonSQS.sendMessage(sendMessageRequest(msg.getFormattedString())));
28 changes: 0 additions & 28 deletions beekeeper-scheduler-apiary/pom.xml
@@ -11,31 +11,8 @@

<artifactId>beekeeper-scheduler-apiary</artifactId>

<properties>
<hadoop.version>2.8.1</hadoop.version>
<hive.version>2.3.7</hive.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<dependencies>

<!-- Hive -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-core</artifactId>
<version>${hadoop.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
</exclusion>
</exclusions>
</dependency>

<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sts</artifactId>
@@ -46,11 +23,6 @@
<artifactId>beekeeper-scheduler</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>com.expediagroup</groupId>
<artifactId>beekeeper-cleanup</artifactId>
<version>${project.version}</version>
</dependency>

<dependency>
<groupId>ch.qos.logback</groupId>
@@ -17,9 +17,7 @@

import java.util.EnumMap;
import java.util.List;
import java.util.function.Supplier;

import org.apache.hadoop.hive.conf.HiveConf;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.domain.EntityScan;
@@ -39,9 +37,6 @@
import com.expedia.apiary.extensions.receiver.common.messaging.MessageReader;
import com.expedia.apiary.extensions.receiver.sqs.messaging.SqsMessageReader;

import com.expediagroup.beekeeper.cleanup.hive.HiveClientFactory;
import com.expediagroup.beekeeper.cleanup.metadata.CleanerClientFactory;
import com.expediagroup.beekeeper.cleanup.validation.IcebergValidator;
import com.expediagroup.beekeeper.core.model.LifecycleEventType;
import com.expediagroup.beekeeper.scheduler.apiary.filter.EventTypeListenerEventFilter;
import com.expediagroup.beekeeper.scheduler.apiary.filter.ListenerEventFilter;
@@ -57,10 +52,6 @@
import com.expediagroup.beekeeper.scheduler.apiary.messaging.RetryingMessageReader;
import com.expediagroup.beekeeper.scheduler.service.SchedulerService;

import com.hotels.hcommon.hive.metastore.client.api.CloseableMetaStoreClient;
import com.hotels.hcommon.hive.metastore.client.closeable.CloseableMetaStoreClientFactory;
import com.hotels.hcommon.hive.metastore.client.supplier.HiveMetaStoreClientSupplier;

@Configuration
@ComponentScan(basePackages = { "com.expediagroup.beekeeper.core", "com.expediagroup.beekeeper.scheduler" })
@EntityScan(basePackages = { "com.expediagroup.beekeeper.core" })
@@ -148,35 +139,4 @@ public BeekeeperEventReader eventReader(

return new MessageReaderAdapter(messageReader, handlers);
}

@Bean
public HiveConf hiveConf(@Value("${properties.metastore-uri}") String metastoreUri) {
HiveConf conf = new HiveConf();
conf.setVar(HiveConf.ConfVars.METASTOREURIS, metastoreUri);
return conf;
}

@Bean
public CloseableMetaStoreClientFactory metaStoreClientFactory() {
return new CloseableMetaStoreClientFactory();
}

@Bean
Supplier<CloseableMetaStoreClient> metaStoreClientSupplier(
CloseableMetaStoreClientFactory metaStoreClientFactory, HiveConf hiveConf) {
String name = "beekeeper-scheduler-apiary";
return new HiveMetaStoreClientSupplier(metaStoreClientFactory, hiveConf, name);
}

@Bean(name = "hiveClientFactory")
public CleanerClientFactory clientFactory(
Supplier<CloseableMetaStoreClient> metaStoreClientSupplier,
@Value("${properties.dry-run-enabled}") boolean dryRunEnabled) {
return new HiveClientFactory(metaStoreClientSupplier, dryRunEnabled);
}

@Bean
public IcebergValidator icebergValidator(CleanerClientFactory clientFactory) {
return new IcebergValidator(clientFactory);
}
}
@@ -37,4 +37,3 @@ public boolean isFiltered(ListenerEvent listenerEvent, LifecycleEventType lifecy
return !Boolean.parseBoolean(tableParameters.get(lifecycleEventType.getTableParameterName()));
}
}

@@ -20,7 +20,6 @@
import java.io.IOException;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

import org.slf4j.Logger;
@@ -29,16 +28,12 @@
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import com.expediagroup.beekeeper.cleanup.validation.IcebergValidator;
import com.expediagroup.beekeeper.core.error.BeekeeperException;
import com.expediagroup.beekeeper.core.error.BeekeeperIcebergException;
import com.expediagroup.beekeeper.core.model.HousekeepingEntity;
import com.expediagroup.beekeeper.core.model.LifecycleEventType;
import com.expediagroup.beekeeper.scheduler.apiary.messaging.BeekeeperEventReader;
import com.expediagroup.beekeeper.scheduler.apiary.messaging.MessageReaderAdapter;
import com.expediagroup.beekeeper.scheduler.apiary.model.BeekeeperEvent;
import com.expediagroup.beekeeper.scheduler.service.SchedulerService;
import com.expedia.apiary.extensions.receiver.common.event.ListenerEvent;

@Component
public class SchedulerApiary {
@@ -47,17 +42,14 @@ public class SchedulerApiary {

private final BeekeeperEventReader beekeeperEventReader;
private final EnumMap<LifecycleEventType, SchedulerService> schedulerServiceMap;
private final IcebergValidator icebergValidator;

@Autowired
public SchedulerApiary(
BeekeeperEventReader beekeeperEventReader,
EnumMap<LifecycleEventType, SchedulerService> schedulerServiceMap,
IcebergValidator icebergValidator
EnumMap<LifecycleEventType, SchedulerService> schedulerServiceMap
) {
this.beekeeperEventReader = beekeeperEventReader;
this.schedulerServiceMap = schedulerServiceMap;
this.icebergValidator = icebergValidator;
}

@Transactional
@@ -69,13 +61,9 @@ public void scheduleBeekeeperEvent() {

for (HousekeepingEntity entity : housekeepingEntities) {
try {
icebergValidator.throwExceptionIfIceberg(entity.getDatabaseName(), entity.getTableName());

LifecycleEventType eventType = LifecycleEventType.valueOf(entity.getLifecycleType());
SchedulerService scheduler = schedulerServiceMap.get(eventType);
scheduler.scheduleForHousekeeping(entity);
} catch (BeekeeperIcebergException e) {
log.warn("Iceberg table are not supported in Beekeeper. Deleting message from queue", e);
} catch (Exception e) {
throw new BeekeeperException(format(
"Unable to schedule %s deletion for entity, this message will go back on the queue",
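The deleted lines show the routing pattern this commit relocates out of `SchedulerApiary`: the Iceberg-specific exception is swallowed so the message is deleted from the queue, while any other failure is rethrown so the message is redelivered. A hedged sketch of that pattern (class and method names here are stand-ins, not the PR's classes):

```java
// Illustrative sketch (not PR code) of the exception-based routing:
// an Iceberg-specific exception drops the message without retry;
// anything else is rethrown so the message goes back on the queue.
public class ExceptionRouting {

    static class IcebergTableException extends RuntimeException {
        IcebergTableException(String message) { super(message); }
    }

    /** Returns true if the message should be deleted from the queue. */
    static boolean handle(Runnable scheduleAction) {
        try {
            scheduleAction.run();
            return true;                 // scheduled successfully: delete message
        } catch (IcebergTableException e) {
            return true;                 // unsupported table: delete message, no retry
        } catch (Exception e) {
            // Rethrowing leaves the message on the queue for redelivery.
            throw new RuntimeException("message will go back on the queue", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> {}));                                          // true
        System.out.println(handle(() -> { throw new IcebergTableException("ice"); })); // true
    }
}
```

This is why the reviewer's suggested `IllegalStateException` was declined: a dedicated exception type is what lets the catch block distinguish "drop the message" from "retry the message".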
@@ -20,7 +20,6 @@
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.lenient;
import static org.mockito.Mockito.mock;
@@ -44,9 +43,7 @@

import com.expedia.apiary.extensions.receiver.common.messaging.MessageEvent;

import com.expediagroup.beekeeper.cleanup.validation.IcebergValidator;
import com.expediagroup.beekeeper.core.error.BeekeeperException;
import com.expediagroup.beekeeper.core.error.BeekeeperIcebergException;
import com.expediagroup.beekeeper.core.model.HousekeepingEntity;
import com.expediagroup.beekeeper.core.model.HousekeepingMetadata;
import com.expediagroup.beekeeper.core.model.HousekeepingPath;
@@ -65,7 +62,6 @@ public class SchedulerApiaryTest {
@Mock private BeekeeperEventReader beekeeperEventReader;
@Mock private HousekeepingPath path;
@Mock private HousekeepingMetadata table;
@Mock private IcebergValidator icebergValidator;

private SchedulerApiary scheduler;

@@ -74,15 +70,14 @@ public void init() {
EnumMap<LifecycleEventType, SchedulerService> schedulerMap = new EnumMap<>(LifecycleEventType.class);
schedulerMap.put(UNREFERENCED, pathSchedulerService);
schedulerMap.put(EXPIRED, tableSchedulerService);
scheduler = new SchedulerApiary(beekeeperEventReader, schedulerMap, icebergValidator);
scheduler = new SchedulerApiary(beekeeperEventReader, schedulerMap);
}

@Test
public void typicalPathSchedule() {
Optional<BeekeeperEvent> event = Optional.of(newHousekeepingEvent(path, UNREFERENCED));
when(beekeeperEventReader.read()).thenReturn(event);
scheduler.scheduleBeekeeperEvent();
verify(icebergValidator).throwExceptionIfIceberg(path.getDatabaseName(), path.getTableName());
verify(pathSchedulerService).scheduleForHousekeeping(path);
verifyNoInteractions(tableSchedulerService);
verify(beekeeperEventReader).delete(event.get());
@@ -94,7 +89,6 @@ public void typicalTableSchedule() {
when(beekeeperEventReader.read()).thenReturn(event);
scheduler.scheduleBeekeeperEvent();

verify(icebergValidator).throwExceptionIfIceberg(table.getDatabaseName(), table.getTableName());
verify(tableSchedulerService).scheduleForHousekeeping(table);
verifyNoInteractions(pathSchedulerService);
verify(beekeeperEventReader).delete(event.get());
@@ -105,7 +99,6 @@ public void typicalNoSchedule() {
when(beekeeperEventReader.read()).thenReturn(Optional.empty());
scheduler.scheduleBeekeeperEvent();

verifyNoInteractions(icebergValidator);
verifyNoInteractions(pathSchedulerService);
verifyNoInteractions(tableSchedulerService);
verify(beekeeperEventReader, times(0)).delete(any());
@@ -121,7 +114,6 @@ public void housekeepingPathRepositoryThrowsException() {
scheduler.scheduleBeekeeperEvent();
fail("Should have thrown exception");
} catch (Exception e) {
verify(icebergValidator).throwExceptionIfIceberg(path.getDatabaseName(), path.getTableName());
verify(pathSchedulerService).scheduleForHousekeeping(path);
verify(beekeeperEventReader, times(0)).delete(any());
verifyNoInteractions(tableSchedulerService);
@@ -142,7 +134,6 @@ public void housekeepingTableRepositoryThrowsException() {
scheduler.scheduleBeekeeperEvent();
fail("Should have thrown exception");
} catch (Exception e) {
verify(icebergValidator).throwExceptionIfIceberg(table.getDatabaseName(), table.getTableName());
verify(tableSchedulerService).scheduleForHousekeeping(table);
verify(beekeeperEventReader, times(0)).delete(any());
verifyNoInteractions(pathSchedulerService);
@@ -153,26 +144,6 @@ public void housekeepingTableRepositoryThrowsException() {
}
}

@Test
public void icebergValidatorThrowsException() {
String databaseName = "database";
String tableName = "table";
when(path.getDatabaseName()).thenReturn(databaseName);
when(path.getTableName()).thenReturn(tableName);
Optional<BeekeeperEvent> event = Optional.of(newHousekeepingEvent(path, UNREFERENCED));
when(beekeeperEventReader.read()).thenReturn(event);

doThrow(new BeekeeperIcebergException("Iceberg table"))
.when(icebergValidator).throwExceptionIfIceberg(eq(databaseName), eq(tableName));

scheduler.scheduleBeekeeperEvent();

verify(icebergValidator).throwExceptionIfIceberg(databaseName, tableName);
verifyNoInteractions(pathSchedulerService);
verifyNoInteractions(tableSchedulerService);
verify(beekeeperEventReader).delete(event.get());
}

@Test
public void typicalClose() throws Exception {
scheduler.close();