
Spark Cassandra Connector

Most Recent Release Scala Docs: 2.0.0

API docs for all versions are listed under Hosted API Docs below.

Lightning-fast cluster computing with Spark and Cassandra

This library lets you expose Cassandra tables as Spark RDDs, write Spark RDDs to Cassandra tables, and execute arbitrary CQL queries in your Spark applications.
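
A minimal sketch of reading and writing, assuming a Cassandra node reachable at 127.0.0.1 and a hypothetical table test.words with columns word (text, primary key) and count (int):

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._  // adds cassandraTable and saveToCassandra

val conf = new SparkConf()
  .setAppName("ConnectorDemo")
  .set("spark.cassandra.connection.host", "127.0.0.1")  // assumed contact point
val sc = new SparkContext(conf)

// Expose a Cassandra table as an RDD of CassandraRow objects
val words = sc.cassandraTable("test", "words")
println(words.map(_.getInt("count")).sum)

// Write an RDD of tuples back to Cassandra
val updates = sc.parallelize(Seq(("cat", 30), ("fox", 40)))
updates.saveToCassandra("test", "words", SomeColumns("word", "count"))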

Features

  • Compatible with Apache Cassandra version 2.0 or higher (see table below)
  • Compatible with Apache Spark 1.0 through 2.0 (see table below)
  • Compatible with Scala 2.10 and 2.11
  • Exposes Cassandra tables as Spark RDDs
  • Maps table rows to CassandraRow objects or tuples
  • Offers customizable object mapper for mapping rows to objects of user-defined classes
  • Saves RDDs back to Cassandra by implicit saveToCassandra call
  • Join with a subset of Cassandra data using joinWithCassandraTable call (see the sketch after this list)
  • Partition RDDs according to Cassandra replication using repartitionByCassandraReplica call
  • Converts data types between Cassandra and Scala
  • Supports all Cassandra data types including collections
  • Filters rows on the server side via the CQL WHERE clause
  • Allows for execution of arbitrary CQL statements
  • Plays nice with Cassandra Virtual Nodes
  • Works with PySpark DataFrames
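
A minimal sketch of a few of these features in combination, assuming the SparkContext sc from above; the keyspace test, table users, and column userid are hypothetical:

import com.datastax.spark.connector._

// Filter rows on the server side by pushing a CQL WHERE clause down to Cassandra
val one = sc.cassandraTable("test", "users").where("userid = ?", 42)

// Join an RDD of keys against a Cassandra table, fetching only the matching rows
val ids = sc.parallelize(1 to 100).map(Tuple1(_))
val joined = ids.joinWithCassandraTable("test", "users")

// Repartition by replica first so the join reads from local Cassandra nodes
val localJoin = ids
  .repartitionByCassandraReplica("test", "users")
  .joinWithCassandraTable("test", "users")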

Version Compatibility

The connector project has several branches, each of which maps to a different set of supported Spark and Cassandra versions. Refer to the compatibility table below, which shows the major.minor version ranges supported among the connector, Spark, Cassandra, and the Cassandra Java driver:

Connector | Spark    | Cassandra        | Cassandra Java Driver
--------- | -------- | ---------------- | ---------------------
2.0       | 2.0      | 2.1.5*, 2.2, 3.0 | 3.0
1.6       | 1.6      | 2.1.5*, 2.2, 3.0 | 3.0
1.5       | 1.5, 1.6 | 2.1.5*, 2.2, 3.0 | 3.0
1.4       | 1.4      | 2.1.5*           | 2.1
1.3       | 1.3      | 2.1.5*           | 2.1
1.2       | 1.2      | 2.1, 2.0         | 2.1
1.1       | 1.1, 1.0 | 2.1, 2.0         | 2.1
1.0       | 1.0, 0.9 | 2.0              | 2.0

*Compatible with 2.1.X where X >= 5

Hosted API Docs

API documentation for the Scala and Java interfaces is available online:

  • 2.0.0
  • 1.6.0
  • 1.5.0
  • 1.4.2
  • 1.3.1
  • 1.2.0

Download

This project is available on Spark Packages; this is the easiest way to start using the connector: http://spark-packages.org/package/datastax/spark-cassandra-connector

This project has also been published to the Maven Central Repository. For SBT to download the connector binaries, sources and javadoc, add this dependency to your project's SBT configuration:

libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.0"
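
Alternatively, the connector can be pulled in at launch time with the --packages flag of spark-shell or spark-submit. The coordinates below assume a Scala 2.11 build of Spark and may need adjusting to match your versions:

bin/spark-shell \
  --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.0 \
  --conf spark.cassandra.connection.host=127.0.0.1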

Building

See Building And Artifacts
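
For a quick local build, a sketch using the project's bundled SBT launcher, assuming the assembly setup described in that document:

./sbt/sbt assembly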

Documentation

Online Training

DataStax Academy

DataStax Academy provides free online training for Apache Cassandra and DataStax Enterprise. In DS320: Analytics with Spark, you will learn how to solve analytical problems effectively and efficiently with Apache Spark, Apache Cassandra, and DataStax Enterprise. You will learn about the Spark API, the Spark Cassandra Connector, Spark SQL, Spark Streaming, and crucial performance optimization techniques.

Community

Reporting Bugs

New issues may be reported using JIRA. Please include all relevant details, including the versions of Spark, the Spark Cassandra Connector, Cassandra, and/or DSE. A minimal reproducible case with sample code is ideal.

Mailing List

Questions and requests for help may be submitted to the user mailing list.

IRC

#spark-cassandra-connector on irc.freenode.net. If you are new to IRC, you can use a web-based client.

Gitter

Join the chat at https://gitter.im/datastax/spark-cassandra-connector

Contributing

To develop this project, we recommend using IntelliJ IDEA. Make sure you have installed and enabled the Scala Plugin. Open the project with IntelliJ IDEA and it will automatically create the project structure from the provided SBT configuration.

Tips for Developing the Spark Cassandra Connector

Before contributing your changes to the project, please make sure that all unit tests and integration tests pass. Don't forget to add an appropriate entry at the top of CHANGES.txt, and create a ticket in the Spark Cassandra Connector JIRA. Finally, open a pull request on GitHub and await review.

Please prefix the pull request description with the JIRA number, for example: "SPARKC-123: Fix the ...".

Testing

To run unit and integration tests:

./sbt/sbt test
./sbt/sbt it:test

By default, integration tests start a separate single-node Cassandra instance and run Spark in local mode. It is also possible to run the integration tests against your own Cassandra and/or Spark cluster. First, prepare a jar with the testing code:

./sbt/sbt test:package

Then copy the generated test jar to your Spark nodes and run:

export IT_TEST_CASSANDRA_HOST=<IP of one of the Cassandra nodes>
export IT_TEST_SPARK_MASTER=<Spark Master URL>
./sbt/sbt it:test

Generating Documents

To generate the Reference Document, run:

./sbt/sbt spark-cassandra-connector-doc/run (outputLocation)

The optional outputLocation argument defaults to doc/reference.md.

License

Copyright 2014-2016, DataStax, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
