Phantom also supports batch statements. For a full example, see IterateeBigTest.scala. Before you read further, remember that batch statements are not a tool for improving performance.
Read the official Cassandra documentation for more details, but in short: batches guarantee atomicity, and because they require more round trips they are on average at least about 30% slower than parallel writes. If you think you are optimising performance with batches, you probably need to find an alternative approach.
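When atomicity is not required, one such alternative is simply running independent writes in parallel. A minimal, hypothetical sketch (the `ParallelWrites` trait and `storeAll` method are illustrative names, not part of phantom; it assumes the `Articles` table and `TestDbProvider` defined later in this document):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import com.outworkers.phantom.dsl._

trait ParallelWrites extends TestDbProvider {

  // Each insert is an independent round trip; Future.sequence lets them
  // run in parallel. Unlike a batch, there is no atomicity guarantee
  // across the individual writes.
  def storeAll(articles: List[Article]): Future[List[ResultSet]] = {
    Future.sequence(articles.map(article => db.articles.store(article).future()))
  }
}
```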
We have tested with 100 statements per batch and 1000 batches processed simultaneously. Before you run the test, be aware that it takes roughly 40 minutes.
Batches use lazy iterators and daisy-chain them to offer thread-safe behaviour. They are not memory intensive, and you can expect consistent processing speed even with very large numbers of batches.
Batches are immutable: adding a new record produces a new batch, just like most things in Scala, so be careful to chain the calls.
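Because each `add` returns a new batch, building a batch from a collection means carrying the result forward, for example with a fold. A hypothetical sketch (the `FoldedBatch` trait and `renameAll` method are illustrative names; it assumes the `Articles` table and `TestDbProvider` defined later in this document):

```scala
import java.util.UUID
import com.outworkers.phantom.dsl._

trait FoldedBatch extends TestDbProvider {

  def renameAll(ids: List[UUID], newName: String) = {
    // `add` returns a new, immutable batch, so the accumulator must keep
    // the chained result; calling `add` without using its return value
    // would silently discard the statement.
    val batch = ids.foldLeft(Batch.logged) { (acc, id) =>
      acc.add(db.articles.update.where(_.id eqs id).modify(_.name setTo newName))
    }
    batch.future()
  }
}
```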
Phantom also supports `COUNTER` batch updates and `UNLOGGED` batch updates.
To start, we need an example database connection:
```scala
import com.datastax.driver.core.SocketOptions
import com.outworkers.phantom.connectors._
import com.outworkers.phantom.dsl._

object Connector {
  val default: CassandraConnection = ContactPoint.local
    .withClusterBuilder(_.withSocketOptions(
      new SocketOptions()
        .setConnectTimeoutMillis(20000)
        .setReadTimeoutMillis(20000)
      )
    ).noHeartbeat().keySpace(
      KeySpace("phantom").ifNotExists().`with`(
        replication eqs SimpleStrategy.replication_factor(1)
      )
    )
}
```
Now let's define a few tables to exemplify batch queries:
```scala
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

case class CounterRecord(id: UUID, count: Long)

abstract class CounterTableTest extends Table[CounterTableTest, CounterRecord] {
  object id extends UUIDColumn with PartitionKey
  object entries extends CounterColumn
}

case class Article(
  id: UUID,
  name: String,
  content: String
)

abstract class Articles extends Table[Articles, Article] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn
  object content extends StringColumn
}

class TestDatabase(
  override val connector: CassandraConnection
) extends Database[TestDatabase](connector) {
  object articles extends Articles with Connector
  object counterTable extends CounterTableTest with Connector
}

object TestDatabase extends TestDatabase(Connector.default)

trait TestDbProvider extends DatabaseProvider[TestDatabase] {
  override val database = TestDatabase
}
```
```scala
import java.util.UUID
import com.outworkers.phantom.dsl._

trait LoggedQueries extends TestDbProvider {

  Batch.logged
    .add(db.articles.update.where(_.id eqs UUID.randomUUID).modify(_.name setTo "blabla"))
    .add(db.articles.update.where(_.id eqs UUID.randomUUID).modify(_.content setTo "blabla2"))
    .future()
}
```
```scala
import com.outworkers.phantom.dsl._

trait UnloggedQueries extends TestDbProvider {

  Batch.unlogged
    .add(db.articles.update.where(_.id eqs UUID.randomUUID).modify(_.name setTo "blabla"))
    .add(db.articles.update.where(_.id eqs UUID.randomUUID).modify(_.content setTo "blabla2"))
    .future()
}
```
```scala
import com.outworkers.phantom.dsl._

trait CounterQueries extends TestDbProvider {

  Batch.counter
    .add(db.counterTable.update.where(_.id eqs UUID.randomUUID).modify(_.entries increment 500L))
    .add(db.counterTable.update.where(_.id eqs UUID.randomUUID).modify(_.entries decrement 300L))
    .future()
}
```
Counter operations also offer a standard overloaded operator syntax: instead of `increment` and `decrement` you can also use `+=` and `-=` to achieve the same thing.
```scala
import com.outworkers.phantom.dsl._

trait CounterOpsQueries extends TestDbProvider {

  Batch.counter
    .add(db.counterTable.update.where(_.id eqs UUID.randomUUID).modify(_.entries += 500L))
    .add(db.counterTable.update.where(_.id eqs UUID.randomUUID).modify(_.entries -= 300L))
    .future()
}
```