Lightbend Slick: Five do's and don'ts

by Roman Fürst
Tags: Open Source , Functional Programming , Scala

If you have ever been asked to query a database as a Scala developer, you have most probably heard of Slick. Today I'd like to share some of the best practices I learned while working with Slick over the last year, in the form of five do's and don'ts.

Slick logo

#1: Don't imitate ORM concepts

I often see people struggling with Slick because it requires a shift in mindset, especially if you are coming from ORM libraries such as Hibernate. First things first: Slick is NOT an ORM. So don't try to imitate ORM concepts like navigable object graphs or lazy / eager loading. If you remember one sentence from this post, it should be that one. Slick is an FRM ("Functional Relational Mapping") library, which is fundamentally different from Hibernate or any other JPA framework. Slick lets you work with databases as if they were immutable collections. It maps well-known SQL concepts to the most closely corresponding Scala features. Think of writing type-safe SQL queries that look like Scala collections.
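To make that concrete, here is a minimal sketch of what such a query can look like. It assumes the Books table definition and the profile's api._ import from the full example further down in this post:

// Assumes: import slick.jdbc.PostgresProfile.api._ (or your profile of choice)
// and the Books table definition from the example below.
val books = TableQuery[Books]

val titlesOfAuthor42: DBIO[Seq[String]] =
    books
        .filter(_.authorId === 42L) // WHERE author_id = 42
        .map(_.title)               // SELECT title
        .result                     // turn the query into a runnable DBIO

It reads like an operation on an immutable collection, yet it compiles down to a single SQL statement.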

#2: Do compose your queries, but don't share them

Slick encourages you to write reusable and composable queries. That's a good thing and one of Slick's core features. However, don't share queries among multiple classes; you will end up with a bunch of tightly coupled components that are hard to maintain and extend. Keep them local instead. I have had good experience encapsulating queries in their corresponding Data Access Objects. Speaking of DAOs …
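Before moving on to DAOs, here is a rough sketch of what local, composable queries can look like. The names are illustrative and it again assumes the table definitions from the example below; the base query stays private to the class, and the public API only composes it further:

class BookDbio {
    private val books = TableQuery[Books]

    // base query, kept local to this DAO
    private val sortedBooks = books.sortBy(_.title)

    // the public API composes the base query instead of sharing it
    def findBooksByAuthor(authorId: Long, limit: Int): DBIO[Seq[Book]] =
        sortedBooks
            .filter(_.authorId === authorId)
            .take(limit)
            .result
}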

#3: Do use the DAO pattern a.k.a. don't be afraid to expose DBIO

The DAO / DTO pattern plays well with Slick. I usually create domain-specific DAO classes called SomethingDbio. That's just my personal flavor, you can name them whatever you want. These classes contain all the queries and expose functions that return DBIO objects. I know, we all learned not to expose library internals in our projects. Well, no rule without exceptions. Arguably, switching out the persistence framework becomes harder, but to be honest: how many times have you replaced the persistence layer in a project? Right, me neither. On the other hand, you will benefit from a simple, easy-to-maintain DB API that can be composed into powerful DB operations.
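In code, the shape of such an API can be as simple as this sketch (the names are illustrative; the Author DTO and the api._ import come from the example further down):

// The DAO returns DBIO values and leaves it to the caller to decide how to run
// them: standalone, composed with other actions, or inside a transaction.
trait AuthorDbio {
    def findAuthorById(id: Long): DBIO[Option[Author]]
    def insertAuthor(author: Author): DBIO[Author]
}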

#4: Do use a database evolution tool

I personally like Liquibase. Of course there are many alternatives, and it doesn't really matter which one you are using, as long as you are using one. This is not specific to Slick, but it will substantially simplify your development cycles. It also comes in very handy when using slick-codegen.

#5: Do use code generation

Now this is a huge time saver. When I started my first project with Slick, I wrote all my mapped case classes and table definitions by hand. That worked well in the beginning, when the project consisted of just a few DB tables. But the project grew bigger and bigger, and so did the time it took to maintain those definitions, along with my frustration level. That was the point at which I set up slick-codegen. Since then, the initial setup time has already paid off multiple times across all of my projects.
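To give a rough idea of what that setup can look like, here is a sketch that triggers slick-codegen from a small standalone program. The profile, connection settings, output folder and package are placeholders; in a real project you would typically wire this into your build, for example as an sbt task:

// A sketch of a manual slick-codegen run; all settings below are placeholders.
object GenerateTables extends App {
    slick.codegen.SourceCodeGenerator.main(
        Array(
            "slick.jdbc.PostgresProfile",          // Slick profile
            "org.postgresql.Driver",               // JDBC driver
            "jdbc:postgresql://localhost/library", // JDBC URL
            "src/main/scala",                      // output folder
            "com.example.db"                       // package for the generated code
            // optionally followed by the DB user and password
        )
    )
}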

Give me an example!

Let's put everything together in a simple example. First, we introduce some definitions. We have a Book and an Author class, and each book references exactly one author. If you stick to #4 and #5, you won't have to write these definitions by hand. For the sake of completeness I will write them down anyway:

// Bring the Slick API into scope (adjust the profile to your database)
import slick.jdbc.PostgresProfile.api._

// Our DTOs. A book always has an author.
// The ids are optional so that not-yet-persisted instances don't need one.
case class Book(id: Option[Long] = None, title: String, authorId: Long)
case class Author(id: Option[Long] = None, name: String)

// Our Slick table definitions
class Books(tag: Tag) extends Table[Book](tag, "book") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")
  def authorId = column[Long]("author_id")
  def * = (id.?, title, authorId) <> (Book.tupled, Book.unapply)
}

class Authors(tag: Tag) extends Table[Author](tag, "author") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def * = (id.?, name) <> (Author.tupled, Author.unapply)
}

Now let's create a DAO class that exposes a simple API (#3). The queries are encapsulated within the DAO (#2).

import slick.jdbc.PostgresProfile.api._ // again, adjust the profile to your database

class LibraryDbio {

    def findBookById(id: Long): DBIO[Option[Book]] =
        Query.bookById(id).result.headOption

    def findBooksWithAuthor: DBIO[Seq[(Book, Author)]] =
        Query.booksWithAuthor.result

    def insertBook(book: Book): DBIO[Book] = Query.writeBooks += book

    def insertAuthor(author: Author): DBIO[Author] = Query.writeAuthors += author

    // As mentioned under #2, we encapsulate our queries here
    object Query {
        val books = TableQuery[Books]
        val authors = TableQuery[Authors]

        // Return the inserted book / author with its auto-incremented
        // id instead of an insert count
        val writeBooks =
            books returning books.map(_.id) into ((book, id) => book.copy(id = Some(id)))
        val writeAuthors =
            authors returning authors.map(_.id) into ((author, id) => author.copy(id = Some(id)))

        val bookById = books.findBy(_.id)

        val booksWithAuthor = for {
            b <- books
            a <- authors if b.authorId === a.id
        } yield (b, a)
    }
}

Finally we are going to consume the LibraryDbio API:

import scala.concurrent.{ExecutionContext, Future}
import slick.jdbc.PostgresProfile.api._ // adjust the profile to your database

// `db` is the Slick Database instance, e.g. obtained via Database.forConfig
class SomeService(db: Database, libraryDbio: LibraryDbio)(implicit ec: ExecutionContext) {

    // Simple function that returns a book
    def findBookById(id: Long): Future[Option[Book]] =
        db.run(libraryDbio.findBookById(id))

    // Simple function that returns a list of books with their authors
    def findBooksWithAuthor: Future[Seq[(Book, Author)]] =
        db.run(libraryDbio.findBooksWithAuthor)

    // Insert a book and an author, composing two DBIOs in a transaction
    def insertBookAndAuthor(book: Book, author: Author): Future[(Book, Author)] = {
        val action = for {
            savedAuthor <- libraryDbio.insertAuthor(author)
            // the freshly inserted author carries its generated id
            savedBook <- libraryDbio.insertBook(book.copy(authorId = savedAuthor.id.get))
        } yield (savedBook, savedAuthor)

        db.run(action.transactionally)
    }
}

While findBookById and findBooksWithAuthor are straightforward, insertBookAndAuthor already shows some of the benefits of exposing DBIO. First of all, we wouldn't be able to use transactions without it. Secondly, we can mix and match as many DAO operations as we like into one big enclosing action using a simple for comprehension.

Conclusion

Once you have made that shift in mindset, Slick is a great way to work with databases in Scala. The list above is of course not universally true and is based solely on my personal experience. Keep in mind that we have only scratched the surface of Slick. If you want to read more, the official manual is a good starting point. Thanks for reading and see you next time.