Using Auto Incrementing fields with PostgreSQL and Slick
Solution 1
This works for me:
object Application extends Table[(Long, String)]("application") {
  def idlApplication = column[Long]("idlapplication", O.PrimaryKey, O.AutoInc)
  def appName = column[String]("appname")
  def * = idlApplication ~ appName
  def autoInc = appName returning idlApplication
}
val id = Application.autoInc.insert("App1")
This is how my SQL looks:
CREATE TABLE application (
  idlapplication BIGSERIAL PRIMARY KEY,
  appName VARCHAR(500)
);
Update:
The specific problem with regard to a mapped table with User (as in the question) can be solved as follows:
def forInsert = first ~ last <>
  ({ (f, l) => User(None, f, l) }, { u: User => Some((u.first, u.last)) })
This is from the test cases in the Slick git repository.
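To see what the two sides of that `<>` mapping are doing, here is a plain-Scala sketch of the construct/extract pair, with no Slick involved (the `User` case class mirrors the one from the question; `construct` and `extract` are illustrative names):

```scala
// A mapped case class with an optional auto-inc id, as in the question
case class User(id: Option[Int], first: String, last: String)

// The first argument to <>: build a User from the insertable columns,
// leaving the id as None so the database can assign it
def construct(f: String, l: String): User = User(None, f, l)

// The second argument to <>: extract exactly those columns back out for an insert
def extract(u: User): Option[(String, String)] = Some((u.first, u.last))

val u = construct("John", "Doe")
assert(u.id.isEmpty)                        // id left for the DB to fill in
assert(extract(u) == Some(("John", "Doe"))) // round trip of the insertable columns
```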
Solution 2
I tackled this problem in a different way. Since I expect my User objects to always have an id in my application logic, and the only point where one would not have an id is during insertion into the database, I use an auxiliary NewUser case class which doesn't have an id.
case class User(id: Int, first: String, last: String)
case class NewUser(first: String, last: String)
object Users extends Table[User]("users") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def first = column[String]("first")
  def last = column[String]("last")
  def * = id ~ first ~ last <> (User, User.unapply _)
  def autoInc = first ~ last <> (NewUser, NewUser.unapply _) returning id
}
val id = Users.autoInc.insert(NewUser("John", "Doe"))
Again, User maps 1:1 to the database entry/row, while NewUser could be replaced by a tuple if you wanted to avoid having the extra case class, since it is only used as a data container for the insert invocation.
EDIT: If you want more safety (with somewhat increased verbosity) you can make use of a trait for the case classes like so:
trait UserT {
  def first: String
  def last: String
}
case class User(id: Int, first: String, last: String) extends UserT
case class NewUser(first: String, last: String) extends UserT
// ... the rest remains intact
In this case you would apply your model changes to the trait first (including any mixins you might need), and optionally add default values to NewUser.
Author's opinion: I still prefer the no-trait solution, as it is more compact and changes to the model are a matter of copy-pasting the User params and then removing the id (auto-inc primary key), both in the case class declaration and in the table projections.
Solution 3
We're using a slightly different approach. Instead of creating a further projection, we request the next id for a table, copy it into the case class, and use the default '*' projection to insert the table entry.
For Postgres it looks like this:
Let your table objects implement this trait:
trait TableWithId { this: Table[_] =>
  /**
   * Can be overridden if the plural of the table name is irregular.
   **/
  val idColName: String = s"${tableName.dropRight(1)}_id"
  def id = column[Int](idColName, O.PrimaryKey, O.AutoInc)
  def getNextId = (Q[Int] + s"""select nextval('"${tableName}_${idColName}_seq"')""").first
}
All your entity case classes need a method like this (should also be defined in a trait):
case class Entity(...) {
  def withId(newId: Id): Entity = this.copy(id = Some(newId))
}
New entities can now be inserted this way:
object Entities extends Table[Entity]("entities") with TableWithId {
  override val idColName: String = "entity_id"
  ...
  def save(entity: Entity) = this insert entity.withId(getNextId)
}
The code is still not DRY, because you need to define the withId method for each table. Furthermore, you have to request the next id before you insert an entity, which might have a performance impact, though it shouldn't be noticeable unless you insert thousands of entries at a time.
The main advantage is that there is no need for a second projection, which makes the code less error-prone, in particular for tables with many columns.
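The per-table withId boilerplate can at least be pinned down by a shared contract. Below is a plain-Scala sketch of that idea; the WithId trait and the Account entity are made-up names for illustration, not part of Slick:

```scala
// Made-up trait capturing the withId contract shared by all entities
trait WithId[E] {
  def id: Option[Int]
  // Return a copy of the entity with the freshly fetched id filled in
  def withId(newId: Int): E
}

// Example entity; case-class copy does the actual work
case class Account(id: Option[Int], name: String) extends WithId[Account] {
  def withId(newId: Int): Account = this.copy(id = Some(newId))
}

val saved = Account(None, "alice").withId(42)
assert(saved.id == Some(42))
assert(saved.name == "alice")
```

Each case class still implements withId itself (case-class copy cannot be abstracted over), but a table mixin can now require WithId[E] and call withId generically inside save.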
Solution 4
The simplest solution was to use the SERIAL type like this:
def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
Here's a more concrete block:
// A case class to be used as the table mapping
case class CaseTable(id: Long = 0L, dataType: String, strBlob: String)

// Class for our table
class MyTable(tag: Tag) extends Table[CaseTable](tag, "mytable") {
  // Define the columns
  def dataType = column[String]("datatype")
  def strBlob = column[String]("strblob")
  // Auto-increment the id primary key column
  def id = column[Long]("id", SqlType("SERIAL"), O.PrimaryKey, O.AutoInc)
  // The * projection (e.g. select * ...) auto-transforms the tupled column values
  def * = (id, dataType, strBlob) <> (CaseTable.tupled, CaseTable.unapply)
}
// Insert and get the auto-incremented primary key
def insertData(dataType: String, strBlob: String, id: Long = 0L): Long = {
  // DB connection
  val db = Database.forURL(jdbcUrl, pgUser, pgPassword, driver = driverClass)
  // Query interface for our table
  val myTable = TableQuery[MyTable]
  try {
    // Form the insert, returning the generated id
    val query = myTable returning myTable.map(_.id) += CaseTable(id, dataType, strBlob)
    // Execute it and wait for the result
    Await.result(db.run(query), maxWaitMins)
  } catch {
    case e: Exception =>
      logger.error("Error in inserting using Slick", e)
      -1L
  }
}
Solution 5
I faced the same problem trying to make the computer-database sample from play-slick-3.0 work after changing the db to Postgres. What solved the problem was changing the type of the id column (the primary key) to SERIAL in the evolution file /conf/evolutions/default/1.sql (it was originally BIGINT). Take a look at https://groups.google.com/forum/?fromgroups=#%21topic/scalaquery/OEOF8HNzn2U for the whole discussion.
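For illustration, the relevant change in the evolution file would look roughly like this (the computer table and its columns are abbreviated here, not copied from the sample; only the id line matters):

```sql
-- /conf/evolutions/default/1.sql (sketch)
-- before: id bigint not null; after: SERIAL, which creates the backing
-- sequence that Slick's AutoInc insert relies on in Postgres
CREATE TABLE computer (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
```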
Cheers,
ReneX
Comments
-
Jack almost 2 years
How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL will complain on insert that the field cannot be null. This works for H2, but not for PostgreSQL:
//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession

object TestMappedTable extends App {
  case class User(id: Option[Int], first: String, last: String)

  object Users extends Table[User]("users") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def first = column[String]("first")
    def last = column[String]("last")
    def * = id.? ~ first ~ last <> (User, User.unapply _)
    def ins1 = first ~ last returning id
    val findByID = createFinderBy(_.id)
    def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
  }

  // implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
  implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
    driver = "org.postgresql.Driver", user = "postgres", password = "xxx")

  session.withTransaction {
    Users.ddl.create
    // insert data
    print(Users.insert(User(None, "Jack", "Green")))
    print(Users.insert(User(None, "Joe", "Blue")))
    print(Users.insert(User(None, "John", "Purple")))
    val u = Users.insert(User(None, "Jim", "Yellow"))
    // println(u.id.get)
    print(Users.autoInc.insert(User(None, "Johnathan", "Seagul")))
  }

  session.withTransaction {
    val queryUsers = for {
      user <- Users
    } yield (user.id, user.first)
    println(queryUsers.list)

    Users.where(_.id between (1, 2)).foreach(println)
    println("ID 3 -> " + Users.findByID.first(3))
  }
}
Using the above with H2 succeeds, but if I comment it out and change to PostgreSQL, then I get:
[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
-
Bruno over 11 years
Sorry, I'm a Scala beginner; I think I misunderstood your example. I'm using driver version 9.1-901-1.jdbc4. I've edited my answer to illustrate exactly what I'm doing here. Hope it helps.
-
Cristian Boariu over 11 years
@JacobusR have you solved the issue with id being null when sending it as in your question, with None? Because I have that issue too...
-
Cristian Boariu over 11 years
@JacobusR Thanks a lot! My question is also: if, for instance, your last were an int value as Option[Int], how would you define it? Another forInsert for this specific case? Because if I define it like Option[Int] I have to put None each time I add an insert, while there will be cases when this is not None...
-
Jack over 11 years
Hi Christian, I'm not sure I understand... Won't you please ask it as a separate question so that you can post a larger part of your code. With a bit of luck, some other clever Slick guru stumbles over your question and sets us both straight :-)
-
Jack about 11 years
+1 Thanks for sharing. The duplication of the members is not ideal (one may consider traits or inheritance), but it does make the code very readable.
-
Travis Stevens almost 11 years
Just a note: if your data type is int or bigint and you want to use AutoInc, you need to create a sequence and set the default to use the sequence.
-
AlexBrand almost 11 years
What can be done when tables are larger? Maybe 15-20 columns... the forInsert method becomes a monster!
-
mdedetrich over 10 years
Have to agree with @alexBrand, this solution is pretty stupid when you have tables with a large number of columns; the amount of boilerplate is ridiculous. Slick should handle the Option[Long] and id.? case automatically when dealing with Postgres.
-
Daghan --- about 10 years
Did anyone attempt to make this work with Slick 2.0.1? The default projection syntax has changed, so the above code doesn't compile, and I can't find any working example with Postgres.
-
virtualeyes about 9 years
Interesting approach, but it ties the application directly to Postgres. A database-independent approach would be to use Slick's Sequence class and have each driver implement it according to the sequence naming convention (e.g. Postgres' is "tableName_pkName_seq"). Batch inserts won't work so well with nextval, unfortunately. Not sure if current Slick (3.0) solves the boilerplate issue referenced in the accepted answer's comments; if not, it's pretty hideous having to manually map out the case class as a tuple sans pk column...