Rollback batch execution when using jdbc with autocommit=true

Solution 1

The direct answer to your question is: no. If an exception occurs, you have to call the rollback method manually, and before doing so you must set auto-commit to false. By default auto-commit is set to true, and with auto-commit set to true you can't do a rollback; an exception will occur telling you so.

Later on, do not forget to set auto-commit back to true, or you may get unexpected results from other methods that use the same connection.
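
To make the shape of that pattern explicit before the longer example, here is a minimal sketch. The runInTransaction and BatchWork names are hypothetical, not JDBC API; the only JDBC calls involved are setAutoCommit, commit and rollback.

import java.sql.Connection;
import java.sql.SQLException;

public class TransactionSketch {

    /** Hypothetical callback type for the batch work itself. */
    @FunctionalInterface
    interface BatchWork {
        void run(Connection connection) throws SQLException;
    }

    /** Runs the given work in an explicit transaction and rolls back if it fails. */
    static void runInTransaction(Connection connection, BatchWork work) throws SQLException {
        final boolean previousAutoCommit = connection.getAutoCommit();
        connection.setAutoCommit(false);                  // start an explicit transaction
        try {
            work.run(connection);                         // addBatch()/executeBatch() happens here
            connection.commit();                          // every statement succeeded
        } catch (SQLException e) {
            connection.rollback();                        // undo the whole batch
            throw e;
        } finally {
            connection.setAutoCommit(previousAutoCommit); // restore the previous mode
        }
    }
}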

Here's a fuller example of how to implement this. It is just a sketch, and you should probably pay more attention to how you handle the connection, the prepared statement, exceptions and so on.

public void insertAndRollback(Connection connection) {
    try {
        final ArrayList<String> parameters = new ArrayList<>();

        // Add your parameters to the arraylist
        parameters.add("John");
        parameters.add("Lee");
        parameters.add("Mary");
        parameters.add("Peter");
        parameters.add("Lewis");
        parameters.add("Patrick");

        final String parameterizedQuery = "insert into person (name) values (?)";

        final int batchSize = 5; // Set your batch size here
        int count = 0;
        int aux = 0;

        // Get the total number of '?' in the query
        int totalQueryParameters = Utils.countCharOccurrences(parameterizedQuery, '?');
        final int auxTotalQueryParameters = totalQueryParameters;

        final PreparedStatement preparedStatement = connection.prepareStatement(parameterizedQuery);

        // Auto Commit must be set to false
        connection.setAutoCommit(false);

        for(int i = 0; i < parameters.size(); i++)
        {
            Object obj = parameters.get(i);

            aux++;
            preparedStatement.setObject(aux, obj);

            if(totalQueryParameters == i + 1) { // Because the ArrayList starts from zero.
                // First query "parsed" - > Add to batch
                preparedStatement.addBatch();
                // One query has been added to the batch. Re-adapt the cycle.
                totalQueryParameters = totalQueryParameters + auxTotalQueryParameters;
                aux = 0;
            }

            if(++count % batchSize == 0) {
                preparedStatement.executeBatch();
            }
        }

        preparedStatement.executeBatch(); // Insert the remaining queries
        connection.commit(); // Commit the whole batch explicitly
        preparedStatement.close();
        connection.setAutoCommit(true); // Set it back to the default.
    } catch (SQLException ex) {
        // Do the rollback
        doRollback(connection);

        try {
            // Make it back to default.
            connection.setAutoCommit(true);
        } catch (SQLException ex1) {
            ex1.printStackTrace();
        }

        // Don't forget to close the preparedStatement and the connection
        // if you don't need the connection open any more.

        ex.printStackTrace();
    }
}


private void doRollback(Connection c) {
    try {
        c.rollback();
    } catch (SQLException ex) {
        ex.printStackTrace();
    }
}
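
For completeness, a hedged usage sketch of calling the method above. The JDBC URL, credentials and the PersonDao class holding insertAndRollback are placeholders, not part of the original code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class InsertDemo {
    public static void main(String[] args) {
        // Placeholder connection details - adjust driver, URL, user and password.
        final String url = "jdbc:mysql://localhost:3306/testdb";
        try (Connection connection = DriverManager.getConnection(url, "user", "password")) {
            // PersonDao is assumed to be the class that contains insertAndRollback(Connection).
            new PersonDao().insertAndRollback(connection);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}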

Solution 2

As a matter of fact, the documentation of PreparedStatement.executeBatch does not clarify the issue (maybe it is clarified somewhere else), but I am sure it is not an atomic operation: SQL has no batch operation, so executeBatch executes each statement separately at the DB level. I tested it on MySQL:

t1 is an empty table that has an n1 INT(11) NOT NULL column, with autocommit = true:

    ResultSet rs1 = conn.createStatement().executeQuery("select count(*) from t1");
    rs1.next();
    System.out.println(rs1.getInt(1));

    String query = "insert into t1 (n1) values(?)";
    PreparedStatement ps = conn.prepareStatement(query);
    ps.setObject(1, 1);    // first statement: valid value
    ps.addBatch();
    ps.setObject(1, null); // second statement: NULL violates the NOT NULL constraint
    ps.addBatch();
    try {
        ps.executeBatch();
    } catch (Exception e) {
        System.out.println(e);
    }

    ResultSet rs2 = conn.createStatement().executeQuery("select count(*) from t1");
    rs2.next();
    System.out.println(rs2.getInt(1));

It prints:

0
java.sql.BatchUpdateException: Column 'n1' cannot be null
1

That is, there were 2 inserts in the batch; the first succeeded, the second failed, and t1 still ended up with 1 row.
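
For contrast, here is a sketch of the same two-statement batch run inside an explicit transaction, reusing conn and the t1 table from above. It assumes t1 uses a transactional storage engine such as InnoDB; a non-transactional engine would not roll back.

    conn.setAutoCommit(false);                 // run the batch in an explicit transaction
    PreparedStatement ps2 = conn.prepareStatement("insert into t1 (n1) values(?)");
    ps2.setObject(1, 1);                       // valid row
    ps2.addBatch();
    ps2.setObject(1, null);                    // violates the NOT NULL constraint
    ps2.addBatch();
    try {
        ps2.executeBatch();
        conn.commit();                         // only reached if every statement succeeded
    } catch (Exception e) {
        conn.rollback();                       // discards the first insert as well
        System.out.println(e);
    } finally {
        conn.setAutoCommit(true);
    }
    // "select count(*) from t1" is unchanged: neither row from this batch was kept.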

Solution 3

This is a tricky one. autocommit=true is strongly not recommended when executing a batch.

Having said that, I recommend using getUpdateCount() and building logic around it to execute the remaining statements, and finally committing.
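
As a hedged sketch of that idea: the per-statement results of a failed batch can be read from BatchUpdateException.getUpdateCounts(), a close relative of the getUpdateCount() call mentioned above. Drivers differ on whether the array covers only the statements executed before the failure or all of them (with Statement.EXECUTE_FAILED marking the failed ones); ps is assumed to be a PreparedStatement with a batch already added.

import java.sql.BatchUpdateException;
import java.sql.Statement;

// ...

try {
    ps.executeBatch();
} catch (BatchUpdateException bue) {
    int[] counts = bue.getUpdateCounts();      // one entry per statement the driver reports on
    for (int i = 0; i < counts.length; i++) {
        if (counts[i] == Statement.EXECUTE_FAILED) {
            System.out.println("Statement " + i + " in the batch failed");
            // decide here whether to re-run, skip, or compensate for the remaining statements
        }
    }
}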

Author by Teja

Updated on July 17, 2022

Comments

  • Teja almost 2 years

    I'm using JDBC with autocommit=true. In one of the operations, I'm doing batch inserts using prepared statements.

    public void executeBatchInsert(String query, List<Object[]> entityList) {
        try {
            pstmt = conn.prepareStatement(query);
            for (int i = 0; i < entityList.size(); i++) {
                int j = 1;
                for (Object o : entityList.get(i)) {
                    pstmt.setObject(j++, formatColumnValue(o));
                }

                pstmt.addBatch();
                if ((i + 1) % 1000 == 0) {
                    pstmt.executeBatch();
                }
            }
            pstmt.executeBatch();
        } catch (SQLException e) {
        }
    }
    

    If I get an exception while executing it and then close this connection, will all the locks be released and will a rollback happen?

    -- B. Teja.

    • Mark Rotteveel over 11 years
      Behavior for batch execution with autoCommit=true is (explicitly!) not defined in the JDBC specification. In general you simply should not use batch execution without disabling autoCommit.
    • Basil Bourque over 2 years
      FYI… Regarding the Comment by Mark Rotteveel, that point is documented in section 14.1.1 of the JDBC™ 4.3 Specification: The commit behavior of executeBatch is always implementation-defined when an error occurs and auto-commit is true.
  • Teja over 11 years
    Interesting... So, let's say there are two threads, t1 and t2, updating records in a table. If t1 is updating records 1-100 and t2 is updating records 50-150, then when both threads try to execute 'executeBatch', t2 will be blocked on t1, as t1 has locked those records. If t2's commit fails (for some reason), will the (partial) records be committed and the locks on 50-150 released? Or should we explicitly execute a rollback?
  • Evgeniy Dorofeev over 11 years
    But you said autocommit = true. The main point is that the behavior of executeBatch with 2 (or more) insert/update statements is the same as executing 2 separate executeUpdate statements.
  • Teja over 11 years
    Got it... So there won't be any rollback in this case. And if the batch has statements 1-10 and it gets an exception at 5, then 1-4 will be committed and 5-10 won't be executed. Right?
  • Evgeniy Dorofeev over 11 years
    Absolutely, but I would suggest emulating and testing any situation you have doubts about.
  • Michael about 9 years
    Thank you for this full example. However, I think you may have an error in your catch clause: // Make it back to default. connection.setAutoCommit(false); I believe you want to set it to true here, also.