2.13. JDBC 2.0

The original JDBC API (JDBC 1.0) was first introduced as an add-on package for JDK 1.0, and it became part of the core Java API with Java 1.1. In May 1998, Sun released the specification for JDBC 2.0. This new version of the API provides support for extended result handling, Java-aware databases, BLOB fields, and other minor improvements. All in all, there are enough new features in JDBC 2.0 to warrant a separate section in this chapter. The new version of the API is backward compatible; code written for JDBC 1.0 compiles and runs just fine under JDBC 2.0. The updated API ships with Version 1.2 of the Java 2 platform and is also available for download separately. As of early 1999, there are very few JDBC 2.0-compliant drivers available, although Sun and InterSolv are working on an updated version of the JDBC-ODBC Bridge.

2.13.1. Results Handling

With JDBC 1.0, the functionality provided by the ResultSet interface is rather limited. There is no support for updates of any kind, and access to rows is limited to a single, sequential read (i.e., first row, second row, third row, and so on, with no going back). JDBC 2.0 supports scrollable and updateable result sets, which allow for advanced record navigation and in-place data manipulation. With scrolling, you can move forward and backward through the results of a query, rather than just using the next() method to move to the next row.

In terms of scrolling, there are now three distinct types of ResultSet objects: forward-only (as in JDBC 1.0), scroll-insensitive, and scroll-sensitive. A scroll-insensitive result set generally does not reflect changes to the underlying data, while a scroll-sensitive one does. In fact, the number of rows in a sensitive result set does not even need to be fixed.

As of JDBC 2.0, result sets are also updateable. From this perspective, there are two different kinds of result sets: read-only result sets that do not allow changes to the underlying data and updateable result sets that allow such changes, subject to transaction limitations and so on. To create an updateable, scroll-sensitive result set, we pass two extra arguments to the createStatement() method:

    Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                         ResultSet.CONCUR_UPDATABLE);

If you do not pass any arguments to createStatement(), you get a forward-only, read-only result set, just as you would using JDBC 1.0. Note that if you specify a scrollable result set (either sensitive or insensitive), you must also specify whether or not the result set is updateable. After you have created a scrollable ResultSet, use the methods listed in Table 2-2 to navigate through it. As with JDBC 1.0, when you start working with a ResultSet, you are positioned before the first row of results.

Table 2-2. JDBC 2.0 Record Scrolling Functions

    Method             Function
    first()            Moves to the first row of the result set
    last()             Moves to the last row of the result set
    next()             Moves to the next row
    previous()         Moves to the previous row
    beforeFirst()      Moves to immediately before the first row
    afterLast()        Moves to immediately after the last row
    absolute(int)      Moves to the given row number; a negative number counts back from the last row
    relative(int)      Moves the given number of rows forward (positive) or backward (negative) from the current row
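As a brief illustration of these methods, here is a minimal sketch of scrolling through query results. It assumes con is an open Connection, a driver that supports scroll-insensitive result sets, and the CUSTOMERS table used in the examples below (with at least three rows):

    Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                         ResultSet.CONCUR_READ_ONLY);
    ResultSet rs = stmt.executeQuery("SELECT NAME, CUSTOMER_ID FROM CUSTOMERS");
    rs.last();                        // jump to the last row
    int rowCount = rs.getRow();       // the current row number is also the total row count
    rs.absolute(3);                   // jump directly to the third row
    rs.relative(-2);                  // back up two rows (now on the first row)
    rs.afterLast();                   // position after the last row...
    while (rs.previous()) {           // ...and walk backward through the results
        System.out.println(rs.getString(1));
    }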
The JDBC 2.0 API also includes a number of methods that tell you where you are in a ResultSet. You can think of your position in a ResultSet as the location of a cursor in the results. The isFirst() and isLast() methods return true if the cursor is located on the first or last record, respectively. isAfterLast() returns true if the cursor is after the last row in the result set, while isBeforeFirst() returns true if the cursor is before the first row.

With an updateable ResultSet, you can change data in an existing row, insert an entirely new row, or delete an existing row. To change data in an existing row, you use the new updateXXX() methods of ResultSet. Let's assume we want to update the CUSTOMER_ID field of the first row we retrieve (okay, it's a contrived example, but bear with me):

    Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                         ResultSet.CONCUR_UPDATABLE);
    ResultSet rs = stmt.executeQuery("SELECT NAME, CUSTOMER_ID FROM CUSTOMERS");
    rs.first();
    rs.updateInt(2, 35243);
    rs.updateRow();

Here we use first() to navigate to the first row of the result set and then call updateInt() to change the value of the customer ID column in the result set. After making the change, we call updateRow() to actually make the change in the database. If you forget to call updateRow() before moving to another row in the result set, any changes you made are lost. If you need to make a number of changes in a single row, you can do that with multiple calls to updateXXX() methods, followed by a single call to updateRow(). Just be sure you call updateRow() before moving on to another row.

The technique for inserting a row is similar to updating data in an existing row, with a few important differences. The first step is to move to what is called the insert row, using the moveToInsertRow() method. The insert row is a blank row associated with the ResultSet that contains all the fields, but no data; you can think of it as a pseudo-row where you can compose a new row. After you have moved to the insert row, use the updateXXX() methods to load new data into the insert row and then call insertRow() to append the new row to the ResultSet and the underlying database. Here's an example that adds a new customer to the database:

    ResultSet rs = stmt.executeQuery("SELECT NAME, CUSTOMER_ID FROM CUSTOMERS");
    rs.moveToInsertRow();
    rs.updateString(1, "Tom Flynn");
    rs.updateInt(2, 35244);
    rs.insertRow();

Note that you do not have to supply a value for every column, as long as the columns you omit can accept null values. If you don't specify a value for a column that cannot be null, you'll get a SQLException. After you call insertRow(), you can create another new row, or you can move back to the ResultSet using the various navigation methods shown in Table 2-2. One final navigation method that isn't listed in the table is moveToCurrentRow(). This method takes you back to where you were before you called moveToInsertRow(); it can only be called while you are in the insert row.

Deleting a row from an updateable result set is easy. Simply move to the row you want to delete and call the deleteRow() method. Here's how to delete the last record in a ResultSet:

    rs.last();
    rs.deleteRow();

Calling deleteRow() also deletes the row from the underlying database. Note that not all ResultSet objects are updateable. In general, the query must reference only a single table without any joins. Due to differences in database implementations, there is no single set of requirements for what makes an updateable ResultSet.
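Because scrolling and updateability ultimately depend on the driver and the database, it can be worth asking at runtime whether they are available. Here is a minimal sketch using the JDBC 2.0 DatabaseMetaData methods supportsResultSetType() and supportsResultSetConcurrency(); it assumes con is an open Connection:

    DatabaseMetaData meta = con.getMetaData();
    boolean scrollable =
        meta.supportsResultSetType(ResultSet.TYPE_SCROLL_SENSITIVE);
    boolean updateable =
        meta.supportsResultSetConcurrency(ResultSet.TYPE_SCROLL_SENSITIVE,
                                          ResultSet.CONCUR_UPDATABLE);
    if (!scrollable || !updateable) {
        // fall back to forward-only, read-only result sets
    }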
As useful as scrollable and updateable result sets are, the JDBC 2.0 specification does not require driver vendors to support them. If you are building middleware or some other kind of system that requires interaction with a wide range of database drivers, you should avoid this functionality for the time being. The extended JDBC 2.0 DatabaseMetaData object can provide information about scrolling and concurrency support.

2.13.2. Batch Updates

The original JDBC standard isn't very efficient when it comes to loading large amounts of information into a database. Even if you use a PreparedStatement, your program still executes a separate query for each piece of data inserted. If your software is inserting 10,000 rows into the database, this can become a real performance problem. The new addBatch() method of Statement allows you to lump multiple update statements together as a unit and execute them at once. You call addBatch() after you create the statement and before execution:

    con.setAutoCommit(false); // If some statements fail, we want to roll back the rest
    Statement stmt = con.createStatement();
    stmt.addBatch("INSERT INTO CUSTOMERS VALUES (1, 'J Smith', '617 555-1323')");
    stmt.addBatch("INSERT INTO CUSTOMERS VALUES (2, 'A Smith', '617 555-1132')");
    stmt.addBatch("INSERT INTO CUSTOMERS VALUES (3, 'C Smith', '617 555-1238')");
    stmt.addBatch("INSERT INTO CUSTOMERS VALUES (4, 'K Smith', '617 555-7823')");
    int[] upCounts = stmt.executeBatch();
    con.commit();

Notice that we turn transaction auto-commit off before creating the batch. This is because we want to roll back all the SQL statements if one or more of them fail to execute properly. After calling addBatch() multiple times to create our batch, we call executeBatch() to send the SQL statements off to the database to be executed as a batch. Batch statements are executed in the order they are added to the batch. executeBatch() returns an array of update counts, where each value in the array represents the number of rows affected by the corresponding batch statement. If you need to remove the statements from a pending batch job, you can call clearBatch(), as long as you call it before calling executeBatch().

Note that you can use only SQL statements that return an update count (e.g., CREATE, DROP, INSERT, UPDATE, DELETE) as part of a batch. If you include a statement that returns a result set, such as SELECT, you get a SQLException when you execute the batch. If one of the statements in a batch cannot be executed for some reason, executeBatch() throws a BatchUpdateException. This exception, derived from SQLException, contains an array of update counts for the batch statements that executed successfully before the exception was thrown.

The addBatch() method works slightly differently for PreparedStatement and CallableStatement objects. To use batch updating with a PreparedStatement, create the statement normally, set the input parameters, and then call the addBatch() method with no arguments. Repeat as necessary and then call executeBatch() when you're finished:

    con.setAutoCommit(false); // If some statements fail, we want to roll back the rest
    PreparedStatement stmt = con.prepareStatement(
        "INSERT INTO CUSTOMERS VALUES (?, ?, ?)");
    stmt.setInt(1, 1);
    stmt.setString(2, "J Smith");
    stmt.setString(3, "617 555-1323");
    stmt.addBatch();
    stmt.setInt(1, 2);
    stmt.setString(2, "A Smith");
    stmt.setString(3, "617 555-1132");
    stmt.addBatch();
    int[] upCounts = stmt.executeBatch();
    con.commit();

This batch functionality also works with CallableStatement objects for stored procedures. The catch is that each stored procedure must return an update count and may not take any OUT or INOUT parameters.
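If you want to see how much of a batch succeeded before a failure, you can catch the BatchUpdateException and examine its update counts. Here is a minimal sketch, reworking the end of the previous example (stmt and con as above):

    try {
        int[] upCounts = stmt.executeBatch();
        con.commit();                       // everything succeeded
    } catch (BatchUpdateException e) {
        int[] done = e.getUpdateCounts();   // counts for statements that ran before the failure
        System.err.println(done.length + " batch statements succeeded before the failure");
        con.rollback();                     // undo the partial batch
    }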
2.13.3. Java-Aware Databases

Java is object-oriented; relational databases are not. As a result, it's decidedly difficult to shoehorn a Java object into a stubbornly primitive-oriented database table. Luckily, the wind is changing, and newer database systems, including object-oriented database management systems (OODBMS) and Java-relational database management systems,[5] provide direct support for storing and manipulating objects. Where a regular relational database can store only a limited number of primitive types, a JDBMS system can store entire, arbitrary Java objects.
Say we want to store a customized Java Account object in the ACCOUNTS table in a database. With a standard DBMS and JDBC 1.0, we have to pull each piece of data (account number, account holder, balance, etc.) out of the Account object and write it to a complicated database table. To get data out, we reverse the process. Short of serializing the Account object and writing it to a binary field (a rather complex operation), we're stuck with this clumsy approach.[6]
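For comparison, here is a rough sketch of what the JDBC 1.0 approach looks like. The Account accessors (getNumber(), getHolder(), getBalance()) and the ACCOUNTS column names are hypothetical, and acct is assumed to be the Account object we want to store, with con an open Connection:

    // Decompose the Account object into primitive columns by hand
    PreparedStatement ps = con.prepareStatement(
        "INSERT INTO ACCOUNTS (ACCT_NUMBER, HOLDER, BALANCE) VALUES (?, ?, ?)");
    ps.setInt(1, acct.getNumber());
    ps.setString(2, acct.getHolder());
    ps.setDouble(3, acct.getBalance());
    ps.executeUpdate();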
With JDBC 2.0, the getObject() method has been extended to support these new Java-aware databases. Provided that the database supports a Java-object type, we can read the Account object just like any primitive type:

    ResultSet rs = stmt.executeQuery("SELECT ACCOUNT FROM ACCOUNTS");
    rs.next();
    Account a = (Account)rs.getObject(1);

To store an object, we use a PreparedStatement and the setObject() method:

    Account a = new Account();
    // Fill in appropriate fields in Account object
    PreparedStatement stmt = con.prepareStatement(
        "INSERT INTO ACCOUNTS (ACCOUNT) VALUES (?)");
    stmt.setObject(1, a);
    stmt.executeUpdate();

A column that stores a Java object has a type of Types.JAVA_OBJECT. The JDBC API does not take any special steps to locate the bytecodes associated with any particular class, so you should make sure that any necessary objects can be instantiated with a call to Class.forName().

2.13.4. BLOBs and CLOBs

Binary large objects (BLOBs) and character large objects (CLOBs) store large amounts of binary or character data. Different database vendors have different names for these fields. For example, on Oracle7 systems, they are known as LONG and LONG RAW fields, while Microsoft Access refers to them as OLE object fields. Oracle8 introduces actual BLOB and CLOB types. JDBC 1.0 requires programs to retrieve BLOB and CLOB data using the getBinaryStream() or getAsciiStream() methods (a third method, getUnicodeStream(), has been deprecated in favor of the new getCharacterStream() method, which returns a Reader). In JDBC 2.0, the ResultSet interface includes getBlob() and getClob() methods, which return Blob and Clob objects, respectively. The Blob and Clob objects themselves allow access to their data via streams (the getBinaryStream() method of Blob and the getCharacterStream() method of Clob) or direct-read methods (the getBytes() method of Blob and the getSubString() method of Clob). In addition, you can set Blob and Clob objects when you are working with a PreparedStatement, using the setBlob() and setClob() methods. There are update methods for streams, but no updateBlob() or updateClob() methods. Note that the lifespan of a Blob or Clob object is limited to the transaction that created it.

2.13.5. The JDBC Standard Extension

The javax.sql package is a standard extension for JDBC 2.0 that includes support for a variety of enterprise-development activities. For example, the standard extension lets you use JNDI to connect to a database, making it possible to obtain the name of a database from a name service rather than hardcoding it. Another key feature is the ability to treat the results of a database query as a JavaBeans component, using the RowSet interface. The PooledConnection interface offers connection-pooling functionality, so that your application can draw on a cache of open database connections. The standard extension also provides support for distributed transactions by allowing a JDBC driver to use the standard two-phase commit protocol of the Java Transaction API (JTA), which facilitates the use of JDBC in Enterprise JavaBeans components.
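As a rough illustration of the JNDI-based approach, the following sketch looks up a javax.sql.DataSource by a logical name using javax.naming.InitialContext and obtains a connection from it. The name "jdbc/OrdersDB" and the login values are purely hypothetical, and the sketch assumes the data source has already been registered with a JNDI name service (typically by an application server or a deployment tool):

    Context ctx = new InitialContext();                        // javax.naming
    DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersDB");  // hypothetical logical name
    Connection con = ds.getConnection("user", "password");     // no driver class or JDBC URL in the code
    // ... use the Connection exactly as before ...
    con.close();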