
Oracle JDBC performance of ResultSet

Posted by: admin December 28, 2021

Questions:

So far I have been querying my database with something like this, and it was working perfectly fine:

PreparedStatement prepStmt = dbCon.prepareStatement(mySql);
ResultSet rs = prepStmt.executeQuery();

But then I needed to use rs.first() in order to iterate over my ResultSet multiple times, so now I use:

PreparedStatement prepStmt = dbCon.prepareStatement(mySql, ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);

My question is about the performance of the two. What do I lose if I use the second option? Will the second option have any negative effect on the code that I have written so far?

PS: Note that my application is a multi-user, database-intensive web application (on WebLogic 10.3.4) that uses a back-end Oracle 11g database.

Thanks all for your attention.

UPDATE

My maximum ResultSet size will be less than 1000 rows with 15-20 columns.

Answers:

If you’re using scrollability (your second option), pay attention to this:

Important: Because all rows of any scrollable result set are stored in
the client-side cache, a situation where the result set contains many
rows, many columns, or very large columns might cause the client-side
Java Virtual Machine (JVM) to fail. Do not specify scrollability for a
large result set.

Source: Oracle Database JDBC Developer’s Guide and Reference

###

Since an Oracle cursor is a forward-only structure, in order to simulate a scrollable cursor, the JDBC driver generally needs to cache the results in memory to ensure that the same results are returned when you iterate through them a second time. Depending on the number and size of the rows the query returns, that can consume a substantial amount of additional memory on the application server. On the other hand, it also means that iterating through the ResultSet a second time should be much more efficient than the first time.
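If you only need to make multiple passes over the data (rather than scroll arbitrarily), you can keep the cheap forward-only cursor and do the caching yourself: drain the ResultSet into a List once, then iterate the List as often as you like. Below is a minimal sketch of that pattern; the class and method names (ForwardOnlyCache, cacheAll, fakeResultSet) are illustrative, and the fakeResultSet proxy stub exists only so the snippet runs here without a database. In the real application you would pass the ResultSet returned by prepStmt.executeQuery().

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ForwardOnlyCache {

    // Drain a forward-only ResultSet into an in-memory list once; iterate the
    // list as many times as needed instead of asking the driver to scroll.
    // columnCount is passed in to keep the sketch independent of ResultSetMetaData.
    public static List<Object[]> cacheAll(ResultSet rs, int columnCount) throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        while (rs.next()) {
            Object[] row = new Object[columnCount];
            for (int c = 0; c < columnCount; c++) {
                row[c] = rs.getObject(c + 1); // JDBC column indexes are 1-based
            }
            rows.add(row);
        }
        return rows;
    }

    // Minimal stand-in ResultSet backed by an array, used here only so the
    // sketch runs without a database connection.
    public static ResultSet fakeResultSet(Object[][] data) {
        InvocationHandler handler = new InvocationHandler() {
            int cursor = -1;
            @Override
            public Object invoke(Object proxy, java.lang.reflect.Method m, Object[] args) {
                switch (m.getName()) {
                    case "next":
                        return ++cursor < data.length;
                    case "getObject":
                        return data[cursor][(Integer) args[0] - 1];
                    default:
                        throw new UnsupportedOperationException(m.getName());
                }
            }
        };
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(), new Class<?>[]{ResultSet.class}, handler);
    }

    public static void main(String[] args) throws SQLException {
        Object[][] data = { {1, "a"}, {2, "b"}, {3, "c"} };
        List<Object[]> rows = cacheAll(fakeResultSet(data), 2);
        // First pass
        for (Object[] row : rows) System.out.println(row[0] + " " + row[1]);
        // Second pass costs nothing extra: the data is already on the JVM heap
        for (Object[] row : rows) System.out.println(row[0] + " " + row[1]);
    }
}
```

The trade-off is the same as with a scrollable ResultSet (the rows end up in client memory either way), but you control exactly what gets cached and can release it as soon as you are done.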

Whether the extra memory required is meaningful depends on your application. You say that the largest ResultSet will have 1000 rows. If you figure that each row is 500 bytes (this will obviously depend on data types: if your ResultSet just has a bunch of numbers it would be much smaller, while a bunch of long description strings would make it much larger), 1000 rows is 500 KB per user. If you've got 1000 simultaneous users, that's only 500 MB of storage, which probably isn't prohibitive. If you've got 1 million simultaneous users, on the other hand, that's 500 GB, which probably means you're buying a few new servers. If your rows are 5000 bytes rather than 500, then you're talking about 5 GB of RAM, which could be a large fraction of the memory required on the application server to run your application.
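The estimate above is just rows × bytes-per-row × concurrent users. A quick sketch of that back-of-the-envelope calculation (the 500 and 5000 bytes-per-row figures are the assumed averages from the paragraph above, and decimal units are used to match its round numbers):

```java
public class HeapEstimate {

    // Rough cached-ResultSet footprint: every scrollable (or manually cached)
    // result set holds all its rows in client memory per concurrent user.
    static long cacheBytes(long rows, long bytesPerRow, long users) {
        return rows * bytesPerRow * users;
    }

    public static void main(String[] args) {
        // 1000 rows x 500 bytes x 1000 users -> 500 MB (decimal units)
        System.out.println(cacheBytes(1000, 500, 1_000) / 1_000_000 + " MB");
        // 1000 rows x 500 bytes x 1,000,000 users -> 500 GB
        System.out.println(cacheBytes(1000, 500, 1_000_000) / 1_000_000_000 + " GB");
        // 1000 rows x 5000 bytes x 1000 users -> 5 GB
        System.out.println(cacheBytes(1000, 5000, 1_000) / 1_000_000_000 + " GB");
    }
}
```

Plug in your own row width (sum the average sizes of your 15-20 columns) and expected concurrency to see whether the scrollable cache fits your heap.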