Connection Pooling with c3p0

We at Bring have been using c3p0 in several of our applications over the past few years. Recently we started seeing lots of CannotGetJdbcConnectionException errors. For reference, the underlying database is MySQL.

At first we thought the issues were related to a migration to a new data center, but we ruled that out once we confirmed there were not enough slow queries to back that claim.


org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(
	at org.springframework.jdbc.core.JdbcTemplate.execute(
	at org.springframework.jdbc.core.JdbcTemplate.query(
	at org.springframework.jdbc.core.JdbcTemplate.query(
	at org.springframework.jdbc.core.JdbcTemplate.query(
	at org.springframework.jdbc.core.JdbcTemplate.query(
Caused by: java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
	at com.mchange.v2.sql.SqlUtils.toSQLException(
	at com.mchange.v2.sql.SqlUtils.toSQLException(
	at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(
	at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(
	at sun.reflect.GeneratedMethodAccessor93.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at net.bull.javamelody.JdbcWrapper$3.invoke(
	at net.bull.javamelody.JdbcWrapper$DelegatingInvocationHandler.invoke(
	at com.sun.proxy.$Proxy52.getConnection(Unknown Source)
	at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(
	at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(
	... 99 more
Caused by: com.mchange.v2.resourcepool.TimeoutException: A client timed out while waiting to acquire a resource from com.mchange.v2.resourcepool.BasicResourcePool@7d3c22a5 -- timeout at awaitAvailable()
	at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(
	at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(
	at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(
	at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(
	at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(
	... 108 more

Upon digging in further we found that:

  • There are not enough slow queries in the database to explain these errors.
  • The thread dumps for this application are weird: lots of threads are stuck waiting on something called GooGooStatementCache inside c3p0.
  • A heap dump reveals several hundred NewPreparedStatement objects from c3p0.
  • The application logs contain lots and lots of “APPARENT DEADLOCK” WARN entries from c3p0, indicating that the problem comes from the connection pool library, not MySQL.
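A quick way to quantify the second point is to count how many threads are parked inside the statement cache. A minimal sketch: the dump excerpt below is fabricated for illustration (including the frame's method name), and in practice you would capture a real dump with `jstack <pid> > threaddump.txt` first.

```shell
# Fabricated, abridged thread-dump excerpt for illustration only;
# capture a real one with: jstack <pid> > threaddump.txt
cat > threaddump.txt <<'EOF'
"pool-1-thread-7" waiting on condition
	at com.mchange.v2.c3p0.stmt.GooGooStatementCache.checkoutStatement(
"pool-1-thread-9" waiting on condition
	at com.mchange.v2.c3p0.stmt.GooGooStatementCache.checkoutStatement(
EOF

# Count threads stuck in c3p0's statement cache.
grep -c "GooGooStatementCache" threaddump.txt   # prints 2
```

A handful of hits is normal under load; dozens of threads parked here alongside APPARENT DEADLOCK warnings points at the statement cache.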

Further analysis of the application logs, which looked something like this:

WARN  [c.ThreadPoolAsynchronousRunner] [user:] - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@15046d35 -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
WARN  [c.ThreadPoolAsynchronousRunner] [user:] - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@15046d35 -- APPARENT DEADLOCK!!! Complete Status:

Some of those entries were CONNECTION TIMEOUT and APPARENT DEADLOCK warnings, which is how we identified this as a c3p0 issue. We then found more details on Stack Overflow, where a reply from Steve Waldman, the author of c3p0, helped us solve the problem.

Looking at our configuration, we found one property that was misconfigured: maxStatements in c3p0 was set to a high value, which means c3p0 maintains a large statement cache and defers closing cached statements.

So we decided to set maxStatements=0, which disables the statement cache entirely, so there is no deferred close at all. Problem solved, right?
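For reference, the fix is a one-line change. A minimal sketch of the relevant settings in c3p0.properties (the standard classpath location c3p0 reads configuration from); the second property shown is the related per-connection limit, included here for completeness rather than copied from our production config:

```
# c3p0.properties -- picked up automatically from the classpath root.
# Setting maxStatements to 0 disables the global statement cache, so no
# Statement.close() calls are deferred to c3p0's helper threads.
c3p0.maxStatements=0
c3p0.maxStatementsPerConnection=0
```

The same values can be set programmatically or via Spring bean properties on the pooled DataSource; the effect is identical.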

It was, but what we saw after deploying this fix was quite fascinating. We had set up a metric in Kibana to track the response time for this application. These are the graphs:

The response time dropped almost by a factor of five: last week the 90th-percentile response time was ~900 ms, and now it is ~200 ms. Quite impressive. Hope this helps someone with a similar problem.