Re: Connection pool issues

  • From: Sandra Becker <sbecker6925@xxxxxxxxx>
  • To: MARK BRINSMEAD <mark.brinsmead@xxxxxxxxx>
  • Date: Thu, 26 Feb 2015 13:51:09 -0700

We are discussing increasing the connection pool size.  Nearly everyone is
concerned that creating a standby from the active 19 TB database is what is
causing the application queries to run longer.  I'm looking into it, but
this is our 5th attempt and no one reported any issues on the previous 4.
The recovery phase has always failed.  Still trying to figure that out, but
that's another discussion.

Sandy

On Thu, Feb 26, 2015 at 12:05 PM, MARK BRINSMEAD <mark.brinsmead@xxxxxxxxx>
wrote:

> End users (and application developers) are almost always "positive" that
> it is a database issue.  :-)
>
> And it might be, too.  It's just way too soon to know.
>
> In the meantime, if the database server is not busting at the seams, you
> might want to consider configuring your app servers to allow the connection
> pools to grow a little larger.  The REAL error is that the app server is
> not allowing the app to open a new connection.
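>
> A quick sanity check on the database side before raising the pool size (just
> a sketch; gv$resource_limit is the standard view, and these two resources are
> the server-side ceilings that matter here):
>
>     select inst_id, resource_name, current_utilization, max_utilization,
>            limit_value
>     from   gv$resource_limit
>     where  resource_name in ('processes', 'sessions');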
>
> On Thu, Feb 26, 2015 at 1:43 PM, Sandra Becker <sbecker6925@xxxxxxxxx>
> wrote:
>
>> Thank you.  Definitely some things I can look at.  I was told there were
>> no changes to the code or app servers.  Of course, the end user is positive
>> it's a database issue.  I can definitely do AWR before/after.  We did have
>> some network blips earlier in the week, but nothing today.
>>
>> Sandy
>>
>> On Thu, Feb 26, 2015 at 11:39 AM, MARK BRINSMEAD <
>> mark.brinsmead@xxxxxxxxx> wrote:
>>
>>> It sounds to me like you should be as interested (or more so) in the
>>> connected sessions reported by gv$session as in long-running operations.
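>>>
>>> Something along these lines would show where the connections are piling up
>>> and how long they have been idle (a rough sketch; the grouping is just one
>>> way to slice gv$session):
>>>
>>>     select inst_id, machine, program, status,
>>>            count(*)                 as session_count,
>>>            round(avg(last_call_et)) as avg_sec_since_last_call
>>>     from   gv$session
>>>     where  type = 'USER'
>>>     group by inst_id, machine, program, status
>>>     order by session_count desc;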
>>>
>>> There is no need for *anything* to have changed in the database -- it
>>> is entirely possible that a change in the application code, or even in the
>>> application server, could be causing the problems you are encountering.
>>>
>>> Possible issues would be:
>>> (*) Queries running longer than they used to, meaning that applications
>>> need to hold connections from the pool open longer.  There is no reason to
>>> expect that these would necessarily appear as "longops", though.
>>> (*) Application threads not releasing connections when (as soon as) they
>>> are done with them.
>>> (*) More application threads running than before, perhaps due to an
>>> increased user population or other factors.
>>> (*) Changes to JDBC connection pool management, causing the maximum
>>> number of available connections to be lower, or causing "idle" connections
>>> to be returned to the pool more slowly.
>>> (*) Changes to limits on session-idle-time, forcing idle sessions to
>>> disconnect from the database.
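>>>
>>> On that last point, the profile limits are easy to check (a minimal sketch;
>>> IDLE_TIME is the resource name as it appears in dba_profiles):
>>>
>>>     select profile, limit
>>>     from   dba_profiles
>>>     where  resource_name = 'IDLE_TIME';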
>>>
>>> I bet I could think of at least 5 more causes if I thought about it for
>>> another 10 minutes.  I haven't even considered networking problems yet!
>>>
>>> Rebuilding indexes would cause execution plans to be invalidated, and
>>> might cause new plans to be chosen.  It is possible that you have seen an
>>> adverse change in plans on a commonly run query.  (Check AWR and Statspack
>>> for top queries by elapsed time, before and after the "event".)  But it's
>>> just as possible that the problem has nothing to do with the database at
>>> all.
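>>>
>>> A rough sketch of that before/after comparison against the AWR history
>>> (assumes the Diagnostics Pack is licensed and that the two snap IDs bracket
>>> the partition maintenance; the bind names are only placeholders):
>>>
>>>     select * from (
>>>       select sql_id, plan_hash_value,
>>>              sum(executions_delta)              as execs,
>>>              round(sum(elapsed_time_delta)/1e6) as elapsed_sec
>>>       from   dba_hist_sqlstat
>>>       where  snap_id between :snap_before and :snap_after
>>>       group by sql_id, plan_hash_value
>>>       order by elapsed_sec desc
>>>     )
>>>     where rownum <= 20;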
>>>
>>> On Thu, Feb 26, 2015 at 12:30 PM, Sandra Becker <sbecker6925@xxxxxxxxx>
>>> wrote:
>>>
>>>> Oracle - EE 11.2.0.2
>>>>
>>>> This week users started having issues connecting to a database.  Error
>>>> below:
>>>>
>>>> Could not retrieve documents from the docStore: exception was
>>>> com.ghx.docstore.DocStoreException: Cannot get Connection to: db_arch
>>>>
>>>>
>>>> In the log, someone also saw:
>>>>
>>>> Caused by: oracle.ucp.UniversalConnectionPoolException: All
>>>> connections in the Universal Connection Pool are in use
>>>>
>>>>
>>>> Obviously, something isn't releasing connections back to the pool.
>>>> I've queried longops to no avail.  I have seen sessions using a tremendous
>>>> amount of CPU or I/O; when those sessions were killed, we saw an immediate
>>>> uptick in processing.  The only change to the database occurred last
>>>> Friday--we removed the oldest year of data from the partitioned tables and
>>>> rebuilt the unusable indexes.  No code releases were done.
>>>>
>>>> My question:  Is there anything I can query to see what session(s) are
>>>> causing the problem other than what I'm already doing?
>>>>
>>>> Any suggestions are appreciated.
>>>> --
>>>> Sandy
>>>> GHX
>>>>
>>>
>>>
>>
>>
>> --
>> Sandy
>> GHX
>>
>
>


-- 
Sandy
GHX
