Huge number of useless busy sockets when using Java API
Greg Whalin
gwhalin at meetup.com
Tue May 10 20:01:23 PDT 2005
It is possible for "hung" connections in the busy pool to stay hung
indefinitely, as the maint thread does not currently mess w/ busy
connections. I think the best solution is to set a busy-connection
timeout that the maint thread will try to enforce. I will try to work
on making this happen this week.
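Roughly, what I have in mind for the maint thread is something like the
sketch below. This is a sketch only: checkBusyTimeouts and maxBusyTime
are made-up names, and it assumes the busy pool maps each SockIO to its
checkout timestamp, which is not exactly how the pool tracks things today.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.Map;

    import com.danga.MemCached.SockIOPool;

    // Sketch only: close and discard any busy socket that has been
    // checked out longer than maxBusyTime milliseconds. Assumes the
    // busy pool maps each SockIO to its checkout timestamp (a Long).
    void checkBusyTimeouts(Map busyPool, long maxBusyTime) {
        long now = System.currentTimeMillis();
        for (Iterator i = busyPool.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry entry = (Map.Entry) i.next();
            SockIOPool.SockIO sock = (SockIOPool.SockIO) entry.getKey();
            long checkedOutAt = ((Long) entry.getValue()).longValue();
            if (now - checkedOutAt > maxBusyTime) {
                try {
                    sock.trueClose();   // close the underlying socket for real
                } catch (IOException e) {
                    // socket is already broken; nothing more to do
                }
                i.remove();             // drop it from the busy pool
            }
        }
    }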
As to why the connections ended up hanging in the busy pool, I still
suspect some network-related issue.
Greg
Michael Su wrote:
> Hi, Greg,
>
> I'm using java_memcached v1.2.1.
>
> When the huge number of busy connections appears, the only thing I can do is to restart Resin.
> If I only restart memcached, the sockets in the busy pool will NOT
> decrease, since those idle connections between Resin and memcached
> have already been closed.
>
> Is there any chance or possibility that the MemCachedClient allocates a
> socket (which is placed in the busy pool) and doesn't call sock.close()?
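>
> For example (hypothetical code, not anything I have actually found in
> the client, just to illustrate the pattern I mean):
>
>     String key = "some_key";
>     SockIOPool.SockIO sock = SockIOPool.getInstance().getSock(key);
>     // getSock() moves the socket from the avail pool to the busy pool
>     sock.write(("get " + key + "\r\n").getBytes());
>     sock.flush();
>     String line = sock.readLine();
>     // if any of the calls above throws and nothing calls sock.close()
>     // in a finally block, the socket is never checked back in and
>     // sits in the busy pool forever
>     sock.close(); // checks the socket back into the avail pool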
>
> BR
> Shuo
>
> On 5/9/05, Greg Whalin <gwhalin at meetup.com> wrote:
>
>>This is strange. It seems as if connections are leaking, but only for one
>>server (though the others look pretty high as well). My initial
>>thoughts would be some sort of networking problem, but just to be safe,
>>can you let me know which version of the client you are running. I will
>>do some additional testing to look for possible memory leaks, and also
>>to see if I can change the pool to look for hung connections and try to
>>deal w/ them gracefully. I should say that this does not seem like
>>normal behavior for the client. We have been running this client for
>>some time w/ no such connection leaks or stability problems that I am
>>aware of.
>>
>>Greg
>>
>>Michael Su wrote:
>>
>>>Hi,
>>>
>>>I've encountered a strange problem. When using the Java API, there are
>>>many connections in the busy pool for the first server after running
>>>30~60 minutes. The other servers' busy pools also grow slowly.
>>>
>>>How could I solve this strange problem? Thanks in advance.
>>>
>>>
>>>Here's the debug output:
>>>
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Starting self maintenance....
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of avail pool for host (192.168.4.203:11211) = 8
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of busy pool for host (192.168.4.203:11211) = 190
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of avail pool for host (192.168.4.202:11211) = 2
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of busy pool for host (192.168.4.202:11211) = 3872
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Need to create 3 new sockets for pool for host: 192.168.4.202:11211
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of avail pool for host (192.168.4.205:11211) = 22
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of busy pool for host (192.168.4.205:11211) = 52
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of avail pool for host (192.168.4.204:11211) = 6
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - ++++
>>>Size of busy pool for host (192.168.4.204:11211) = 200
>>>com.danga.MemCached.SockIOPool Sun May 08 03:17:02 CST 2005 - +++
>>>ending self maintenance.
>>>
>>>Here's my environment:
>>>1. Resin 2.1.x on 4 machines
>>>2. memcached 1.1.11 on 4 machines
>>>3. Init Code:
>>> pool.setInitConn(5);
>>> pool.setMinConn(5);
>>> pool.setMaxConn(10);
>>> pool.setMaxIdle(10000L);
>>> pool.setMaintSleep(3000L);
>>> pool.setSocketTO(500);
>>> pool.setSocketConnectTO(500);
>>> pool.setNagle(false);
>>>4. Server List:
>>> "192.168.4.202:11211", "192.168.4.203:11211", "192.168.4.204:11211", "192.168.4.205:11211"
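>>>
>>>Putting it together, the init code is roughly the following (a sketch;
>>>the setters are the ones listed in item 3 above, and getInstance(),
>>>setServers(), and initialize() are the standard SockIOPool calls):
>>>
>>>    import com.danga.MemCached.SockIOPool;
>>>
>>>    String[] serverlist = { "192.168.4.202:11211", "192.168.4.203:11211",
>>>                            "192.168.4.204:11211", "192.168.4.205:11211" };
>>>
>>>    SockIOPool pool = SockIOPool.getInstance();
>>>    pool.setServers(serverlist);
>>>    // ... setters from item 3 above ...
>>>    pool.initialize(); // spawns the maint thread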
>>
>>