Memcached implementation inquiry
Brad Fitzpatrick
brad at danga.com
Thu Apr 19 17:46:53 UTC 2007
No, chunk size has nothing to do with it.
You can do a "get" request and fetch multiple keys at once to reduce
your round-trips.
Your whole design strategy is still totally broken, but you could
definitely make it faster by fetching, say, 2,000 keys at once and only
doing 300 round trips instead of 60,000 round trips. Then it'd only be
0.12 seconds or so.
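The arithmetic behind those numbers, as a quick sketch (the 2,000-key batch size, 300 trips, and ~0.4 ms LAN round trip are the figures from this thread; the helper function is illustrative, not part of any client API):

```python
import math

TOTAL_KEYS = 600_000
RTT_SECONDS = 0.0004  # ~0.4 ms, the LAN ping time shown below

def round_trips(total_keys: int, batch_size: int) -> int:
    """Round trips needed when batch_size keys are packed into each get."""
    return math.ceil(total_keys / batch_size)

# One key per get: 600,000 round trips -> roughly 240 s (~4 minutes)
# of network latency alone, before memcached does any work.
print(round_trips(TOTAL_KEYS, 1) * RTT_SECONDS)

# 2,000 keys per get: 300 round trips -> roughly 0.12 s of latency.
print(round_trips(TOTAL_KEYS, 2_000) * RTT_SECONDS)
```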
And -s for local unix sockets also isn't what you want.
On Thu, 19 Apr 2007, Michael Firsikov wrote:
> Brad,
>
> Thank you very much for the insight, I assume chunk size also plays a role?
> Would increasing it help?
>
> I don't think there is a silver bullet that would help reduce the number
> of round-trips at this point, unless we use memcached in local mode
> only? (-s, which disables network support, I think)
>
>
> Michael
>
>
> -----Original Message-----
> From: Brad Fitzpatrick [mailto:brad at danga.com]
> Sent: Thursday, April 19, 2007 1:18 PM
> To: Michael Firsikov
> Cc: memcached at lists.danga.com
> Subject: Re: Memcached implementation inquiry
>
> Michael,
>
> Latency from round-trips is killing you; it's not memcached.
>
> Just ping your memcached server and look at the ping round-trip time, then
> multiply by 600,000.
>
> Observe:
>
> $ ping 192.168.64.1
> PING 192.168.64.1 (192.168.64.1) 56(84) bytes of data.
> 64 bytes from 192.168.64.1: icmp_seq=1 ttl=255 time=0.398 ms
> 64 bytes from 192.168.64.1: icmp_seq=2 ttl=255 time=0.407 ms
> 64 bytes from 192.168.64.1: icmp_seq=3 ttl=255 time=0.414 ms
> 64 bytes from 192.168.64.1: icmp_seq=4 ttl=255 time=0.400 ms
>
> --- 192.168.64.1 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 2997ms
> rtt min/avg/max/mdev = 0.398/0.404/0.414/0.025 ms
>
>
> http://www.google.com/search?q=0.403+ms+*+600%2C000
>
> 0.403 milliseconds * 600,000 = 4.03 minutes
>
>
> It's not memcached that's slow... it's your huge loop with a network
> round-trip in it.
>
> - Brad
>
>
> On Thu, 19 Apr 2007, Michael Firsikov wrote:
>
> > Gentlemen,
> >
> > We have recently stumbled upon Memcached after our MySQL databases were
> > unable to withstand the concurrency load (even in a replicated environment).
> >
> > I have thoroughly read almost all archived digests; however, I still have
> > not been able to properly grasp one important concept. (Do not worry, I will
> > not ask whether you can list all stored keys :)
> >
> > We have a large subscriber database (1.4 million users roughly), each having
> > a somewhat detailed profile. For benchmark tests I am preloading the
> > information into memcached, and when pulling info for a particular profile,
> > everything is pretty smooth. (single get for a specific key)
> >
> > However, one of the main reasons to explore memcache for us was the
> > searches. (The complexity of searches in MySQL (a myriad of joins, etc.)
> > resulted in sub-par performance.) I am pretty certain it is against
> > memcached best practices, but I have done a basic loop to go through roughly
> > 600K records and get and check a value.
> > This process takes over 2 minutes on a decent box running Ubuntu server with
> > 2GB RAM allocated to memcached. Is it bottlenecking at the TCP level of the
> > connection (it would need roughly 2MB of transfer for the 600,000 gets)?
> > Or does retrieving the memory keys take the bulk of the time?
> >
> > Thank you in advance for reading this convoluted message :)
> >
> > P.S. Setup is 1 memcached machine, with everything running locally, nothing
> > but apache+php (pecl extension for memcached), a dedicated machine for the
> > test; memcached never hits swap as it only takes 500MB for the 600K records.
> >
> > Michael F
> >
> >
> >
>
>