skylos at gmail.com
Thu Jun 23 10:42:44 PDT 2005
I never suggested that. You're only caching so that the second and
subsequent hits on a particular zip code are cheap.
Time is expensive. RAM is cheap. Memcached allows you to exchange
RAM for time.
If the existing system scales sufficiently on the timescale you're
allowed, then don't cache. If it doesn't, then do.
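To make the trade concrete, here's a minimal cache-aside sketch. The dict
`cache` is only a stand-in for a memcached client (a real client's get/set
would also carry a TTL), and `lookup_tables_for_zip`, `DATABASE`, and the
table names are invented for illustration; nothing here is from Eamon's
actual schema.

```python
# Stand-in for memcached: key -> value. A real client (get/set) would
# live on a separate server and expire entries via a TTL.
cache = {}

# Invented stand-in data: table name -> set of zip codes it contains.
DATABASE = {
    "offers_a": {"60601", "60602"},
    "offers_b": {"60601"},
}

def lookup_tables_for_zip(zipcode):
    # Hypothetical expensive step: in the real system this is the
    # 100-table scan. Here it's a cheap dict walk so the sketch runs.
    return sorted(t for t, zips in DATABASE.items() if zipcode in zips)

def tables_for_zip(zipcode):
    key = "zip:" + zipcode
    hit = cache.get(key)            # memcached "get"
    if hit is not None:
        return hit                  # later hits cost RAM, not time
    result = lookup_tables_for_zip(zipcode)
    cache[key] = result             # memcached "set"
    return result
```

The first call for a zip pays for the full lookup; every later call for
that same zip is answered from the cache.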
On 6/23/05, Eamon Daly <edaly at nextwavemedia.com> wrote:
> Each table has a unique set of columns, so they can't be
> merged. In any case, as I mentioned earlier, it seems
> extremely wasteful to cache entire tables when only 10% of
> the rows are active.
> Eamon Daly
> ----- Original Message -----
> From: "David Phillips" <electrum at gmail.com>
> To: <memcached at lists.danga.com>
> Sent: Thursday, June 23, 2005 12:07 PM
> Subject: Re: Namespaces
> > On 6/23/05, Eamon Daly <edaly at nextwavemedia.com> wrote:
> >> I have 100 tables, each containing 20,000
> >> rows, with zipcode as the PK. When a zipcode comes in, our
> >> application checks each table and reports which tables
> >> contain that zipcode.
> > If you only have 2,000,000 rows, why not store them all in the same
> > table? It should be a lot faster than checking one hundred tables
> > each time. It sounds like you could do something like this:
> > SELECT data FROM zips WHERE zipcode = '$zip' ORDER BY version DESC LIMIT 1;
> > If your average record size is less than about 1k, you could cache the
> > entire thing in memory.
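For reference, the single-table query David suggests can be exercised
against an in-memory SQLite database. The table and column names follow
his example query; the rows and values are made up.

```python
import sqlite3

# One consolidated table instead of 100: zipcode plus a version column,
# as in David's suggested SELECT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zips (zipcode TEXT, version INTEGER, data TEXT)")
conn.executemany(
    "INSERT INTO zips VALUES (?, ?, ?)",
    [("60601", 1, "old"), ("60601", 2, "new"), ("60602", 1, "only")],
)

# Fetch the highest-version record for one zip code, using a bound
# parameter rather than interpolating '$zip' into the SQL.
row = conn.execute(
    "SELECT data FROM zips WHERE zipcode = ? "
    "ORDER BY version DESC LIMIT 1",
    ("60601",),
).fetchone()
```

An index on (zipcode, version) would let this query avoid scanning the
whole table.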