Possible use for memcache?
mike
mike503 at gmail.com
Sun Nov 27 23:42:43 PST 2005
Perhaps I should also clarify this more.
*NOTE: This isn't necessarily about memcached itself. I thought I
would bring it up here because memcached - or at least the concept
behind it - could be a useful tool for this.*
I've had two people now say "I'm not quite sure what your problem is"
- so let me explain quickly.
The problem is this.
NFS is great for a centralized storage server. However, what are the
main pain points?
- Single point of failure (redundancy/HA)
- Becomes a bottleneck when network traffic gets too heavy (scaling)
- Directly tied to the physical storage backend
Okay, so you want to fix the redundancy piece. NFS does not come with
clustering out of the box. You can roll your own, though; there are
plenty of HOWTOs and other information around the net about that.
But what about scalability? Multi-path/shared storage?
Multi-path storage can only be accomplished using a "globally aware"
filesystem. For those of us who want to roll our own, or simply cannot
afford million dollar equipment, we have a few options:
- NFSv4 - way too far in the future right now, nothing concrete either
(for the clustering/etc)
- GFS - Linux-specific. The FreeBSD port died years ago. Difficult to
get working even on specific Linux distros (I'm talking about Red
Hat's GFS, ex-Sistina)
- OCFS2 - Linux-specific. Still not sure whether it's
production-stable and suitable for webhosting, or basically designed
only for RAC. Not supported in the mainline kernel either (you have to
use -mm patched kernels, or (K?)ubuntu)
- CXFS - SGI's clustered XFS - commercial, still haven't got a quote
back on how much it costs. Normally sold with their storage equipment,
not standalone
- Lustre - Linux-specific. A FreeBSD port is supposedly in progress.
Configuration seems like it can be quite complicated, but it can scale
well and is really fast from what I've read.
Then there are a variety of proprietary storage options, like those
offered by Panasas.
I've been Googling for weeks now, reading mail threads, looking up
websites, asking for quotes, etc. The most cost-effective and easiest
to implement would be an NFS-based setup; but I'd like to try to solve
the issues explained above.
I believe that some sort of middle layer acting as a traffic cop (for
lack of a better metaphor) could allow for n+1 scaling and provide a
simplistic but effective way to fence I/O requests from conflicting
with each other on shared (read: same physical unit) storage, while
remaining completely transparent to any NFS client (NFS being a
standard available in just about every OS there is).
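To make the fencing idea concrete, here is a minimal sketch of what
that traffic-cop layer might do per request, using memcached's atomic
add() as an advisory per-path lock (add() only succeeds if the key
does not already exist, which is what makes it safe across many
frontends). The FakeMemcache class, fenced_write() function, and key
names are all hypothetical stand-ins for illustration - a real
deployment would use an actual memcached client, and would have to
deal with eviction and crashed lock holders, which this sketch does
not.

```python
class FakeMemcache:
    """In-process stand-in for a memcached client (illustration only)."""
    def __init__(self):
        self._store = {}

    def add(self, key, value, expire=0):
        # Atomic in real memcached: stores the value only if the key
        # is absent. This is the primitive the fencing relies on.
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)


def fenced_write(mc, path, data, owner, timeout=30):
    """Acquire a per-path lock before touching shared storage.

    Returns False if another frontend holds the lock; the caller
    would retry or queue the request.
    """
    lock_key = "iolock:" + path
    if not mc.add(lock_key, owner, expire=timeout):
        return False
    try:
        # ... perform the actual write to shared storage here ...
        pass
    finally:
        mc.delete(lock_key)  # release so other frontends can proceed
    return True
```

A frontend that fails to get the lock simply sees fenced_write()
return False and retries, so two NFS heads never touch the same file
at once - without the clients knowing anything changed.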
I hope that explains it better.
I will most likely look for some NFS mailing lists soon and post this
idea there. They'd know all the file-locking and I/O specifics, and
whether or not it can be done this simply. I was already on the
memcached list, though, and thought I'd share my idea - to see if
anyone else has thought of, heard of, or solved this using another
method.
Thanks for listening,
mike