Possible use for memcache?
mike
mike503 at gmail.com
Sun Nov 27 21:58:50 PST 2005
Firstly, thanks for the reply.
On 11/27/05, David Phillips <david at acz.org> wrote:
> First off, and generally unrelated to the subject at hand: memcached
> is a cache. Period. If you ever think of doing anything with it
> besides caching data, stop. More than likely it is a very bad idea.
Isn't that basically what file locking mechanisms are? Cached
placeholders claiming a file is in use ("cached" used loosely, since
the claim can last any amount of time, I suppose).
This would be a distributed mechanism for claiming ownership of files
and only allowing that one path to the file; if the cache entry
expires, the server dies, etc., another one will pick up in its place.
I don't see a problem with that; even a cache lifetime of 60 seconds
would keep two separate servers from trying to access the same file at
the same time.
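For what it's worth, here's a rough sketch of the idea in Python using
the python-memcached client (the key prefix, TTL and helper names are
just my assumptions for illustration). memcached's add only stores a
key if it doesn't already exist, so it acts as an atomic claim with a
built-in expiry:

import memcache

# Assumed memcached server address; point this at your own pool.
mc = memcache.Client(['127.0.0.1:11211'])

LOCK_TTL = 60  # seconds; the claim expires on its own if the holder dies

def claim_file(path, owner):
    # memcached 'add' is atomic: it only stores the value if the key
    # does not already exist, so at most one server holds the claim.
    return bool(mc.add('file-claim:' + path, owner, time=LOCK_TTL))

def release_file(path):
    # Release the claim early instead of waiting for the TTL to expire.
    mc.delete('file-claim:' + path)

if claim_file('/shared/data/report.dat', 'web01'):
    try:
        pass  # ... this server can safely read/write the file ...
    finally:
        release_file('/shared/data/report.dat')
else:
    pass  # another server owns the file right now; skip or retry later

Obviously the TTL-based expiry means a slow holder could lose the claim
while still working on the file, which is the usual caveat with
cache-based locks, but for coarse "who owns this file" coordination
it's about as simple as it gets.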
> What you are asking for is the "holy grail" of distributed file
> systems. The reason that nothing you want exists for free is that the
> problem is very difficult. Every approach is going to have trade
> offs. Distributed computing is all about identifying and minimizing
> trade offs.
Actually, I'm not sure I'm asking for that much. There's a lot of
proprietary multipath I/O and distributed filesystems, as well as more
open and accessible ones like the ones you've mentioned below.
Technology is evolving fast and demand for those capabilities is
increasing; I think my approach is actually *much* simpler than a
full-fledged global filesystem.
> Depending on what you want, exactly, which was not clear in your
> email, you should look into AFS, Coda and Lustre. Red Hat GFS and
> Oracle Cluster FS might also be worth looking into.
I'd group Lustre with GFS and OCFS. The problem is, GFS is Linux-only.
Same with Lustre (although a BSD port is supposedly in the works).
OCFS is only just now becoming public, it seems (in the -mm kernels
and in Ubuntu).
> One approach similar to MogileFS is the Google File System. Read the
> paper, then understand the design and the trade offs. You will be
> amazed at the complexity and it still isn't exactly what you want.
Again - what I want is pretty simple (at a high level): something as
simple as NFS, but with the ability to have any number of servers
access the same hardware.
> If you want to connect machines to a cluster, have them add their
> local storage to the pool, mount the file system and have it all
> magically work, then you will be disappointed. Nothing like that
> exists and I'm not sure it can.
Actually, I don't need a lot of local storage added. I'm still
planning on a centralized storage mechanism (with redundancy built
into the hardware layer) - but I believe Lustre would solve what you
describe as well (it appears to be able to add capacity from any disk
on the network, as long as that machine runs the software).
> An interesting approach might be Solaris' ZFS with network block
> devices, maybe using ATA over Ethernet (AoE). I wonder if anyone has
> tried that?
AoE is what I would hope to use, but I may have to use iSCSI since
it's better supported. There's a thread I found while researching this
stuff where someone was vehemently trying to dispel supposed truths
about AoE and its multi-targeting and other features.