<br>The conventional wisdom for memcached pretty much lines up with what Steve and Jeetu have said - only store things in memcached if you keep a persistent copy of them somewhere else.<br><br>Our experience is that you can successfully use memcached in two other scenarios:
<br><br>1. Queue of transient messages. We use memcached for our chat infrastructure. Each queue of chat messages goes into memcached. A client can connect to any chat server in the pool and send/receive messages. This lets us scale chat horizontally across machines without having to have an affinity between a chat client and a particular server. Chat messages typically get delivered within 1 second, so bouncing a memcached instance affects only a small number of chat messages.
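<br><br>A rough sketch of the pattern, in case it helps - the key scheme here is hypothetical (not our actual code), and a dict-based stub stands in for a real memcached client so it runs standalone. Each room gets a tail counter advanced with incr, one short-TTL key per message, and readers track their own high-water mark:<br><br>

```python
class StubMemcache:
    """Minimal stand-in exposing the memcached ops the queue needs."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value, expire=0):
        self.data[key] = value
    def incr(self, key, delta=1):
        self.data[key] = self.data.get(key, 0) + delta
        return self.data[key]

class ChatQueue:
    """Per-room queue: a tail counter plus one key per message."""
    def __init__(self, mc, room):
        self.mc, self.room = mc, room
    def send(self, message):
        seq = self.mc.incr("chat:%s:tail" % self.room)
        # Short TTL: messages are transient, so losing a few is acceptable.
        self.mc.set("chat:%s:%d" % (self.room, seq), message, expire=60)
    def receive(self, last_seen):
        """Return messages after last_seen plus the new high-water mark."""
        tail = self.mc.get("chat:%s:tail" % self.room) or 0
        msgs = []
        for seq in range(last_seen + 1, tail + 1):
            msg = self.mc.get("chat:%s:%d" % (self.room, seq))
            if msg is not None:  # may have expired or been evicted
                msgs.append(msg)
        return msgs, tail

mc = StubMemcache()
q = ChatQueue(mc, "lobby")
q.send("hi")
q.send("hello")
msgs, mark = q.receive(0)
print(msgs)  # ['hi', 'hello']
```

<br>Since any chat server can incr the counter and write message keys, a client can bounce between servers freely - the queue state lives only in memcached.<br>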
<br><br>2. Application-level monitoring. We store large amounts of performance information in memcached (counts of slow pages and slow queries, counts of queries per database table, execution traces for slow pages, database connection info for debugging too-many-connections errors in MySQL, etc.). Most of these are stored either on slow pages (when a page takes 1+ seconds, doing an extra couple of memcached hits isn't a big deal) or are randomly sampled so that the expected number of memcached hits per event is small. If counters get reset or we lose a few execution traces, it's no big deal - the worst impact is that our rrd graphs have a blip for five minutes.
<br><br>You can use memcached for scenarios other than caching objects or database queries. You just have to be careful to work within the constraint that your data will sometimes disappear in one of a handful of pretty well defined ways.
<br><br>Anyone else have good memcached applications outside of caching?<br><br>Chris<br><br><br><br><div><span class="gmail_quote">On 11/8/06, <b class="gmail_sendername">Steven Grimm</b> <<a href="mailto:sgrimm@facebook.com">
sgrimm@facebook.com</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Oscar Kené wrote:<br>> Is there any functionality to make it write these to a mySQL-database
<br>> instead?<br><br>Memcached is typically used to cache data that's already in your<br>database to begin with, in which case this becomes unnecessary; if an<br>object isn't in your cache, you just hit the database instead.
<br><br>In my opinion it is not a good idea to *only* store data in memcached.<br>Memcached processes can get killed accidentally, your data center can<br>have a power outage, your sysadmins can decide they need to move servers
<br>to different racks, your memcached machines can have hardware glitches<br>and spontaneously reboot, etc. (To name some of the things that have<br>happened to our memcached servers.) Even if you modified it to write<br>
data somewhere else at expiration time, you would still be vulnerable to<br>the cache getting blown away for whatever reason; you'd lose all the<br>non-expired data.<br><br>> Right now my data is INSERTed, SELECTed only once and then DELETEd.
<br>> But every "set" of data is not handled sequentially. So one "set" of<br>> data can be INSERTEd but not SELECTed or DELETEd before the entry is<br>> subject to the "LRU-rule". I.e
. I want to keep the most recent<br>> "INSERTs" in memcache as they are the most likely to be operated on<br>> first.<br><br>If that's generally your usage pattern, then memcached's LRU semantics<br>may not even come into play, assuming you're careful to delete items
<br>from the cache after you're done operating on them (which should be fine<br>if you're only reading them once.) Memcached will always reuse space<br>from deleted or expired items before it will evict any valid items from
<br>the cache, so you just need to have enough space in your cache to hold<br>your typical backlog of items. If you're constantly reading and deleting<br>the most recent items, then the next items you write will just reuse
<br>that space over and over again.<br><br>FYI, memcached's expiration policy is not strictly LRU if your cache<br>items are of significantly different sizes; there is a separate LRU<br>queue for each range of sizes computed by the slab allocator. In normal
<br>operation that's usually not noticeable, but if you're trying to do<br>something that depends in any way on eviction order, you'll want to be<br>aware of that.<br><br>-Steve<br></blockquote></div><br><br clear="all"><br>
-- <br><a href="http://avatars.imvu.com/chris">http://avatars.imvu.com/chris</a>