>> Long story short a local cache can be 4-8x faster than normal memcached.

> That sounds about right for a shared memory cache. A local in-process
> cache (in Perl) would be at least 10 times faster than Memcached. That
> still isn't fast enough to make it worth doing unless you have some very
> small hot data that you always need.

Yes... but if the data isn't in the local cache it won't really slow the system down very much, and for certain types of applications the speedup might be significant. Having benchmarks of the local cache is important for figuring out whether it's actually contributing to a performance boost.
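
To make the tradeoff concrete, here's a rough Python sketch of the layering I have in mind: an in-process dict in front of memcached, where a local hit skips the network round trip entirely and a local miss just falls through. The RemoteCache class is a plain dict standing in for a real memcached client, and the size/TTL numbers are made up for illustration, not benchmarks.

import time

class LocalCache:
    """Tiny in-process cache with a max size and per-entry expiry."""

    def __init__(self, max_items=10000, ttl=60):
        self.max_items = max_items
        self.ttl = ttl
        self.store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        if len(self.store) >= self.max_items:
            self.store.pop(next(iter(self.store)))  # crude eviction, good enough for a sketch
        self.store[key] = (value, time.time() + self.ttl)


class RemoteCache:
    """Stand-in for a real memcached client; a dict pretending to be the network tier."""

    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value


class TwoTierCache:
    """Check the local cache first; on a miss, fall through to the remote cache."""

    def __init__(self, local, remote):
        self.local = local
        self.remote = remote

    def get(self, key):
        value = self.local.get(key)      # fast path: no network hop
        if value is not None:
            return value
        value = self.remote.get(key)     # slow path: the normal memcached round trip
        if value is not None:
            self.local.set(key, value)   # warm the local tier for next time
        return value


cache = TwoTierCache(LocalCache(max_items=1000, ttl=30), RemoteCache())
cache.remote.set("user:42", "some row data")
print(cache.get("user:42"))   # remote hit, warms the local cache
print(cache.get("user:42"))   # local hit, no network round trip

The interesting number to benchmark is the local hit rate; the miss path costs roughly one dict lookup on top of the normal memcached round trip, which is why a cold local cache doesn't really hurt.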

> BigTable isn't really a distributed hash. It provides a complex data
> access API and is heavily oriented towards redundancy and failover.
> It's a closer cousin to MySQL Cluster than to Memcached.

Sort of... it's a cell/row-based mechanism, so you can view it as a map/dictionary. There's no SQL or sorting indexes, so I think you have to build that out on top. I've only had a chance to read about 80% of the paper, so this was an open question for me. I'm hoping to get the Bigtable guys to come to MySQL camp; I need to send them an email now.
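
For what it's worth, here's the toy model I have in my head after reading (most of) the paper: rows and columns as nested dictionaries holding timestamped cell versions, with any lookup-by-value (the kind of thing SQL indexes give you for free) maintained by the application on top. The class and method names are mine, not Bigtable's API, and the row keys are made-up examples.

import time
from collections import defaultdict

class ToyTable:
    """Toy 'map of maps' model: row key -> column -> timestamped versions."""

    def __init__(self):
        # row key -> column name -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))
        # hand-rolled secondary index: (column, value) -> set of row keys
        self.index = defaultdict(set)

    def put(self, row_key, column, value, ts=None):
        ts = ts if ts is not None else time.time()
        self.rows[row_key][column].insert(0, (ts, value))
        self.index[(column, value)].add(row_key)  # maintained by the client, not by the store

    def get(self, row_key, column):
        versions = self.rows[row_key][column]
        return versions[0][1] if versions else None  # newest version wins

    def find_rows(self, column, value):
        # lookup-by-value only works because we built the index ourselves
        return sorted(self.index[(column, value)])


table = ToyTable()
table.put("com.example/post/1", "anchor:text", "memcached")
table.put("com.example/post/2", "anchor:text", "bigtable")
print(table.find_rows("anchor:text", "bigtable"))  # -> ['com.example/post/2']

If I'm reading the paper right, rows are kept sorted by row key, so range scans by key come for free; it's lookups by cell value that you'd end up indexing yourself like this.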

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078