Already added :)

I think I'm going to try to ping the Google/Bigtable guys to see if they can come. I'm also probably going to move it to Friday, which would make it easier for them...

Kevin

On 11/2/06, Jay Pipes <jay@mysql.com> wrote:
> Hi all!
>
> Not sure if it's been mentioned yet, but this would be an excellent
> session to discuss at MySQL Camp this coming week. Kevin, feel like
> putting together a few slides (to get the ideas going) and putting the
> session up on the grid?
>
> Cheers,
>
> Jay
>
> On Thu, 2006-11-02 at 15:38 -0500, Perrin Harkins wrote:
> > On Wed, 2006-11-01 at 14:53 -0800, Kevin Burton wrote:
> >
> > > Yes... but if the data isn't in the local cache it won't really
> > > slow down the system very much, and for certain types of applications
> > > the speedup might be significant. Having benchmarks of the local
> > > cache is important to figure out if it's contributing to a
> > > performance boost.
> >
> > The gist of it is that things that are fetched from the local cache will
> > be faster, things that are fetched from memcached will be slower (due to
> > looking in the local cache first), things that are fetched from the disk
> > cache will be faster (since they aren't coming from the database), and
> > things that come from the database will be slower, since they have to
> > wait for three cache fetches. Updates and inserts will all be much
> > slower since they have to be written to four places.
> >
> > I think that a better approach is to just use multiple caches for
> > different things, rather than the sort of hierarchical approach you
> > suggest. If you designate certain types of data for each cache and
> > don't write the same data to multiple ones, you avoid the mess of
> > talking to multiple caches for each get or set.
> >
> > > BigTable isn't really a distributed hash. It provides a complex data
> > > access API and is heavily oriented towards redundancy and failover.
> > > It's a closer cousin to MySQL Cluster than to Memcached.
> > >
> > > Sort of... it's a cell/row-based mechanism, so you can view it as a
> > > map/dictionary. There's no SQL or sorting indexes, so I think you have
> > > to build that out on top...
> >
> > The Wikipedia summary isn't bad: http://en.wikipedia.org/wiki/BigTable
> > It stores data in sorted order by row key. They have a custom language
> > for querying, called Sawzall. The biggest difference, though, is the
> > emphasis on redundancy. With memcached, the more servers you add to a
> > cluster, the more likely you are to experience data loss (more servers +
> > no redundancy = more failures), while BigTable works very hard to avoid
> > this by using multiple copies, commit logs, etc.
> >
> > - Perrin
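
For what it's worth, here is a rough, runnable sketch of the tiered scheme Perrin is costing out above (local cache -> memcached -> disk cache -> database). The four layers are plain Python dicts purely as stand-ins for the real clients, so the "written to four places" cost is easy to see:

    # A minimal sketch of the stacked lookup: reads fall through
    # local cache -> memcached -> disk cache -> database, and every
    # write has to touch all four places. Dicts are stand-ins; in a
    # real setup these would be an in-process dict, a memcached
    # client, a disk-backed cache, and MySQL.

    class TieredCache:
        def __init__(self):
            self.local = {}      # in-process cache, cheapest to hit
            self.memcached = {}  # shared cache, one network round trip
            self.disk = {}       # disk-backed cache
            self.database = {}   # authoritative store, slowest

        def get(self, key):
            # Best case: a local hit costs a single dictionary lookup.
            if key in self.local:
                return self.local[key]
            # Each miss adds the cost of the next layer down, which is
            # why anything that ultimately comes from memcached, the
            # disk cache, or the database is slower than it would be
            # without the extra tiers in front of it.
            for layer in (self.memcached, self.disk, self.database):
                if key in layer:
                    value = layer[key]
                    self.local[key] = value  # promote into the local cache
                    return value
            return None

        def set(self, key, value):
            # Updates and inserts are the expensive part: every one of
            # the four places has to be written.
            for layer in (self.local, self.memcached,
                          self.disk, self.database):
                layer[key] = value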
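
And the same kind of sketch for the alternative Perrin suggests: one designated cache per kind of data, so a single get or set never talks to more than one cache. The category names and dict-backed caches here are invented for illustration:

    # Designate one cache per kind of data instead of stacking them.
    class PartitionedCaches:
        def __init__(self, caches):
            # e.g. {"session": <in-process dict>, "page": <shared cache>}
            self.caches = caches

        def get(self, category, key):
            # Exactly one lookup in exactly one cache -- no fall-through chain.
            return self.caches[category].get(key)

        def set(self, category, key, value):
            # Likewise, a write lands in a single place.
            self.caches[category][key] = value

    # Usage: session data lives only in one cache, rendered pages only in
    # the other; neither reads nor writes are duplicated across caches.
    caches = PartitionedCaches({"session": {}, "page": {}})
    caches.set("session", "user:42", {"last_seen": "2006-11-02"})
    caches.set("page", "/index.html", "<html>...</html>")
    assert caches.get("page", "/index.html").startswith("<html>")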
--
Founder/CEO, Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078