Thursday, May 10, 2007

Where cluster needs to be

I'm really excited about MySQL Cluster. It has a lot of potential to be a good competitor to Oracle RAC, and not just in a copy-cat kind of way.

The thing that makes MySQL Cluster different, and a good alternative when you're fortunate enough to have the choice, is its "shared-nothing" architecture. In RAC (and I'm no expert), you use shared storage, usually a NetApp filer. While filers are seriously cool, they are also seriously expensive, and, no matter how much you gussy it up, shared storage is still a single point of failure.

I was glad I sat in on the Intro to Cluster talk at the Users Conference; it helped me understand what Cluster has to offer, and what it still needs. These needs probably aren't any big secret to the Cluster developers; they talked about them openly and made them very clear. Others are my (admittedly limited) opinions, based only on what I heard at the conference and not on actually using MySQL Cluster (yet).


  1. Dynamically adding and removing Storage nodes

    Doing a lot of queries across a lot of storage nodes is fine, but until I can dynamically add and remove nodes without cluster downtime, I can't easily recover from failures, I can't add capacity, and I can't get my data to re-balance itself across the nodes.
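As I understand it (this is my reading of the 5.1 docs, not something from the talk), the data nodes are declared statically in the cluster's config.ini, which is exactly why adding one means editing the file and restarting the whole system. Roughly (hostnames here are made up):

```ini
# config.ini sketch: data nodes are fixed at configuration time,
# so growing the cluster requires editing this file and a restart.
[ndbd default]
NoOfReplicas=2               # two copies of every table fragment

[ndb_mgmd]
HostName=mgm.example.com     # hypothetical management node

[ndbd]
HostName=data1.example.com   # hypothetical data node

[ndbd]
HostName=data2.example.com   # hypothetical data node

[mysqld]
HostName=sql1.example.com    # hypothetical SQL node
```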

  2. I need to scale bigger than physical memory

    I know 5.1 has disk-based storage for non-indexed data, but I need to scale my indexes beyond RAM too. RAM is too expensive to scale this way forever. For the workloads Cluster ought to be useful for, like a terabyte or two of data, I'd probably have a hard time fitting all my indexes in physical memory.
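For reference, this is the 5.1 disk-data syntax as I understand it (the object names and sizes below are made up; the point is that only unindexed columns go to disk, so indexed columns still eat RAM):

```sql
-- Sketch of MySQL 5.1 disk-based NDB tables (names/sizes are hypothetical).
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 512M
    ENGINE NDB;

CREATE TABLESPACE ts1
    ADD DATAFILE 'data1.dat'
    USE LOGFILE GROUP lg1
    INITIAL_SIZE 1G
    ENGINE NDB;

CREATE TABLE t1 (
    id      INT PRIMARY KEY,  -- indexed: must stay in memory
    payload BLOB              -- unindexed: can be stored on disk
) TABLESPACE ts1 STORAGE DISK ENGINE NDB;
```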

  3. Replicating a cluster

    Talk about SPoFs all you want, but until you consider your colocation facility a point of failure, you're still vulnerable. The replication in 5.1 for Cluster is good in some ways, but I can't help feeling it's a little hacky. Passing the replication data back (upstream) from the data nodes into the SQL nodes just doesn't feel right somehow.

    I do like the epoch concept; I hope that makes it into generic replication and isn't just NDB-specific forever.

    From what I heard, even with single-threaded replication, Cluster is so fast that it can scale quite a ways anyway. With clever partitioning, multiple replication threads can be used, but that wouldn't be easy to implement in the general case.
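To illustrate the epoch idea: my understanding of the 5.1 channel-failover procedure is that the slave records the last epoch it applied, and you use that epoch to find the matching binlog position on a standby master. Roughly (a sketch from the manual, not something I've run myself):

```sql
-- On the slave: find the most recent epoch applied from the old channel.
SELECT MAX(epoch) FROM mysql.ndb_apply_status;

-- On the standby master: plug that epoch in to find the binlog
-- file and position just past it.
SELECT SUBSTRING_INDEX(File, '/', -1) AS log_file, Position AS log_pos
    FROM mysql.ndb_binlog_index
    WHERE epoch > @latest   -- @latest = the MAX(epoch) found above
    ORDER BY epoch ASC LIMIT 1;

-- Then CHANGE MASTER TO that file/position and START SLAVE.
```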

  4. So how do I backup this thing?

    So I know data nodes can be redundant, and I know I can replicate to another colo. But how can I get the combined data from all of a cluster's data nodes onto a single backup medium in a consistent state? Maybe I missed this at the conference, but is backing up your binary log enough? That's not how I would back up any other MySQL server.

    NetApps may be expensive, but being able to take a consistent snapshot of all of your data comes in pretty handy.



All of that aside, I'm still looking forward to sitting down and playing with NDB.

2 comments:

Stewart said...

issuing START BACKUP in the management client will create a consistent backup of the cluster.

the use of multiple replication channels is currently only for redundancy, not additional performance.
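For reference, issuing that from the management client looks roughly like this (a sketch; it obviously requires a running cluster, and the backup files end up in each data node's BackupDataDir):

```
# non-interactively, from the shell:
ndb_mgm -e "START BACKUP WAIT COMPLETED"
```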

Jay Janssen said...

Whoops! I misspelled Oracle RAC as RAQ. I used to work at an ISP that used Sun Cobalt RaQ servers, so I always think of spelling it that way.


In reply to Stewart's comment regarding replication channels: I was referring to the idea of actually setting up two masters and slaves between two clusters and replicating separate databases or sets of tables down each channel. Not good for consistency, but if you knew it was safe, it would work.