
Choosing Interconnects for Grid Databases?

cybrthng asks: "With all of the choices from Gig-E, 10 Gig-E, SCI/InfiniBand and other interconnects for grid database applications, which one is actually worth the money and which ones are overkill or underperforming? In a Real Application Cluster (RAC), latency can be an issue with cached memory and latches going over the interconnect. I don't want to recommend an architecture that doesn't achieve the desired results, but on the flip side, I don't necessarily want overkill. Sun has recommended SCI, Oracle has said Gig-E, and other vendors have said 10 Gig-E. Sales commissions seem to drive much of what people recommend, so I'm interested in any real-world experience you may have. Obviously, Gig-E is more affordable from a hardware perspective, but does this come at a cost in application availability and performance for end users? What have been your successes or failures with grid interconnects?"
  • Gigabit Ethernet (Score:1, Informative)

    by Anonymous Coward
    Switched gigabit Ethernet is going to offer the best performance. Gigabit Ethernet is also cheap. InfiniBand is too expensive and underperforms. Fibre Channel is way too expensive and is no faster than Gig-E. 10 Gbps Ethernet is only good on dedicated switches because a PC cannot drive it. Most PCs can't even drive 1 Gbps Ethernet.
  • Gig-E (Score:5, Informative)

    by tedhiltonhead ( 654502 ) on Wednesday October 12, 2005 @12:55PM (#13774496)
    We use Gig-E for our 2-node Oracle 9i RAC cluster. We have each NIC plugged into a different switch in our 3-switch fabric (which we'd have anyway). This way, if a switch or one of the node's interfaces dies, the other node's link doesn't drop. On Linux, the ethX interface sometimes disappears when the network link goes down, which can confuse Oracle. To my knowledge, this is the Oracle-preferred method for RAC interconnect.
    • On Linux, Oracle preaches Gig-E because it fits the affordable architecture; however, after calling both Oracle and Sun, they both run their core ERP systems on SCI-based systems :)

      So they don't necessarily practice what they preach - however, I guess they assume the scale of their customers is smaller (which kind of defeats the purpose of scoping a GRID environment). RAC on Gig-E - "GRID" conceptually ????
  • GigE (Score:5, Informative)

    by Yonder Way ( 603108 ) on Wednesday October 12, 2005 @01:03PM (#13774577)
    Until a couple of months ago I was the Sr Linux Cluster admin for the research division of a major pharma company. Our cluster did just fine with GigE interconnectivity, without bottlenecking.

    Make sure you tune your cluster subnet: adjust window sizes, utilize jumbo frames, etc. Just the jump from a 1500 MTU to jumbo frames made a HUUUUGE difference in performance, so spending a couple of days just tuning the network will make all the difference in the world (a small ping-pong test like the one sketched after this thread is an easy way to measure the before/after difference).
    • We do this already for our generic RAC systems running mostly for high availability and some of the clustering functionality.

      Our new platform will be the enterprise ERP suite and CRM, with hundreds of concurrent users and live transactions running in everything from order entry and the product configurator to processing invoices, taking support requests and all.
    • { excerpting from my own reply made in a different section of this article ... }

      There are many people posting here who are completely confusing what the word "cluster" means for this particular question.

      This article is about APPLICATION CLUSTERING (in this case a very specific relational database) and you are answering the question with information that is generalized to a COMPUTE FARM or a Linux cluster built and optimized for high performance computing.

      Broadly speaking the word "cluster" means different t
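
A minimal sketch (not from the original thread) of the kind of small-message latency check the tuning comments above point at: a TCP ping-pong between two nodes with TCP_NODELAY set, so small messages go out immediately. The port, message size and round count are placeholder values; run the server half on one node and the client half on the other, before and after adjusting MTU and window sizes, and compare the averages.

```python
# Hypothetical small-message TCP ping-pong for estimating interconnect
# round-trip latency between two cluster nodes.
# Usage: "python pingpong.py server" on one node,
#        "python pingpong.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201      # placeholder port
MSG_SIZE = 64    # small payload, roughly the size of a control message
ROUNDS = 10000

def server() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        data = conn.recv(MSG_SIZE)
        if not data:
            break
        conn.sendall(data)   # echo straight back

def client(host: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, PORT))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    payload = b"x" * MSG_SIZE
    start = time.perf_counter()
    for _ in range(ROUNDS):
        sock.sendall(payload)
        sock.recv(MSG_SIZE)
    elapsed = time.perf_counter() - start
    print(f"average round trip: {elapsed / ROUNDS * 1e6:.1f} usec over {ROUNDS} rounds")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

Small-message latency is what the question's RAC interconnect concern is about; bulk throughput, where jumbo frames help most, is a separate measurement, which is the latency-versus-throughput distinction other comments raise.
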
  • by TTK Ciar ( 698795 ) on Wednesday October 12, 2005 @01:11PM (#13774639) Homepage Journal

    In my own experience, fully switched Gig-E was sufficient for operating a high performance distributed database. The bottlenecks were at the level of tuning the filesystem and hard drive parameters, and memory pool sizes. But that was also a few years ago, when the machines were a lot less powerful than they are now (though hard drives have not improved their performance by all that much).

    Today, high-end machines have no trouble maxing out a single Gig-E interface, but unless you go with PCI-Express or similarly appropriate IO bus, they might not be able to take advantage of more. That caveat aside, if Gig-E proved insufficient for my application today, I would add one or two more Gig-E interfaces to each node. There is software (for Linux at least; not sure about other OS's) which allows for efficient load-balancing between multiple network interfaces. 10Gig-E is not really appropriate, imo, for node interconnect, because it needs to transmit very large packets to perform well. A good message-passing interface will cram multiple messages into each packet to maximize performance (for some definition of performance -- throughput vs latency), but as packet size increases you'll run into latency and scheduling issues. 10Gig-E is more appropriate for connecting Gig-E switches within a cluster.

    The clincher, though, is that this all depends on the details of your application. One user has already suggested you hire a professional network engineer to analyze your problem and come up with an appropriate solution. Without knowing more, it's quite possible that single Gig-E is best for you, or 10Gig-E, or Infiniband.

    If you're going to be frugal, or if you want to develop expertise in-house, then an alternative is to build a small network (say, eight machines) with single channel Gig-E, set up your software, and stress-test the hell out of it while looking for bottlenecks. After some parameter-tweaking it should be pretty obvious to you where your bottlenecks lie, and you can decide where to go from there. After experimentally settling on an interconnect, and having gotten some insights into the problem, you can build your "real" network of a hundred or however many machines. As you scale up, new problems will reveal themselves, so incorporating nodes a hundred at a time with stress-testing in between is probably a good idea.

    -- TTK

    • We have professionals on site, and I use that term loosely. Professionals who are proficient at proving out the concept of grid computing are few and far between.

      It comes down to latency and figuring out how that latency impacts real-world use, and the answer comes down to how much money you want to throw at the problem.

      My question is: is the latency of the interconnect really so large that the gain from cutting it with high-speed interconnects isn't lost in all of the other overheads in the system (such as local
      • Unless you're including the cost of installation and configuration in the SCI figures, I believe your numbers are pretty far off. I was looking at Dolphin SCI cards which could do roughly 8-10 Gb and remote DMA. The prices I was discussing were on the order of $1-1.5k per device (dual-ported cards, depending on volume) and about $5k for an 8-port switch (cables will add a pretty significant amount to these totals). (Check their price list at http://www.dolphinics.com/pdf/pricing/Price%20List%2020050701.pdf [dolphinics.com]). The
      • The obvious efficiency of SCI over GigE is no IP overhead, fewer CPU cycles, and a latency 1/10th that of the fastest GigE card.

        I'm assuming you're using MPI for message passing. Why can't you go with MPI directly over GigE without going over IP? There are MPI libraries that communicate directly with SCI, Myrinet, InfiniBand, etc., so why not GigE? I did a quick Google for such a library but didn't find any, so maybe it's part of standard MPI implementations already (a latency ping-pong like the one sketched after this thread would show how much the transport matters).

        If you're not using MPI, there must be a
        • We are running on Sun Cluster 3.1, so open-source MPI libraries won't cut it as far as remaining certified for support. Sun doesn't support MPI over GigE because of the latency and timing, which would cost more than IP over the same interface.

          On a Linux system that would be an interesting benchmark for proof of concept, but not something a vendor would support us on.

          This is for a large corporation, so supported platforms are critical. I don't believe Red Hat or SUSE have certified MPI over GigE in any form.
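
For anyone weighing the MPI-over-GigE question raised above, a minimal MPI ping-pong latency test; mpi4py is an assumption (the thread never names a specific MPI binding), and whatever transport the MPI library was built against (TCP over GigE, SCI, or InfiniBand) is what ends up being measured.

```python
# Minimal MPI ping-pong latency test.
# mpi4py is an assumption here; run with e.g. "mpirun -np 2 python pingpong_mpi.py".
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
ROUNDS = 10000
payload = bytearray(64)   # small message, latency-bound rather than bandwidth-bound

comm.Barrier()
start = time.perf_counter()
for _ in range(ROUNDS):
    if rank == 0:
        comm.Send(payload, dest=1, tag=0)
        comm.Recv(payload, source=1, tag=0)
    elif rank == 1:
        comm.Recv(payload, source=0, tag=0)
        comm.Send(payload, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"average round trip: {elapsed / ROUNDS * 1e6:.1f} usec over {ROUNDS} rounds")
```

Running the same script once per candidate interconnect gives a direct, apples-to-apples latency comparison, whatever the support situation for each stack turns out to be.
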
  • Multiple Networks (Score:5, Informative)

    by neomage86 ( 690331 ) on Wednesday October 12, 2005 @01:47PM (#13774926)
    I have worked with some bioinformatics clusters, and each machine was usually in two separate networks.

    One was a high-latency, high-bandwidth switched network (I recommend GigE since it has good price/performance) and one was a low-latency, low-bandwidth network just for passing messages between CPUs. The application should be able to pass off throughput-intensive stuff (file transfers and the like) to the high-latency network, and keep the low-latency network clear for inter-CPU communication.

    The low-latency network depends on your precise application. I've seen everything up to a hypercube topology with GigE (for example, with 16 machines in the grid, you need 4 GigE connections per computer for the hypercube; it always seemed to me that the routing in software would be really high latency, but people smarter than me tell me it's low latency, so it's worth looking into). Personally, I just use a 100 Mbit line with a hub (I tried a switch, but it actually introduced more latency at less than 10% saturation since few collisions were taking place anyway) for the low-latency connect. The 100 Mbit line is never close to saturated for my application, but it really depends on what you need.

    The big thing is to make sure your software is smart enough to understand what the two networks are for, and not try to pass a 5 GB file over your low-latency network (one simple way to do that is sketched after this thread). Oh, and I definitely agree: if you are dealing with more than $10k-20k, it's definitely worth it to find a consultant in that field to at least help with the design, if not the implementation.
    • There are many people posting here who are completely confusing what the word "cluster" means for this particular question.

      This article is about APPLICATION CLUSTERING (in this case a very specific relational database) and you are answering the question with information that is generalized to a COMPUTE FARM or a Linux cluster built and optimized for high performance computing.

      The two areas are completely distinct and have different "best practices" when it comes to network topology, configuration and interconnects.
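
One simple way (a sketch under assumed addressing, not something the poster describes) to keep bulk transfers off the low-latency network: bind the local end of each socket to this host's address on the network that traffic belongs to. The subnets, ports and peer addresses below are placeholders.

```python
# Sketch: steer traffic onto the intended network by binding each socket's
# local end to this host's address on that network. All addresses are placeholders.
import socket

BULK_LOCAL_ADDR = "192.168.1.10"     # this host on the high-bandwidth (GigE) network
CONTROL_LOCAL_ADDR = "192.168.2.10"  # this host on the low-latency network

def connect_via(local_addr: str, remote_host: str, remote_port: int) -> socket.socket:
    """Open a TCP connection whose source address pins it to one network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((local_addr, 0))       # choose the outgoing network by its local address
    sock.connect((remote_host, remote_port))
    return sock

if __name__ == "__main__":
    # Bulk file transfers go over the GigE network...
    bulk = connect_via(BULK_LOCAL_ADDR, "192.168.1.20", 9000)
    # ...while small coordination messages stay on the quiet, low-latency network.
    control = connect_via(CONTROL_LOCAL_ADDR, "192.168.2.20", 9001)
    control.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

This only works if both peers really are reachable on both subnets; the point is that the application, not the switch, decides which traffic uses which wire, which is the "software smart enough to understand the two networks" requirement in the parent comment.
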
  • I'm aware that it's feasible to have multiple SCSI cards in a SCSI chain; is there any way to do this with hardware RAID cards? It would be a great boon to reliability and cost were this feasible for me. Systems fail, but RAID-5 and RAID-6 arrays far less so.

    Myren
    • I ask because it's either this or using DRBD [drbd.org] to replicate the entire file system over multiple GigE lines while having to use twice the number of hard drives. I'd much prefer to avoid these interconnections altogether and simply have SCSI itself be the common communication bus, at least such that either controller can access the RAID array should the other fail. I'm not /totally/ OT. I was looking at 4x GigE or 10 Gb solutions, which would have pounded CPU usage to death... I'd much rather just make sure I can
      • Most major SCSI card/storage array vendors have SCSI cluster support. The easy big names are Compaq/HP and Dell.

        Most of the time they are used in Hot/Cold clusters. It's easiest to manage. Hot/Hot is possible, but you need to make sure your applications know how to handle it properly.

        It works by each of your servers having a SCSI/RAID controller card. They connect to a shared backplane in some sort of storage array (like a PowerVault). Make sure your backplane isn't set to 'split' mode, in which case each serv
        • Excellent, thanks so very much; the Hot/Cold setup is exactly what I need to save myself some preposterous high-availability pains.

          Will it be possible to find this sort of support in "consumer" RAID gear? I'm looking at the Intel SRCU42X and LSI MegaRAID 320-2e (both $700) and don't know how it would work. The main thing I don't understand is what form the utilities for these kinds of cards will be in... how arrays are set up, how hotswap is performed, and all... I doubt the firmware interfaces are open enough to
  • GigE with jumbo frames is the way to go

    and don't skimp on switches. Get something from the Extreme Networks Summit range if possible
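
Jumbo frames only pay off if every NIC and switch port along the path actually carries the larger MTU, so it's worth verifying on each node. Here is a small Linux-only sketch (the SIOCGIFMTU ioctl is standard on Linux; the interface name is a placeholder for whichever port carries the interconnect):

```python
# Read an interface's MTU on Linux via the SIOCGIFMTU ioctl.
# "eth1" below is a placeholder for the interconnect interface.
import fcntl
import socket
import struct

SIOCGIFMTU = 0x8921  # Linux ioctl request: get interface MTU

def get_mtu(ifname: str) -> int:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        ifreq = struct.pack("256s", ifname.encode()[:15])
        result = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)
        # struct ifreq: 16-byte interface name followed by the MTU as a C int
        return struct.unpack("16si", result[:20])[1]
    finally:
        s.close()

if __name__ == "__main__":
    mtu = get_mtu("eth1")
    print(f"eth1 MTU = {mtu}")
    if mtu < 9000:
        print("jumbo frames are not enabled on this interface")
```

Switches generally need jumbo frames enabled explicitly as well, so check the switch configuration, not just the hosts.
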
  • We have a 2-node RAC in place with plans to add two additional nodes over the next few weeks. We're using GigE and it's working perfectly. Unfortunately, only two Gigabit Ethernet ports were spec'd originally, so we're also adding a third so we can have a redundant interconnect, heh.
    --
    Barebones computer reviews [baremetalbits.com]
