How does an UltraSPARC IV+ compare to Intel

Solution 1

SPARC IV processors enjoy a supercomputer-like memory bus that was only approached or equaled with the introduction of the Core i7. As a result they are good at churning through large quantities of data (think databases, large analyses) at a consistent rate. Actual Dhrystone benchmarks usually trail Intel.
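
As a rough illustration (mine, not anything from the original answer), here is a minimal sketch of the kind of memory-bandwidth-bound loop being described: a STREAM-style "triad" in Java. The class name, array sizes and pass count are arbitrary assumptions; the figure it prints simply reflects whatever machine and JVM you run it on.

```java
// Hedged sketch of a memory-bandwidth-bound kernel (STREAM-style triad).
// Sizes are illustrative assumptions, chosen to be far larger than any cache.
public class TriadSketch {
    public static void main(String[] args) {
        final int n = 1 << 23;                    // ~8M doubles, ~64 MB per array
        double[] a = new double[n], b = new double[n], c = new double[n];
        java.util.Arrays.fill(b, 1.0);
        java.util.Arrays.fill(c, 2.0);

        final int passes = 20;
        long start = System.nanoTime();
        for (int p = 0; p < passes; p++) {
            for (int i = 0; i < n; i++) {
                a[i] = b[i] + 3.0 * c[i];         // streams three large arrays per pass
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double gigabytes = passes * 3.0 * 8.0 * n / 1e9;   // 3 arrays x 8 bytes touched per pass
        System.out.printf("sustained rate: %.2f GB/s (checksum %.1f)%n",
                gigabytes / seconds, a[n - 1]);
    }
}
```

On a memory-starved system this number barely moves as you add cores, which is the sense in which a wide memory bus, rather than clock speed, sets the pace for this class of workload.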

UltraSPARC T1 processors plow through heavily threaded parallel workloads (think Java enterprise apps) well, thanks to their many cores and register windows, but they don't have nearly as much floating-point power as SPARC IV or Intel, since all eight cores share a single floating-point unit. UltraSPARC T2/T2 Plus processors have one FP unit per core.
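
To make the shared-FPU point concrete, here is a hedged sketch (again mine, not the answerer's) that runs the same style of tight loop in an integer and a floating-point flavour across a growing number of threads. On a chip whose cores all share one FP unit you would expect the floating-point column to stop scaling long before the integer one; the thread counts and iteration count here are arbitrary.

```java
// Hedged sketch: compare how integer-only and floating-point-only worker
// threads scale. Thread counts and iteration counts are illustrative only.
import java.util.*;
import java.util.concurrent.*;

public class IntVsFpScaling {
    // Dependent chain of integer multiply/adds (a 64-bit LCG).
    static long intWork(long iters) {
        long x = 1;
        for (long i = 0; i < iters; i++) x = x * 6364136223846793005L + 1442695040888963407L;
        return x;
    }

    // The same loop shape, but kept on the floating-point unit.
    static double fpWork(long iters) {
        double x = 1.0;
        for (long i = 0; i < iters; i++) x = x * 1.0000001 + 1e-9;
        return x;
    }

    // Run `threads` copies of the chosen kernel concurrently; return wall time in seconds.
    static double run(int threads, boolean fp, long iters) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> futures = new ArrayList<>();
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            futures.add(pool.submit(() -> fp ? fpWork(iters) : intWork(iters)));
        }
        for (Future<?> f : futures) f.get();
        pool.shutdown();
        return (System.nanoTime() - start) / 1e9;
    }

    public static void main(String[] args) throws Exception {
        long iters = 200_000_000L;
        for (int threads : new int[] {1, 2, 4, 8, 16, 32}) {
            System.out.printf("%2d threads  integer: %5.2fs  floating point: %5.2fs%n",
                    threads, run(threads, false, iters), run(threads, true, iters));
        }
    }
}
```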

Intel processors have tended to be better at satisfying the high interrupt rate of an individual GUI user (go figure), and have very good floating point as well. Recently the 4 GHz boundary has forced Intel to go wide instead of deep, so the latest introductions have had more cores and more memory bandwidth but fewer advances in individual thread speed.

In other words, they are getting more Sparc-IV like. The latest Core i7 Xeons with both HT and multi-core are getting to be very much like a SPARC, though they don't have as many cores as a SPARC chip has register windows.

Almost all of these processors are now I/O bound by disk access time, even with RAID involved. RAID usually increases latency on small accesses while greatly increasing throughput on long sequential accesses.
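
As a rough way to see that trade-off for yourself, here is a hedged sketch that times small random reads against one long sequential pass over the same file. The file path, read sizes and counts are placeholders I've made up; you would want a file larger than RAM (and a cold page cache) before drawing any conclusions from the numbers.

```java
// Hedged sketch: small random reads (latency) vs. one sequential pass (throughput).
// The default path below is a placeholder; pass your own file as the first argument.
import java.io.RandomAccessFile;
import java.util.Random;

public class RandomVsSequentialReads {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "/tmp/bigfile.bin";   // placeholder path
        final int reads = 1000;
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long length = f.length();
            byte[] small = new byte[4 * 1024];                          // 4 KB random reads
            byte[] big = new byte[1024 * 1024];                         // 1 MB sequential chunks
            Random rnd = new Random(42);

            long t0 = System.nanoTime();
            for (int i = 0; i < reads; i++) {
                f.seek((long) (rnd.nextDouble() * (length - small.length)));
                f.readFully(small);
            }
            double avgLatencyMs = (System.nanoTime() - t0) / 1e6 / reads;

            f.seek(0);
            long done = 0;
            long t1 = System.nanoTime();
            while (done + big.length <= length) {
                f.readFully(big);
                done += big.length;
            }
            double seqSecs = (System.nanoTime() - t1) / 1e9;

            System.out.printf("random 4 KB reads: %.2f ms average latency%n", avgLatencyMs);
            System.out.printf("sequential read:   %.1f MB/s%n", done / 1e6 / seqSecs);
        }
    }
}
```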

Solution 2

On our scientific workload, which, to be honest, has a lot of integer and I/O work, we see roughly a factor of 2 in favor of x86 systems over SPARC systems at equal CPU clock speeds.

Now, some of that, no doubt, is the compiler. GCC does a fairly good job of generating code for x86 compared with the Sun C compilers on SPARC.

In my personal usage on home systems I see just about the same speedup for CPU-intensive tasks.

Your mileage, of course, may vary greatly. I liked SPARC systems; it is sad to see Sun slowly die.

Solution 3

I tend to run various CPU-bound Java application servers. So, to benchmark, I use the DaCapo benchmarking suite with the lusearch workload. In my testing, UltraSPARCs consistently suck compared to any x86-based CPU.

Consider this - at my work, we have a relatively recent SPARC Enterprise T2000. It runs the above Java benchmark in about the same time as my 3.5-year-old Intel Core Duo MacBook Pro. That's an old two-core laptop on par with a new 16-core server.

Comparing a 16-core x86 machine with a 16-core SPARC shows the x86-based one to be about 20 times faster. Yes, one can complain about problems with benchmarks, but for me they correlate well with the performance I see in my actual app servers, so I find them useful.
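
This is not DaCapo and not the poster's setup, just a hedged stand-in for that kind of comparison: a fixed amount of CPU-bound work split across a configurable number of threads and timed end to end on each machine. The hashing loop, iteration count and class name are arbitrary assumptions standing in for the lusearch workload.

```java
// Hedged sketch: fixed total work divided over N threads, timed wall-clock.
// Run it with the same arguments on both machines and compare the times.
import java.util.*;
import java.util.concurrent.*;

public class FixedWorkBenchmark {
    // A dependent chain of integer mixing steps so the JIT cannot skip the work.
    static long chew(long iters) {
        long h = 0x9E3779B97F4A7C15L;
        for (long i = 0; i < iters; i++) {
            h ^= i;
            h *= 0xC2B2AE3D27D4EB4FL;
            h ^= h >>> 31;
        }
        return h;
    }

    public static void main(String[] args) throws Exception {
        final long totalIters = 4_000_000_000L;      // same total work on every machine
        final int threads = args.length > 0 ? Integer.parseInt(args[0])
                                            : Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        long start = System.nanoTime();
        List<Future<Long>> parts = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            parts.add(pool.submit(() -> chew(totalIters / threads)));
        }
        long checksum = 0;
        for (Future<Long> p : parts) checksum ^= p.get();
        pool.shutdown();

        System.out.printf("%d threads: %.2f s wall clock (checksum %x)%n",
                threads, (System.nanoTime() - start) / 1e9, checksum);
    }
}
```

A many-thread machine with slow individual cores only wins this kind of test when the thread count is high enough to cover its per-thread deficit, which matches the laptop-versus-T2000 observation above.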

Solution 4

It's really hard to get a valid comparison for general use; you'll have to look at specific applications, and system design greatly impacts it too. I can tell you that when the UltraSPARC IV+ was new, the comparison against the then-new Intel chips went roughly like this (keep in mind that the IV+ version of SPARC is quite dated now):

  • SETI@Home (and other FP-heavy stuff) on SPARC was much faster.
  • Heavily loaded multitasking felt smoother on the SPARC system (a function of I/O?).
  • Simple apps/screen updates were faster on Intel.

Solution 5

As others have noted, this is not an apples-to-apples comparison. The UltraSPARC is highly multithreaded, so it will definitely fare well on certain types of workloads, while the Intel processors have nothing close to the same level of concurrency.

Comments

  • BIBD almost 2 years

    I'm trying to compare a 2.1 GHz UltraSPARC IV+ to something I'm more familiar with, and Intel seems to be the benchmark.

    What would be a comparable processor on the Intel side?

    • Michael Stum almost 15 years
      As they are different architectures, maybe you could state the desired usage, as this may have a high impact on speed? Database? Web Server? Scientific Calculations?
    • esm almost 15 years
      Agreed: without knowing the target workload, comparisons are meaningless. Heavily concurrent or single-threaded workloads? Processing or I/O bound? You need to understand what you intend to do with a given architecture before building it.
  • DisabledLeopard almost 15 years
    Totally agree here - on I/O- and FP-heavy tasks the SPARCs will just plough through steadily - throw a similar workload at an equivalent Intel server and watch it basically just stop responding
  • Brian Knoblauch almost 15 years
    Aren't some of the IBM POWER chips running 5 GHz now? How'd they get up there when no one else seems able to?
  • kmarsh almost 15 years
    It's Intel's 4GHz barrier, not IBM's. :)
  • polyglot over 14 years
    Informative - but why are apps going to be I/O bound by disk access time? Unless your DB is in the terabyte region you are likely to be able to stage it nearly all in memory, and any disk flush waits will yield to other threads/processes which are runnable.
  • kmarsh over 14 years
    Exactly - while the application is blocking on that I/O, other threads or processes will be running - meaning that process (or thread) is bound by disk access time.
  • James A Mohler over 11 years
    Links to benchmarks might back up your opinion.
  • Michael Hampton over 10 years
    You realize, of course, that it was current at the time this question was asked?
  • Andrew B about 10 years
    If this were an objective fact, 1-2 failures a year for every Intel CPU in my infrastructure would require my team to be grown by at least a third. Please refrain from contributing answers like this to questions that are approaching their five-year anniversary.
  • CMDody about 10 years
    SPARC CPUs and other RISC processors (like Intel Itanium, IBM POWER) are referred to as mission-critical CPUs; if you google "mission critical server" you will see... On the other hand, WLF (work load factor) is an important criterion for server failure; system failures per year are directly proportional to WLF. If you are operating servers with a low WLF, you can run them for 2 or 3 years without failure, but in that case your servers are dissipating energy... If you are operating servers with a high WLF, you may have 1 or 2 system failures per year...
  • Andrew B about 10 years
    Those would be great details to edit into your answer. As it stands, it states that x86 processors fail 1-2 times a year as an unqualified statement. (And we have many clusters with consistent load which would beg to differ.)