Why aren't all CPUs 'overclocked' as a factory default?

Solution 1

First of all, not all CPUs are capable of overclocking; many have fixed or range-limited multipliers. This is deliberate on the industry's part: hardware vendors are happy to sell CPUs and peripheral hardware with more freedom at higher prices. Dedicated 'overclockers' seem willing to pay almost anything if it lets them double the factory defaults.

Secondly, it's a cooling and efficiency problem. Energy consumption doesn't scale linearly with frequency, and neither does actual performance (especially since, with faster CPUs, other system components quickly become bottlenecks).
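
To see why: dynamic CMOS power scales roughly as P = C·V²·f, and higher clocks usually require a voltage bump on top, so heat grows much faster than speed. A minimal Python sketch with made-up clock and voltage figures:

```python
# Dynamic CMOS power scales roughly as P = C * V^2 * f; overclocking
# usually also needs more voltage, so power (and heat) rises much faster
# than clock speed. All numbers below are invented for illustration.

def dynamic_power(freq_ghz, volts):
    """Relative dynamic power, P ~ V^2 * f (arbitrary units)."""
    return volts ** 2 * freq_ghz

stock = dynamic_power(2.67, 1.10)  # hypothetical stock clock and voltage
oc = dynamic_power(4.00, 1.35)     # hypothetical overclocked settings

print(f"clock: +{4.00 / 2.67 - 1:.0%}")  # clock: +50%
print(f"power: +{oc / stock - 1:.0%}")   # power: +126%
```

A 50% clock gain costing more than double the power is exactly the kind of trade-off a vendor won't ship as a default.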

With overclocked CPUs there is also strong variance in durability and lifetime, even within a single manufacturing series. The frequency at which they're sold is one at which every unit of the series is known to run stably, regardless of individual differences. One CPU from a series may fail quickly when overclocked while another may run stably at 4+ GHz.
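
To make that worst-case rating concrete, here is a toy sketch (all numbers invented; real qualification also covers voltage and temperature corners):

```python
# Toy model: the maximum stable frequency varies from unit to unit within
# one series, so the rated clock is set where *every* unit passes.
import random

random.seed(1)
units = [random.gauss(3.6, 0.25) for _ in range(1000)]  # max stable GHz per unit

rated = min(units)  # the whole series must run stably at the rated clock
print(f"worst unit: {min(units):.2f} GHz, best unit: {max(units):.2f} GHz")
print(f"rated speed for the series: at most {rated:.2f} GHz")
# Most units have headroom above the rated speed -- that headroom is what
# overclockers chase, with no per-unit guarantee.
```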

Solution 2

CPU Binning is relevant here:

http://en.wikipedia.org/wiki/Product_binning

Semiconductor manufacturing is an imprecise process, with some estimates as low as 30% for yields. Defects in manufacturing are not always fatal, however. In many cases, it is possible to salvage a part by trading off performance characteristics, such as by reducing its clock frequency or by disabling non-critical parts that are defective. Rather than simply discarding these products, they can be marked down to a lower performance level and sold at a lower price, fulfilling the needs of lower-end market segments.

This practice occurs throughout the semiconductor industry; central processing units, computer memory, and graphics processors are all binned this way.
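
As a rough illustration of how binning might sort one production run into SKUs (thresholds and yield spread invented for the example):

```python
# Toy illustration of product binning: test each die's maximum stable
# frequency and sort it into a price/speed bin.
import random

random.seed(0)
dies = [random.gauss(3.0, 0.3) for _ in range(10000)]  # max stable GHz per die

bins = {"3.33 GHz SKU": 0, "2.93 GHz SKU": 0, "2.67 GHz SKU": 0, "salvage/scrap": 0}
for fmax in dies:
    if fmax >= 3.33:
        bins["3.33 GHz SKU"] += 1
    elif fmax >= 2.93:
        bins["2.93 GHz SKU"] += 1
    elif fmax >= 2.67:
        bins["2.67 GHz SKU"] += 1
    else:
        bins["salvage/scrap"] += 1  # downclock further or disable parts

for sku, n in bins.items():
    print(f"{sku}: {n / len(dies):.1%}")
```

The point is that a "slow" chip and a "fast" chip often come off the same line; the label reflects where the individual die landed in testing.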

Solution 3

As well as the tolerance and MTBF reasons already posted, there is another reason.

(Please bear with me as I have not kept up with hardware for a very long time.)

The cost for Intel to build a fabrication plant that can produce a specific chip is a very large fixed cost. The cost to make a single processor once the plant is built is very, very small.

There is an economic advantage to making the same die for a whole series of chips and then locking the chips at different multipliers for product differentiation and pricing. That way, all the chips come out of the same plant instead of needing a separate plant for each speed grade. If you want to buy a low-end chip, the economical way for Intel to sell it to you is often a mid-range chip configured to run at a lower frequency.
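
A toy sketch of that differentiation-by-multiplier (SKU names and numbers are hypothetical; a 100 MHz base clock is typical of recent Intel platforms, everything else is illustrative):

```python
# Toy model of multiplier locking: the same die is fused to different
# multipliers to create different SKUs at different price points.
BASE_CLOCK_MHZ = 100

skus = [
    ("budget chip", 26, False),     # locked multiplier
    ("mid-range chip", 33, False),  # locked multiplier
    ("enthusiast chip", 40, True),  # unlocked multiplier
]

for name, multiplier, unlocked in skus:
    ghz = BASE_CLOCK_MHZ * multiplier / 1000
    lock = "unlocked" if unlocked else "locked"
    print(f"{name}: {ghz:.1f} GHz ({lock} multiplier, same die)")
```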

You will see this in other markets as well, wherever manufacturing requires a high initial fixed cost and a very low marginal cost. Many major-brand aluminum bicycles, for example, are made in the same few factories, by the same robots.

Solution 4

Because in many cases, overclocking results in a reduced lifespan and a lot more heat.

Some processors are sold as overclockable, like AMD's Black Edition (which has an unlocked multiplier) and Intel's Extreme Edition.

Solution 5

It is the difference between recommended speed and possible speed.

Manufacturers can't make a processor that maxes out at exactly its rated speed; each chip is built with headroom above that, but you don't know where the upper limit is until you cross it.

Not to mention the extra heat that may be produced, which the system is not built to handle; hence the need for extra cooling when overclocking too far.
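
A back-of-the-envelope way to see the cooling problem: steady-state die temperature is roughly T_die = T_ambient + P·R_th, where R_th is the cooler's thermal resistance in °C per watt. A sketch with assumed numbers:

```python
# Back-of-the-envelope steady-state thermals: T_die = T_ambient + P * R,
# where R is the cooler's thermal resistance in degrees C per watt.
# All figures are assumed for illustration.

def die_temp_c(ambient_c, power_w, r_c_per_w):
    return ambient_c + power_w * r_c_per_w

STOCK_COOLER_R = 0.50  # hypothetical stock heatsink
TOWER_COOLER_R = 0.25  # hypothetical aftermarket cooler

print(die_temp_c(25, 95, STOCK_COOLER_R))   # stock power: 72.5 C, fine
print(die_temp_c(25, 160, STOCK_COOLER_R))  # overclocked: 105.0 C, too hot
print(die_temp_c(25, 160, TOWER_COOLER_R))  # better cooler: 65.0 C
```

The stock cooler is sized for stock power; the overclocked power budget simply exceeds what it can dissipate.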

Comments

  • Matt Phillips
    Matt Phillips over 1 year

    Possible Duplicate:
    Why are modern CPUs “underclocked”?

    When I was searching around for a desktop a while back, I came across a lot of discussions where techies talked about taking a 2.67 GHz processor, say, and 'overclocking' it so that it ran at 4 GHz. If a CPU is capable of such speeds at all, why doesn't it come that way out of the box?

    • hotpaw2
      hotpaw2 about 13 years
      Look up manufacturing tolerances and the statistics of normal variation. Nothing is average. Especially over worst-case temperature and voltage ranges.
    • uxout
      uxout about 13 years
      My car has a top speed of 150mph. Why do I have to drive 65?
    • Mr.Wizard
      Mr.Wizard about 13 years
      @Shinrai that seems to me a poor analogy.
    • Sirex
      Sirex about 13 years
      As an aside: how exactly do you "overclock" something by default? The factory set the default to begin with.
  • Wuffers
    Wuffers about 13 years
    You don't have to have an upgraded cooling system. But it sure helps.
  • Toybuilder
    Toybuilder about 13 years
    Same thing happens with engines, too. When car engines are meticulously crafted to their design ("blueprinting"), they perform far better than the same design made on the production line with far more generous tolerances. Unlike car engines, CPUs generally come out much closer to their ideal design, yielding more capable processors that can withstand overclocking.
  • Dave Jacoby
    Dave Jacoby about 13 years
    I'd like to hit the cooling and efficiency topic here. I know people who run large compute clusters, and in summer months, sometimes their cooling doesn't keep up. They worked out a way to underclock their servers in software, so that when their systems can't handle the heat, they can drastically scale back the power being used and thus the heat being generated, rather than having to shut down machines that are running jobs that, even with modern clusters with multiple cores, can take several months to run. Amping up the processors to get extra cycles doesn't make sense there.
  • Dave Jacoby
    Dave Jacoby about 13 years
    Similarly, I'm on a netbook right now; for the uses I give it, it doesn't need all the processor speed, and running cooler would mean running better. So again, ramping the clock down, not up, makes more sense.
  • SplinterReality
    SplinterReality about 13 years
    I've read that Intel does this for the "Core" series of processors. They manufacture the Core processors as two cores on a single die. If one of the cores tests bad, they disable it and sell the die as a Core Solo. Two viable Core Duo chips are sealed in the same enclosure to make a Core Extreme. In this way, Intel is salvaging their otherwise defective stock, and ensuring that the defect rate for Core Extreme chips is effectively zero.
  • Jeff Atwood
    Jeff Atwood about 13 years
    @charles I'm not sure that is true for the latest Intel CPUs, however. The Core 2 series was not true multi-core, but multiple chips on the same die, whereas Core i3, i5, i7 etc. are all true multi-core designs. See extremetech.com/article2/0,2845,2049688,00.asp
  • SplinterReality
    SplinterReality about 13 years
    @VarLogRant you've just described Dynamic Frequency Scaling, (en.wikipedia.org/wiki/Dynamic_frequency_scaling) which many modern CPUs, especially for mobile devices, do. Power consumption is effectively a function of clock speed (since CMOS circuitry consumes very little power when in a static state) and so for mobile chips this is an invaluable tool to conserve power. (A minimal sketch of inspecting this follows after the comments.)
  • Mr.Wizard
    Mr.Wizard about 13 years
    I am curious about the bicycle note. Can you give more information?
  • dkxox
    dkxox about 13 years
    @Jeff - sure about the Core 2 series? I thought it was the Core processors (without the 2) that were basically two P4s glued together, and in practice were normally both slower and hotter than a single-core P4. The "2" in "Core 2" doesn't mean dual core - that's what the "Duo" means in "Core 2 Duo".
  • dkxox
    dkxox about 13 years
    @Jeff - BTW - probably worth mentioning the economics aspect of binning. When testing your chips and downrating some of them, you probably don't end up downrating enough of them to satisfy the cheap low-end processor market, and of course you don't want a glut in the high-end market driving the price down.
  • Chris Marisic
    Chris Marisic about 13 years
    Very interesting information
  • Jeff Atwood
    Jeff Atwood about 13 years
    @steve this has nothing at all to do with pentium 4, see the above link.
  • dkxox
    dkxox about 13 years
    @Jeff - The link is about how the quad core QX6700 Core 2 Extreme is made by gluing two genuine dual cores (each a Core 2 Duo) together. Each Core 2 Duo piece, as with Core 2 Duo in general, is genuine multi-core. To be honest, I didn't read the link earlier, so my bad - but with enough effort, I can still find a way to blame you and claim victory...
  • underscore_d
    underscore_d over 8 years
    This reads like a lot of speculation to me.
  • underscore_d
    underscore_d over 8 years
    plus 1, although the aforementioned disabling can result from either deliberate choice or just quality constraints (binning). it doesn't really matter as long as they get enough produced for each target market segment.
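
Regarding the dynamic frequency scaling mentioned in the comments, here is a minimal read-only sketch of inspecting it through Linux's standard cpufreq sysfs interface (Linux-specific; the nodes may be absent on systems without cpufreq support):

```python
# Read-only peek at dynamic frequency scaling via the standard Linux
# cpufreq sysfs interface for CPU 0.
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")

for name in ("scaling_governor", "scaling_min_freq",
             "scaling_cur_freq", "scaling_max_freq"):
    node = cpufreq / name
    if node.exists():
        print(f"{name}: {node.read_text().strip()}")  # frequencies in kHz
```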