Several t2.micro instances better than a single t2.small or t2.medium?


Solution 1

Your analysis seems correct.

While the processor type isn't clearly documented, I typically see my t2.micro instances equipped with one Intel Xeon E5-2670 v2 (Ivy Bridge) core, and my t2.medium instances have two of them.

The micro and small should indeed have the same burst performance for as long as they have a reasonable number of CPU credits remaining. I say "a reasonable number" because the performance is documented to degrade gracefully over a 15 minute window, rather than dropping off sharply like the t1.micro does.

Everything about the three classes multiplies by two as you step up (except the vCPU count, which is the same for the micro and small): baseline performance, credits earned per hour, and credit cap. Arguably, the medium is very closely equivalent to two smalls when it comes to short-term burst performance (with its two cores), but then again, that's also exactly the capability you have with two micros, as you point out. If memory is not a concern and traffic is appropriately bursty, your analysis is sensible.
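To make that doubling concrete, here is a minimal back-of-the-envelope sketch in Python. The baseline, earn rate, and cap figures are the t2 credit parameters as I recall them from the docs, so treat them as assumptions rather than authoritative numbers; the one fixed definition is that a CPU credit is one vCPU at 100% for one minute, so a fully pegged vCPU burns 60 credits per hour.

```python
# Minimal sketch, assuming the t2 credit figures below (from memory of the
# AWS docs; verify against the current documentation before relying on them).
# Definition: 1 CPU credit = 1 vCPU at 100% utilization for 1 minute.
T2 = {
    # name:      (vCPUs, baseline % of a core, credits earned/hour, credit cap)
    "t2.micro":  (1, 10, 6, 144),
    "t2.small":  (1, 20, 12, 288),
    "t2.medium": (2, 40, 24, 576),
}

for name, (vcpus, baseline, earn, cap) in T2.items():
    burn = vcpus * 60                      # credits/hour with every vCPU pegged at 100%
    hours_of_burst = cap / (burn - earn)   # starting from a full credit balance
    print(f"{name}: baseline {baseline}%, earns {earn}/h, cap {cap}, "
          f"~{hours_of_burst:.1f} h of all-core burst from a full balance")
```

Under those assumed numbers the small and medium give the same burst endurance; what changes as you step up is how many cores you can peg at once, which is exactly why two micros look attractive here.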

While the t1 class was almost completely unsuited to a production environment, the same thing is not true of the t2 class. They are worlds apart.

If your code is tight and efficient with memory, and your workload is appropriate for the CPU credit model, then I concur with your analysis about the excellent value a t2.micro represents.

Of course, that's a huge "if." However, I have systems in my networks that fit this model perfectly -- their memory is allocated almost entirely at startup and their load is relatively light but significantly variable over the course of a day. As long as you don't approach exhaustion of your credit balances, there's nothing I see wrong with this approach.
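On the "don't approach exhaustion" point: the practical safeguard is to watch the CPUCreditBalance CloudWatch metric and react before it reaches zero. Here is a rough sketch using boto3; the region, instance ID, and alert threshold are placeholders you would pick for your own fleet.

```python
import datetime

import boto3  # assumes AWS credentials/region are configured for this account

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def latest_credit_balance(instance_id):
    """Return the most recent CPUCreditBalance datapoint for a t2 instance."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - datetime.timedelta(minutes=15),
        EndTime=now,
        Period=300,                 # credit metrics are reported every 5 minutes
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return points[-1]["Average"] if points else None

# Hypothetical usage: alert (or scale out) well before the balance hits zero.
balance = latest_credit_balance("i-0123456789abcdef0")   # placeholder instance ID
if balance is not None and balance < 20:                 # example threshold
    print(f"CPU credit balance is getting low: {balance:.0f}")
```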

Solution 2

There are a lot of moving targets here. What are your instances doing? You said the traffic varies over the day but is not spiky. If you want to closely follow the load with a small number of t2.micro instances, you won't be able to rely much on bursting, because each newly scaled-up instance starts with a low CPU credit balance. If most of your instances run only while they are under load, they will never collect CPU credits. You also lose time and money on each startup and on the started-but-unused portion of each billed hour, so scaling up and down too frequently isn't the most cost-efficient approach. Last but not least, the operating system and other software have a more or less fixed overhead; running that overhead twice instead of once takes resources away from your application on instances that only gain net credits while running below their baseline utilization (around 10-20% of load for these sizes).
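To illustrate the point about scaled-out instances never collecting credits, here is a rough simulation. The numbers (30 initial launch credits, 6 credits earned per hour, 60 credits burned per pegged-vCPU hour) are my understanding of the t2.micro at the time and should be treated as assumptions:

```python
# Rough simulation of a t2.micro launched only when load is already present.
INITIAL_CREDITS = 30.0        # assumed launch credits
EARN_PER_HOUR = 6.0           # credits accrued per hour, regardless of load
CREDITS_PER_CPU_HOUR = 60.0   # 1 credit = 1 vCPU-minute at 100%

def simulate(hours, utilization):
    """Print the hour-by-hour credit balance at a fixed CPU utilization."""
    balance = INITIAL_CREDITS
    for hour in range(1, hours + 1):
        balance = max(0.0, balance + EARN_PER_HOUR - utilization * CREDITS_PER_CPU_HOUR)
        print(f"hour {hour}: {balance:5.1f} credits")
        if balance == 0.0:
            print("  -> out of credits: throttled down to the baseline")
            break

simulate(hours=8, utilization=0.5)   # launched straight into 50% CPU load
```

At a steady 50% load the balance is gone within a couple of hours, which is why an instance that only exists while it is busy never builds up the buffer that makes bursting useful.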

If you want extreme cost efficiency, use spot instances.




Updated on September 14, 2022

Comments

  • Ariel Flesler over 1 year

    I read EC2's docs: instance types, pricing, FAQ, burstable performance, and also this about CPU credits. I even asked AWS support and the answer wasn't clear.

    The thing is, according to the docs (although they're not too clear) and AWS support, all 3 instance types have the same performance while bursting: 100% usage of a certain type of CPU core.

    So this is my thought process, assuming the t2.micro's RAM is enough and that the software can scale horizontally. Having 2 t2.micro has the same cost as 1 t2.small; assuming the load is distributed equally between them (probably via an AWS load balancer), they will use the same amount of total CPU and consume the same amount of CPU credits. If they were to fall back to baseline performance, it would also be the same.

    BUT, while they are bursting, 2 t2.micro can achieve 2x the performance of a t2.small (again, for the same cost). The same concept applies to the t2.medium. Also, using smaller instances allows for tighter auto (or manual) scaling, which allows one to save money.

    So my question is: given that RAM and horizontal scaling are not a problem, why would one use anything other than a t2.micro?

    EDIT: After some replies, here are a few notes about them:

    • I asked AWS support and supposedly each vCPU of the t2.medium can achieve 50% of "the full core". This means the same thing I said applies to the t2.medium (if what they said is correct).
    • T2.micro instances CAN be used in production. Depending on the technology and implementation, a single instance can handle over 400 RPS. I do, and so does this guy.
    • They do require a closer look to make sure credits don't go low, but I don't accept that as a reason not to use them.
  • Ariel Flesler almost 9 years
    The traffic is not spiky but varies during the day. Thank you for commenting, but I'm not looking for advice on whether to rely on T2 or not; I evaluated the options some time ago. Note that 2 instances (for the same load) will run at half the CPU utilization of a single one, so it's not that they sustain for half as long: it's the same amount of time, because each instance has less work to do.
  • Ariel Flesler almost 9 years
    Hi, thank you for your comment. The scaling was really a side note. Suppose I leave them running all day. The only advantage of bigger machines is less duplicated OS overhead, but really, that's very low in comparison.
  • Adam Ocsvari almost 9 years
    Generally you shouldn't use a T-type instance for production. If you don't mind having some downtime (which you may have with a t instance if you are under load and run out of credits), then you can also use a fleet of spot instances. But then you can get an m3.small for a good price. (And have an auto scaling group ready with t instances in case the bids get too high.)
  • Ariel Flesler almost 9 years
    Thank you; no offense to the other comments, but this is the first answer that focuses on the question instead of something else. I edited additional content into the question based on this reply.
  • Ariel Flesler almost 9 years
    Note that I added some more info to the question, related to T2s in production. There are ways to monitor credits and act before they run out. The point here is the extreme cost efficiency of the T2 instances, which makes all this a viable discussion.
  • BobMcGee about 8 years
    @AdamOcsvari I agree with your suggestion of using a t2 pool + spot instances to very cheaply meet spikes/spot price hikes + base load. I disagree about t2s not being production-capable, though. They're perfectly adequate for many cases where you just need a server (or HA pair) continuously up. The 2 hours of full burst that a t2.micro can sustain (or ~5 for small/medium) is plenty of time to react automatically to sustained load. They are also perfect where workloads are moderate and disk or network I/O bound. t2.micro/medium hosts especially make the best Jenkins servers.
  • Adamz over 6 years
    According to the benchmarks here (vpsbenchmarks.com/compare/ec2_vs_gce), the average response time of the t2.small is 45.8 ms and of the t2.micro 161.0 ms. Where is the performance hit coming from?