Intel Admits Its Chips are Over-Priced

“In terms of the question, ‘Will Intel consider developing on the ARM platform?’ the answer is no. For Intel, we need to develop the best microprocessor we can and have a business model to support it, so that we can get paid. We think that with Intel architecture we have a fundamental advantage in performance and over time we will be very competitive on power, especially as we move to new transitions. So if we are at the same or better power and at better performance, with best-in-class chips, then there is no advantage of going to ARM. We would simply be beholden to them. We would have to pay them royalties and we would have lower profits. Why would we do that?

If you look at the market in a business way, the number one silicon vendor today in terms of profits based on tablets and smartphones is Intel. And by that I mean if you look at the mobile market, for every 600 smartphones that are sold and every 122 tablets that are sold, this creates the sale of a server and you could imagine that Intel margins and profits for our servers are quite healthy.”

see Digitimes – Intel steps up its pace: Interview with Navin Shenoy, general manager, Asia-Pacific region for Intel
Translation: There is no advantage for Intel to migrate to ARMed production. They make too much money per chip with x86. There are obvious cost advantages for makers and consumers of IT to go to ARM, however. Lower profits for Intel = lower costs for everyone else. Monopolists… They think the world owes them a living.

Intel is right that they can produce more powerful chips with higher power consumption. They will, in a few years, catch ARM in power consumption; not by actually achieving lower power consumption than ARM, but because when both technologies are sufficiently advanced, the power consumption of either will be acceptable. It all comes down to price. If Intel publicly states that 600 new smartphones drive the sale of one Intel server, and that this gives Intel more profit than the chip-makers of the smartphones earn, then Intel is seriously over-priced. Don’t expect that barrier to be overcome any time soon. Intel’s x86 instruction set, the transistors required per CPU, and the actual mass of silicon and size of chip all lead to higher costs, and Intel will forgo profit only as a last resort.

ARM has a bright future and there are a lot of excellent reasons to move as much computing as possible to ARM sooner or later, if only for the price of IT. My next client will be ARMed. I have a server running well with AMD64. When it is replaced in a few years, I will certainly consider ARM. By then ARM will have chips designed for servers:
“As for server-related products, Brown pointed out that ARM already established an R&D team in 2008 for conducting all the pre-work and its Cortex-A15 for high-end applications will be provided to related vendors for development and testing at the end of 2011 with mass shipments to start after 2012.”

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

44 Responses to Intel Admits Its Chips are Over-Priced

  1. oiaohm says:

    On such server setups, little one, you never, ever combine services of different domains on one server, because this doesn’t scale.
    Deaf Spy, in this case you are a clueless wonder. Google staff covered exactly what I described at an LCA conference. So it does scale.

    Mixing domains is also part of running Docker systems. Keep on attempting to push out-of-date logic. The first videos of mixing loads for higher utilization, for more bang per buck on the hardware investment, appeared around 2005.

    Deaf Spy, keep on pushing and I will just show you are completely out of date on methods, by many years. So it is about time you got off your high horse and admitted you have overplayed your hand this time.

  2. kurkosdr quoth, “The Mali hardware is a nuisance but the software supplied has the kluges built in” and responded, “Whether those kludges actually work as they should and whether they will keep working in the next version of the OS is a classic case of caveat emptor.”

    For that to be a problem you either have to believe the supplier can take away the binary code on my machine or that the Linux ABI will change enough to prevent it working. The former won’t happen. The latter could, but it’s unlikely. Folks are still using Linux 3.x kernels in the world and I’m using 4.4.10. Linus hates to break “user-space”, or even kernel-space, on such widely used hardware. Nothing prevents me from running an old kernel. I can handle not running revolutionary new hardware on Beast… Beast has never seen such a revolution so far.

  3. Deaf Spy says:

    Fifi, your latest post reveals how utterly clueless you are. As usual.

    On such server setups, little one, you never, ever combine services of different domains on one server, because this doesn’t scale.

    You are an idiot, Fifi. A clueless, fraudulent, lying idiot with zero knowledge and real-life experience in anything. Perhaps the only area of expertise in which you might have rudimentary skills is standing under a lamp-post.

  4. oiaohm says:

    DrLoser, something to be aware of here: thanks to cgroups (control groups), a Linux system can be pushed way harder.

    So your server has a normal load of 85 percent. Is this a worry? Maybe not. How much of that load is configured by cgroups to give way? You might have a web service currently using 5 percent while the other 80 percent is tasks like data mining or auditing that can be stopped at a moment’s notice and given a very low cgroup priority. Then a 16x spike in load on the web service, if it is set as dominant by cgroups, is not going to be a problem. The 5 percent of headroom between 85 and 90 gives enough processing space, because the 80 percent of background load can be pushed aside.

    Google server load is higher than most would think, because when there are no web requests to service there are other things to do, like making sure data is replicated to prevent data loss. There are a lot of Google server tasks that are not time-critical.

    Robert could, if he wanted, run background Bitcoin mining or the like to push load averages up without noticing any operational performance effect. The reality is that there is no single “normal load” when you talk about a Linux server. There are time-critical and non-time-critical loads, and some of the time-critical loads have spike-multiplier requirements; guessing 3x as the spike multiplier is really bad if it turns out to be 10x or worse. Even a web server on Linux with request caching has non-time-critical loads, like clearing out-of-date cache items. So your normal load on Linux is normal load for whatever the configuration is. Your SLA-defined limits don’t link 100 percent to normal load, because cgroups can tag a service to give up all disc I/O, network I/O and CPU time when other services need it. So seeing a Linux server with a normal load of 85% can be perfectly fine and acceptable; it is all about how that load breaks down into time-critical and non-time-critical loads, and whether they are all correctly allocated.
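    The give-way scheme described above boils down to proportional CPU sharing. A rough numeric sketch, assuming cgroup v2’s cpu.weight semantics (the group names and weights are invented for illustration):

```python
# Sketch of cgroup v2 proportional CPU sharing via cpu.weight:
# when all groups demand CPU, each gets weight_i / sum(weights);
# idle groups' shares flow to the busy ones.

def cpu_shares(weights):
    """Return each cgroup's CPU fraction when all are runnable."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical split: the latency-critical web service dominates;
# batch jobs (data mining, auditing) get token weights and give way.
groups = {"web": 10000, "mining": 100, "audit": 100}

for name, frac in cpu_shares(groups).items():
    print(f"{name}: {frac:.1%}")
# During a spike the web service can claim ~98% of the CPU even
# though the batch jobs were using most of the machine beforehand.
```

    The point is that the 85% “normal load” number says nothing by itself; what matters is how much of it sits in low-weight groups that the kernel will preempt the moment a high-weight service wants the CPU.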

  5. oiaohm says:

    DrLoser
    2. It doesn’t count if you quote a cite after I have made a comment, now, does it?
    In fact I quoted the cite both before and after you made your comment. So you are the one being the idiot.

    As a rule of the thumb, under normal load, a server must not exceed 30% load.
    That is what Deaf Spy said. Normal load does not have a must-not-exceed value, because each load has a different level of variation. So a Bing workload or a Google workload might set the limit at 40 percent when not loaded, but a different type of server can sit way higher.

    The reason to limit “normal service SLAs” to this low level is because there will inevitably be peaks. For some reason the peaks for a search engine occur shortly before people leave for work — lasting until shortly after they get there — and during the early part of the evening. These peaks regularly stretch to 120% usage, or about 3X “normal load.”

    Note what you said here: there will be peaks. The size of the peak you calculate from history, so the peak might be 10% or it might be 2000%. 3X normal load is an extremely bad guess. The site I quoted described the limits a solution had to be built around in order to meet its SLA.

    So, DrLoser, you are an idiot here as well. The SLAs of hosting providers state a spike limit. If your solution is going to go above the spike limit, bad things are going to happen. Guessing 3x results in either not allocating enough or allocating too much.
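    The point that the spike multiplier should come from history rather than a guessed 3x can be sketched like this (the load samples are invented; a real estimate would use months of monitoring data):

```python
# Estimate a spike multiplier from historical load samples instead
# of guessing "3x": compare a high percentile of observed load with
# the typical (median) load. The sample data here is invented.

def spike_multiplier(samples, pct=0.99):
    """Ratio of the pct-quantile load to the median load."""
    s = sorted(samples)
    median = s[len(s) // 2]
    peak = s[min(int(pct * len(s)), len(s) - 1)]
    return peak / median

# 100 samples: mostly quiet, a few big spikes.
history = [0.30] * 90 + [0.60] * 8 + [1.50, 2.40]
print(round(spike_multiplier(history), 1))  # -> 8.0, not 3.0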

  6. kurkosdr says:

    caveat emptor (autocorrect)

  7. kurkosdr says:

    At most they need to add something special for the video card.

    And at most you and me need to add something special to ourselves to look like Brad Pitt.

    The Mali hardware is a nuisance but the software supplied has the kluges built in

    Whether those kludge actually work as they should and whether they will keep working in the next version of the OS is a classic case of caveat empty.

  8. Dr Loser says:

    Look again: soasta is a server-farm group. Please also stop being a guessing idiot, Dr Loser. The answer for a Linux system is that 65% or higher is your target, with must-not-exceed values of 75% on hypervisors and 90% on bare metal. Basically even your 40 percent is pure crap. I gave a solid cite and you still call it crap. So you are an idiot, Dr Loser.

    1. You are an idiot, Fifi.
    2. It doesn’t count if you quote a cite after I have made a comment, now, does it?
    3. You are an idiot, Fifi.
    4. Your cite is, as usual, picked at random and has no generic weight.
    5. You are an idiot, Fifi.
    6. I was responding to Robert. In that context, I was using information gleaned whilst working with large server farms that do in-house stuff. Your cite, largely irrelevant in any case, specifically deals with cloud service providers (specialized, as far as I can see, to Web servers). I was not referring to cloud service providers.
    7. You are an idiot, Fifi.
    8. I didn’t get around to defining “normal service levels,” but what I (and, I think, Deaf Spy) mean by that is “non-peak demand service levels.” In the case of Bing or Google this is, indeed, around the 40% level.
    9. You are an idiot, Fifi.
    10. The reason to limit “normal service SLAs” to this low level is because there will inevitably be peaks. For some reason the peaks for a search engine occur shortly before people leave for work — lasting until shortly after they get there — and during the early part of the evening. These peaks regularly stretch to 120% usage, or about 3X “normal load.”
    11. You are an idiot, Fifi.
    12. “But how can that be?” you ask. “120% is definitionally not achievable.” A good question. The answer is that Bing, Google, and everybody else who does this sort of thing spreads their ginormous data centers geographically across time-zones. As one DC ramps down from a peak, the next one fires up and takes the slack. Which means that, yes, the overall peak across the server farm estate is roughly 85% to 90%.*
    13. You will by now have formed the nascent opinion that I consider you to be an idiot, Fifi. If so, well done! It’s about the only intelligent conclusion you have reached on this thread.

    * This is not strictly true, either, because any massive server-farm based operation worth its salt has to build in a significant buffer for a) future consumer growth and b) future feature growth. That 85% to 90% is, realistically, going to be closer to 75% – 80%.
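    The peak arithmetic in points 10 and 12 is easy to check with a back-of-the-envelope calculation; the 40% normal load, 3x spike and 90% ceiling are the figures from the comment, and the function itself is only illustrative:

```python
import math

# Back-of-envelope check of the peak arithmetic above: a 3x spike
# on a 40% "normal load" would demand 120% of one data center, so
# the excess must be absorbed elsewhere in the estate.

def datacenters_needed(normal_load, spike_multiplier, max_util=0.9):
    """Minimum DCs so a localized spike stays under the utilization
    ceiling, assuming load can be spread evenly across the estate."""
    peak = normal_load * spike_multiplier  # e.g. 0.40 * 3 = 1.2
    return math.ceil(peak / max_util)

print(datacenters_needed(0.40, 3))                 # -> 2
print(datacenters_needed(0.40, 3, max_util=0.75))  # -> 2, tighter ceiling
```

    The even-spreading assumption is the generous case; with time-zone-staggered peaks the estate-wide number works out as described above.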

    I don’t know what Google or Facebook does every time they are faced with a need to fulfil “growth requirements,” but what Bing does is to open up a whole new data center at a time. This takes about 2 months, minimum, to provision, and is a feat of considerable engineering ability.

    Usually what they do is to build a “replacement” for their oldest and weediest DC. So, if that DC has (say) 5,000 servers running quad core with 64GB from three years ago, they build the new one with 5,000 servers running eight core with 128GB. (Details obviously approximated.) This means that they can essentially “retire” the old DC, strip it down, and wait for the next growth spurt.

    I, Fifi, have seen this done and been a part of the Dev-Ops team whilst it is being done.
    Robert, it hardly bears repeating, has seen no such thing.
    And, did I mention? You are an idiot, Fifi.

  9. kurkosdr wrote, “all those ARM boards are DIY junk which may not work”.

    Nonsense. The developers making and using these boards are running stock GNU/Linux kernels. At most they need to add something special for the video card. The Odroid-C2 works fine with Debian GNU/Linux and I can customize a kernel if I need that. The Mali hardware is a nuisance but the software supplied has the kluges built in and I can update the Debian packages as needed. I don’t need the latest and greatest performance. That Mali-450 blows away my old radeon whatnot on Beast (Radeon 2100).

  10. kurkosdr says:

    So, what am I missing?

    -Steam (has TLW even tried games beyond the usual FOSS junk like OpenArena and SpeedDreams?)
    -That Pascal Compiler
    -Wine (aka Photoshop and MS Office)
    -Flash (if you still need it, otherwise ignore this)
    – GPU drivers worth using
    – Intel putting their name on the Ubuntu flavor of ComputeStick, which means it will mostly work, while all those ARM boards are DIY junk which may not work.

    My point is that DIY ARM boards are not meant for your “production” system.

    Now it’s time for me to ask: What do you gain from an ARM board that a NUC or ComputeStick doesn’t give you?

    Isn’t the proprietary Mali driver against your FOSS ideals, while the Intel GPU driver is open source?

  11. oiaohm says:

    Deaf Spy you have terms wrong for what google does.

    https://support.google.com/adwords/answer/2472735?hl=en-AU&ref_topic=3119128
    Google uses CPC, CPV and CPM. CPM is CPI in units of ~1000. When I say ~1000, I mean 1000 plus a bit, to make sure 1000 actually get displayed, mostly because units of 1 are not truly dependable. So losing a small percentage under the CPM model is expected.

    Lots of sites block content until advertisements are displayed, since they get paid for the advertisements, so there is some flex on CPM/CPI. With CPV the user has to choose to start playing the video or the like; it does not have to be a click-through to another site.

    CPV and CPC being too slow, where each one counts, is a bigger problem than missing a few CPM displays, where losses are expected.

    Also notice that the Google advertising system is based on bids, so Google can control its losses by displaying the highest bids first. Deaf Spy, get it: at Google’s level the risk from being slow is limited to the lower-paying clientele. Google has designed quite a smart system to minimize its losses and maximize its profits even on a badly performing day. So Google doesn’t lose a huge profit share when it is slow. Of course being slow does hurt Google a little, but the bidding system really does limit the damage.

  12. Deaf Spy says:

    No, they don’t. Google is paid by the ad/click, not by the speed of delivery.

    Yes, they do. Google serves both CPI and CPC ads. For CPC to work, you must have first served an ad. In this business, slow delivery means no ad.

  13. Deaf Spy wrote, “shall you use a x86 server, or an ARM one?”

    I like the Odroid-C2 clients available today. I sorta like the Lemaker Cello, but I’m not happy with AMD releasing yesterday’s tech. It’s built-in obsolescence. I could wait until 2017 for their K12 design, which should be cool, literally. Still, the Lemaker Cello is OK and perhaps the ageing Beast won’t wait. He’s eight years old. His hard drives are the chief concern. Both should be upgraded this year. I could do drives this year and wait for 14nm to arrive in the ARMed-server world.

  14. oiaohm wrote, “Yes, desktop Nvidia cards with open-source drivers also work on ARM64 hardware with full-size PCIe ports, and they are just as buggy as on x86 using open-source drivers. It turns out open-source video-card drivers have basically nothing that is CPU-dependent. So it is just a matter of getting ARM64 motherboards that are more like the ATX ones we know and love.”

    There’s no particular love of any video cards here. I’ve never paid extra to get this or that particular card, for example. The Lemaker Cello doesn’t even have a video card, yet it will work for me. The other client boards will certainly play 1080p video just fine. What more could I want? ARM is ready for me. The board-makers are just a bit behind the curve, mostly because they are small outfits with marginal capital to build factories, so they outsource the manufacture of the boards. That takes extra time to market. AMD do have a proper server chip (K12) in the pipe. I don’t think I want to wait for it, but I could. Beast can serve ARMed clients today. He does it every time there’s a party here. No one has ever claimed Beast is slow to load pages on a smartphone. The Odroid-C2 is faster than any smartphone in networking. That’s currently my choice for clients.

  15. Deaf Spy says:

    Btw, Robert, what is your plan: shall you use a x86 server, or an ARM one? Since you will use thin clients, workstations are irrelevant.

  16. DeafSpy wrote, ” If they are too slow to deliver an ad on a publisher’s site or within an app, they lose money.”

    No, they don’t. Google is paid by the ad/click, not by the speed of delivery. All that matters is throughput, not responsiveness, as long as the transaction doesn’t delay the process too much. Google is slow from time to time and they still keep earning more money.

  17. Deaf Spy says:

    Yes, and Google loses huge share each time their responsiveness slows a bit… [SARCASM]

    Yes, they do. If they are too slow to deliver an ad on a publisher’s site or within an app, they lose money. If they are too slow to register an ad click, they lose money. If they are too slow to track the tracking pixel and investigate for conversion, they lose money.

    Perhaps you should keep your sarcasm for areas in which you have at least some tiny expertise.

  18. kurkosdr wrote, “Whatever little good Desktop Linux software exists, it’s on x86.”

    Hmmm… Debian’s ARM64 stack is 2/3 the size of its x86 stack. So, most packages exist on ARM64. You can run GNOME on it after all, so it must have all the usual applications.
    Libreoffice, check
    xfce4, check
    gnumeric, check
    autokey, check
    firefox, check
    php7, check
    apache2, check
    mariadb-server, check
    vlc, check
    gwakeonlan, check (not sure if arm64 can wake up, but this is in the repo)
    openssh-server, check
    youtube-dl, check
    build-essential, check

    So, what am I missing? When I actually diff the package lists I find things like abiword missing. Shock. What will I do without it? Chrome browser could be an issue with flash sometimes, but I can live without it. Then there’s FreePascal compiler. It’s not on arm64. People want it so it will happen eventually. It’s already working on iOS.
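    Diffing the package lists, as described above, amounts to a set difference. A minimal sketch, with abbreviated stand-in lists rather than real apt output:

```python
# "Diff the package lists": which packages exist for amd64 but not
# arm64? Real lists would come from e.g. `apt list` per architecture;
# these abbreviated sets are stand-ins for illustration.

amd64_pkgs = {"libreoffice", "xfce4", "gnumeric", "firefox", "apache2",
              "mariadb-server", "vlc", "abiword", "fp-compiler"}
arm64_pkgs = {"libreoffice", "xfce4", "gnumeric", "firefox", "apache2",
              "mariadb-server", "vlc"}

missing_on_arm64 = sorted(amd64_pkgs - arm64_pkgs)
print(missing_on_arm64)  # -> ['abiword', 'fp-compiler']
```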

  19. oiaohm says:

    You can do everything you want with the “uncore” but the core is not changeable. Haven’t seen anyone else make any “share back” improvement when it comes to ARM cores. This is probably why you have no link to prove it.
    The blog I pulled that reference from covers the share-back all the time. Also, if you read the ARM programming manuals you will see that particular instruction ideas and testing come from particular SoC developers.

    https://www.qualcomm.com/documents/enabling-next-mobile-computing-revolution-highly-integrated-armv8-based-socs

    Qualcomm’s Kryo cores are partly funded by ARM itself. This is an R&D competition: whoever designs the best core gets a slice of the total market of ARM chips.

    https://en.wikipedia.org/wiki/Comparison_of_ARMv8-A_cores roughly lists the competing designs.
    You have the Kryo from Qualcomm, the Mongoose from Samsung, the Vulcan from Broadcom and the Helix from AppliedMicro, all competing to be the next ARM reference chip.

    Please note Qualcomm won the last round, so Qualcomm gets a cut of every chip MediaTek makes, the same as with everyone else using the ARM stock design.

    Qualcomm’s only hope is basically their excellent driver support
    When you win a round you have had longer to develop your drivers than anyone else.
    Nvidia
    Let’s look at the Nvidia ARM64-compatible chip: at its core it is a Transmeta-style VLIW, so at heart it is not an ARM chip. Instead it is a chip that can emulate an ARM64 chip. This introduces some serious performance and hardware-handling differences compared with every other ARM64 chip in existence, so it needs unique hardware drivers.

    MediaTek’s issue, like that of a lot of other ARM vendors using the stock chip, is rapid development without spending enough time developing drivers. Yes, being a sheep puts you in a bad position on time to market, because those competing for the next design get access to instruction-set and design changes before groups like MediaTek do, and so have more time to work on driver support. So MediaTek is finding it has to enter the game as well or always be at a disadvantage.

    MATLAB has an ARM64 version, so that is not an example of an application that ARM platforms lack. Lightworks fails on most ARM boards because the GPU sucks, and it will not work without a decent GPU. MS Office in Wine on qemu performs OK; it is not that GPU- or CPU-heavy, since most of the time MS Office is idling under 1 percent of CPU on x86, so taking 10 percent on ARM64 hardware is no trouble. Photoshop loves CPU and GPU, so it has the same issue as Lightworks, plus the lack of CPU once you add on emulation.

    VMware is just them choosing not to support ARM; there are other options.

    Steam is not so black and white. https://github.com/ValveSoftware/steamlink-sdk shows that remoting Steam applications to ARM and ARM64 is directly supported by Valve. For building the Steam runtime on ARM64, Valve accepts patches for any error found. The Steam client itself is x86-only. So the mix of Valve’s positions on ARM64 kinda suggests that if enough of a market appears they will change.

    http://www.phoronix.com/scan.php?page=news_item&px=AMD-LeMaker-Cello
    On the question of an ARM board that takes a desktop GPU: the reference Opteron A1100 motherboard from AMD takes AMD video cards using open-source drivers. Yes, it performs just as badly as AMD video cards on x86 using open-source drivers.

    kurkosdr, something you have missed: more and more server loads use the GPU to accelerate them, so no GPU support equals poor server performance under particular conditions as well. ARM64 video-card support is improving, but I am not expecting Nvidia or AMD desktop cards with closed-source drivers to work any time soon. An AMD ARM64 CPU with AMD video cards and open-source drivers is about your best bet on ARM64 hardware at this stage. Yes, Lightworks will fire up on that combination. Yes, desktop Nvidia cards with open-source drivers also work on ARM64 hardware with full-size PCIe ports, and they are just as buggy as on x86 using open-source drivers. It turns out open-source video-card drivers have basically nothing that is CPU-dependent. So it is just a matter of getting ARM64 motherboards that are more like the ATX ones we know and love.

  20. kurkosdr says:

    BTW people who plan to run Desktop Linux on ARM boards have to be the biggest software plebs in existence. Whatever little good Desktop Linux software exists, it’s on x86. Steam, MATLAB, VMware, Lightworks and the versions of Photoshop and Office that run on Wine.

    But even if you are willing to restrict yourself to only software that comes with a source (which doesn’t have x86 assembly bits), let me ask you the following:

    Have you SEEN the Desktop Linux GPU drivers of the average ARM board? Hint: Almost nobody buys them to run Desktop Linux, certainly not with a GUI, so please let your imagination run wild on how much the vendors care about fixing bugs.

    Did I say plebs? I did. They could buy a NUC or ComputeStick (which can be ordered with Desktop Linux btw) and be done with it, and have an ironed out experience (mostly) because someone actually was paid to check the integration status and fix any bugs, but nope, must have principle. Transistors with the taint of the evil Chipzilla empire must be eliminated (*loads the broken proprietary Mali driver to make hardware acceleration half-work in an ARM board*)

  21. kurkosdr says:

    I doubt even Qualcomm = I think even Qualcomm

  22. kurkosdr says:

    make any “share back” improvement = make any “share back” claim

  23. kurkosdr says:

    Think again. Kurkosdr. Arm licensing operates like the Linux kernel. Yes the license is cheap but any improvement you do you have to share back with Arm.

    Bollocks. You can do everything you want with the “uncore” but the core is not changeable. Haven’t seen anyone else make any “share back” improvement when it comes to ARM cores. This is probably why you have no link to prove it.

    Anyway, the gist of the story is that Intel could pull a Snapdragon and come out with their own core, but they don’t want to do it. Current ARM “commodity cores” are very good for any purpose except benchmark bragging and e-peen enlargement, so it’s a market accelerating fast to commoditization. In fact, I doubt even Qualcomm will have a hard time competing with their Kryo cores, with ARM cores selling fast thanks to the likes of MediaTek and hence ARM rolling in dollars (more accurately, pounds) that allows them to do some really serious R&D.

    Qualcomm’s only hope is basically their excellent driver support, much better than Nvidia’s or Mediatek’s. Intel drivers are not as good though, so they can’t be like Qualcomm.

    Intel knows the mobile-phone and ARM-tablet markets are accelerating towards razor-thin margins, which is why they pulled out and canned Atom. There is no reason to try to claw at razor-thin margins when they can sell chips inside Surface Pros and Dell Venues and make fat margins.

    And btw Qualcomm isn’t interested in the desktop either. Sorry, Pog. Even if they match a Core i3 in benchmark power, nobody is going to abandon their existing x86 software for tiny savings (Qualcomm will have to make up for the R&D, so early products will be expensive). The Pogsons of this world proclaim they will buy ARM boxes; maybe they will buy one, but no one else will.

    So, expect prices for ARM SoCs to go down, while prices for Intel CPUs and SoCs to go up.

    If Google decides to bring Android to the desktop, maybe someday the desktop PC will become more and more like the SGI workstations and the Sun Ultras of old, but Intel isn’t losing sleep over this. It’s too much of a maybe and too far away. Better to focus on the margins of the now.

  24. oiaohm says:

    Dr Loser
    Deaf Spy is more or less correct on large server farms. I think 30% is a bit low “under normal load.” I’d place it more at 40%.
    https://www.soasta.com/blog/load-performance-testing-best-practices

    Look again: soasta is a server-farm group. Please also stop being a guessing idiot, Dr Loser. The answer for a Linux system is that 65% or higher is your target, with must-not-exceed values of 75% on hypervisors and 90% on bare metal. Basically even your 40 percent is pure crap. I gave a solid cite and you still call it crap. So you are an idiot, Dr Loser.

    Please note, Dr Loser, that Deaf Spy did not say 30 percent was the normal load but an upper limit that load must not exceed, and that is totally wrong. As you said, even across a whole server farm with N+1 architecture, 30% is way too low. Even in a server farm not fully sold out to customers you would expect usage to be under 50% at worst. No matter how you look at the 30% number, there is no way it is correct under current-day conditions; you would have to go back at least 20 years, before so much work was put into avoiding power waste, to find a case where it was.

    http://www.pcworld.com/article/2891257/in-marriage-of-convenience-intel-taps-arm-graphics-in-new-x86-smartphone-chip.html
    kurkosdr, this was when the death bell started ringing for the Atom chip.

    MediaTek that don’t even design their own cores but share the commoditized same core with others and hence operate on tiny margins.
    Think again, kurkosdr. ARM licensing operates like the Linux kernel: yes, the license is cheap, but any improvement you make you have to share back with ARM. Since each ARM vendor wants to outperform the others, they take a commoditized core design but do minor redesigns to get more performance or add extra features.

    https://community.arm.com/groups/processors/blog/
    MediaTek’s recent addition to ARM design is the Tri-Cluster CPU, and MediaTek is not the only ARM producer like this. So saying MediaTek does not design its own cores is wrong. That none of MediaTek’s new core design is 100 percent theirs is another matter. Under ARM licensing, the maker of any new feature gets it in production for 12 months without competitors being able to use it. So having unique features to sell is the incentive for each vendor to keep extending and improving the ARM design, and MediaTek’s ARM cores are not always 100 percent copies of everyone else’s. This is true for all major ARM producers. The problem for Intel with working on ARM is that if you alter it you cannot keep what you have done secret. Intel has produced ARM chips at different times and does lease out production capacity to other ARM vendors.

    Please note Intel leases out its fabs to other ARM vendors to produce their chips, so Intel producing ARM chips would mean competing against some of its currently loyal customers. The reason Intel has to hold an ARM license is that a fab legally cannot produce ARM chips without one. Most people never go and read ARM’s list of licensed producers and see Intel listed. AMD could go ARM because they don’t have a long list of ARM SoC vendors already using them; Intel has a long list of ARM SoC vendors using them.

    Wizard Emeritus, look at AMD’s A1100 line; there are new ARM reference boards for servers right now. Where ARM64 is going in servers is still up in the air, but the boards are getting closer to normal x86 hardware interfaces.

  25. Wizard Emeritus says:

    Oh by the way…

    “Read Many-Core Key-Value Store”

    It’s an interesting paper, Robert Pogson, but ultimately pointless. Tilera was just acquired (2/16) by Mellanox. From what is announced in the press release of the acquisition, I don’t think you will be seeing Tilera CPU tech in your price range any time soon.

  26. Wizard Emeritus says:

    “Well, I lose nothing. I will get a server that works for me and doesn’t depend on Intel.”
    There is, of course, a small fly in your ointment, Robert Pogson. The economies of scale that provided you with the cheap white-box x86 desktop systems you could turn into servers are simply not going to be there for you any time soon. Yes, you can have your ARMed server, but it’s probably going to be one of the Android developer boards that will set you back about 2 to 3 times what you want to pay.

    I have to admit, though, that I found this review of a Gigabyte micro-ATX motherboard from 2015 interesting.

    http://techreport.com/news/28014/gigabyte-latest-microatx-board-has-an-eight-core-armv8-soc

    especially the comments and associated references.

    Perhaps you have read them, eh?

    happy hunting.

  27. DrLoser wrote, “But, whatever. Normal load is not the Service Level one aims at when one runs 10,000 servers in a data center — common for Google, Bing, Amazon, Facebook, etc.
     
    High load is the minimum SLA. And given the specific work-loads and work-flows that these somewhat sophisticated bleeding huge systems require, Robert … you lose”

    Well, I lose nothing. I will get a server that works for me and doesn’t depend on Intel, just like FaceBook and others in the “Open Compute Project”. They are freeing themselves from lock-in, and M$ and Intel are contributing because they don’t want to lose the big boys as customers. FaceBook has tested ARMed servers and found that they work very well for particular loads, and they’ve even tested them on real loads, not simulations.

    Read Many-Core Key-Value Store

    “Our experiments show that a tuned version of Memcached on the 64-core Tilera TILEPro64 can yield at least 67% higher throughput than low-power x86 servers at comparable latency. When taking power and node integration into account as well, a TILEPro64-based S2Q server with 8 processors handles at least three times as many transactions per second per Watt as the x86-based servers with the same memory footprint.
     
    The main reasons for this performance are the elimination or parallelization of serializing bottlenecks using the on-chip network; and the allocation of different cores to different functions such as kernel networking stack and application modules. This technique can be very useful across architectures, particularly as the number of cores increases. In our study, the TILEPro64 exhibits near-linear throughput scaling with the number of cores, up to 48 UDP cores.”

    These guys are concerned about throughput, responsiveness and power consumption as well as cost, and their many-core SoC works for them. They compared low-power x86 servers with the many-core TILEPro64. I just need a few cores, and I have no doubt about the outcome: idling most of the time and the service I want the rest of the time.
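    The paper’s headline claim is a throughput-per-Watt ratio, which is easy to illustrate with a couple of lines of arithmetic. This is a minimal sketch: the wattage and transaction figures below are made-up placeholders for illustration, not measurements from the paper.

```python
# Illustrative sanity check of a throughput-per-Watt comparison like the
# Tilera-vs-x86 one quoted above. All figures here are invented placeholders.

def tps_per_watt(transactions_per_sec: float, watts: float) -> float:
    """Transactions per second delivered per Watt of power drawn."""
    return transactions_per_sec / watts

# Two hypothetical servers with the same memory footprint:
soc_box = tps_per_watt(transactions_per_sec=300_000, watts=200)  # many-core SoC server
x86_box = tps_per_watt(transactions_per_sec=250_000, watts=500)  # low-power x86 server

print(f"SoC box : {soc_box:.0f} TPS/W")
print(f"x86 box : {x86_box:.0f} TPS/W")
print(f"ratio   : {soc_box / x86_box:.1f}x")  # the paper reports at least 3x
```

    With these placeholder numbers the SoC box lands at exactly three times the transactions per second per Watt, matching the “at least three times” figure quoted from the paper.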

  28. kurkosdr says:

    (Oops, I meant Schwartz, slip of the tongue)

  29. Dr Loser says:

    Did it ever occur to you that ARM’s interests are different from Intel’s?

    Does anything in the real world occur to Pog? Maybe five years ago. I think he’s slipping.

    All he has now is ludicrous, unresearched links borrowed (second hand, but hey! The Four Freedoms!) from Dr Roy Schestovitz.

    What an awful way for four tremendously successful careers to implode.

    Still, c’est la vie.

  30. Dr Loser says:

    Skew, kurtosis, latency … you just don’t have a clue, Robert.
    Five data points fitted to an arbitrary conic section doesn’t really cut it in Big Data, old man.

  31. Dr Loser says:

    Yes, and Google loses huge share each time their responsiveness slows a bit… [SARCASM]

    Not responsiveness (as user experience) Robert. Server load.

    Deaf Spy is more or less correct on large server farms. I think 30% is a bit low “under normal load.” I’d place it more at 40%.

    But, whatever. Normal load is not the Service Level one aims at when one runs 10,000 servers in a data center — common for Google, Bing, Amazon, Facebook, etc.

    High load is the minimum SLA. And given the specific work-loads and work-flows that these somewhat sophisticated bleeding huge systems require, Robert … you lose

    You lose, big time. You’ve never even been near administering a system that involves more than ten servers, have you?

    Some of the rest of us have done that part time, in our evenings.

    You are genuinely and provably completely and utterly ignorant on this topic.

    Do yourself a favor and just shut up about it.

  32. Dr Loser says:

    “As you know, Twitter is not exactly handling the XMPP protocol correctly right now [2008].”

    Hard to argue with the FLOSS Pony Tail guy on that one.

  33. Dr Loser says:

    I am almost as old, and I yet have aspirations to be as senile, as Robert Pogson.

    Remind me again. Which lovable little Mom’N’Pop company bought Sun for pennies on the dollar?

  34. Dr Loser says:

    Scott McNealy would, but no sane CEO would.

    Well, to be fair, Kurks, it wasn’t actually Scott McNealy who took Sun down. It was in fact The Pony Tailed Moron.

    Jonathan Schwartz is Robert’s favorite sort of guy. Somebody who can trash the entire worth of a Fortune 100 company, and yet provide silly little freebies for an incompetent senior in Manitoba.

    Long may these magnificent IT leaders last!

    Or then again, from the perspective of anybody who isn’t a miserable cheapskate in Manitoba … possibly not.

  35. kurkosdr says:

    or take people = or take designer people

  36. kurkosdr says:

    Did it ever occur to you that ARM’s interests are different from Intel’s? ARM wants a commoditized market. But it would be silly for the Intel CEO to jump into a pool of commoditization when they can have their own large-enough niche called Windows-desktop-compatible tablets. Surface Pros sell better than everyone expected, so Intel can have a presence in mobile without having to compete with the likes of MediaTek, who don’t even design their own cores but share the same commoditized core with others and hence operate on tiny margins. No *sane* CEO would leave the profitable x86 business, or take people from it, to chase some tiny-margin ARM market. Or open up the x86 market. Scott McNealy would, but no sane CEO would.

    And Intel doesn’t think the world owes them a living. The world voluntarily gives them a living by buying Surface Pros and Dell Venues with x86 CPUs. As long as those sell well enough to sustain a large enough niche, Intel *makes* a living by selling at high margins and has no interest in fighting for ARM breadcrumbs. Deal with it.

  37. oiaohm says:

    https://www.soasta.com/blog/load-performance-testing-best-practices

    “Average utilization should be around 65%”

    And that is for shared instances on cloud providers; the not-to-exceed value for shared instances is 75%.

    “As a rule of the thumb, under normal load, a server must not exceed 30% load.”

    This is completely wrong by today’s standards, Deaf Spy, so please stop guessing at what the numbers are; if you are using 30% you are well and truly out of date. As CPUs have gotten faster, the share of capacity you need to reserve for hardware events has shrunk. On bare metal, as long as load always remains under 90% it is fine, so a load sitting at a constant 85% is perfectly acceptable; if it ever spikes over 90% for any reason, it is time for a bigger server. Those numbers hold for any CPU over 1GHz.

    Robert Pogson, spikes to 100 percent can run into issues on some hardware that needs to do work in the background. On Intel chips, for example, System Management Mode has to run to speed up or slow down the fans in response to heat triggers, and that consumes a few percent of CPU time; there are other examples on POWER, ARM and basically every other kind of chip, and they bite exactly when you need 100 percent. A 10 percent margin is enough to let those hardware events happen while still meeting processing requirements, and it also gives you a little breathing room to notice that you are running high before you hit the critical line, which on most hardware sits at 97-98% load. You do not want to cross that line: when you need (note the word need, as in time-critical) 100 percent, you will get 97-98 percent instead if the demand happens to align with hardware event processing, and that 2-3 percent shortfall can trigger a failure. In other words, a really annoying, purely random bug to chase down, so better not to go there in the first place.

    If 5-6GHz servers ever become standard, the bare-metal allowance moves to 95% at that point. This is why some people who have not kept up on their training still say 30 percent: when CPUs were slower, more headroom had to be left free for hardware events.

    Linux recommended load numbers have always been higher than the Windows ones, and even the Windows Server 2008 recommendation for a general server was 50%. Nowhere in 2008 was 30% being recommended for any OS. Without digging out my books, I am fairly sure even 2003 was 50% on recommended hardware.

    The last book I remember seeing 30% in was printed in 1996, basically 20 years out of date. If Deaf Spy had said 50%, at least that would be somewhere near a current Windows admin. The 30% number is just pure incompetence.
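    The competing thresholds in this thread (30%, 65%, 75%, 90%) amount to simple capacity rules. Here is a minimal sketch encoding those rules as stated in the comments above; the cutoff values are assumptions taken from this thread, not from any vendor documentation.

```python
# Capacity-planning rules as argued in this thread. The cutoffs below are
# assumptions lifted from the comments, not from vendor documentation.

BARE_METAL_LIMIT = 0.90   # bare metal is claimed fine as long as load stays under 90%
SHARED_AVG_TARGET = 0.65  # quoted target average utilization for shared cloud instances
SHARED_MAX = 0.75         # quoted not-to-exceed value for shared instances

def needs_bigger_server(load_samples: list[float]) -> bool:
    """Bare-metal rule: any spike over 90% load means it is time to scale."""
    return max(load_samples) > BARE_METAL_LIMIT

def shared_instance_ok(load_samples: list[float]) -> bool:
    """Shared-instance rule: average around 65%, never exceeding 75%."""
    avg = sum(load_samples) / len(load_samples)
    return avg <= SHARED_AVG_TARGET and max(load_samples) <= SHARED_MAX

print(needs_bigger_server([0.85, 0.85, 0.86]))  # steady 85% is acceptable on bare metal
print(needs_bigger_server([0.85, 0.93, 0.86]))  # one spike over 90%: time to scale
```

    Under these rules a constant 85% load on bare metal passes, while a single spike to 93% flags the box for replacement, which is exactly the behaviour described above.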

  38. Deaf Spy wrote, “As a rule of the thumb, under normal load, a server must not exceed 30% load. Period. If it does, it is time to scale, now. If not, then the system will not be able to meet peak loads.”

    Yes, and Google loses huge share each time their responsiveness slows a bit… [SARCASM]

    That’s not true at all. It depends on how large the spikes are. If spikes are 10 times the idling load, that might be true. If spikes are brief and just 100% utilization, there’s no problem with GNU/Linux. I get good responsiveness from GNU/Linux even under heavy loads with nearly 100% CPU utilization. Servers are even more forgiving because their clients are machines, not people, and the web is asynchronous. For Beast, normal spikes are not an issue. Typical spikes are loading a new application or opening a dozen windows at once, a few seconds’ delay at most, and meanwhile users have something useful on screen most of the time.

  39. Deaf Spy says:

    There isn’t any advantage to having an Intel server at higher price and power consumption idling all day long.

    This, Robert, is where you go terribly, terribly wrong.

    As a rule of the thumb, under normal load, a server must not exceed 30% load. Period. If it does, it is time to scale, now. If not, then the system will not be able to meet peak loads.

    You may not need that, that’s for sure. Businesses, however, do.

  40. DrLoser wrote, “server farms are not tiny. “

    I never said they were. An ARMed server can be tiny, even fanless, while even a large, bulky server can run thousands of virtual servers, which is what many do. If a big server costs $40K and runs 1000 websites, each virtual server might cost $40, similar to the cost of an RPi server. I choose not to use a hosted virtual server to replace Beast because of bandwidth and the cost of the service. A $300 ARMed server will cost me much less over its lifetime and could be more reliable. My ISP seems to have several longish outages each year; I could work around that with multiple ISPs, but then I would have multiple costs. No, a tiny ARMed server is the way to go, and that decision has nothing to do with server farms.
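    The back-of-envelope comparison above is just division, and it can be written out explicitly. This sketch uses only the dollar figures quoted in the comment, as illustration rather than real pricing.

```python
# The per-virtual-server cost arithmetic from the comment above, made explicit.
# All dollar figures are the ones quoted in the comment, used illustratively.

big_server_cost = 40_000   # a $40K server...
virtual_servers = 1_000    # ...running 1000 websites as virtual servers

cost_per_vm = big_server_cost / virtual_servers  # each VM's share of the hardware

arm_server_cost = 300      # the quoted price of a small ARMed server

print(f"per-VM share of the big server: ${cost_per_vm:.0f}")
print(f"standalone ARMed server:        ${arm_server_cost}")
```

    The point of the comparison is that a $40 slice of a big virtualized box and a small dedicated ARMed box sit in the same price neighbourhood once you amortize the hardware, before bandwidth and hosting fees enter the picture.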

  41. Dr Loser says:

    There’s not a lot of difference between a virtual server and an ARMed server. They are tiny, cost little and use little power.

    Drivel. Utter drivel.

    You’ve never seen a server farm in your life, have you, Robert? Because server farms are not tiny. They do not “cost little.” And the power they consume is conspicuous.

    Don’t make comparisons with things you have no clue about.

  42. … wrote, “At today’s large server loads, ARM actually may surpass x86/x64 solutions power consumption when measured against equal units of computational output (ie, REAL work loads).”

    Well, my ancient AMD64 server is idling all day long. There isn’t any advantage to having an Intel server at higher price and power consumption idling all day long. So my single-server datapoint surely leads to ARM, and I would be there already if there were better choices than vapourware on Lenovator with the RAM, networking and storage that I want. This will change in 2016. I will be able to run my server load from a tiny SoC and a pile of RAM and storage with nothing from Wintel in the mix. That’s a winning solution: paying for what I need, not what Wintel offers.

  43. It depends on the workload, and with many servers idling, folks virtualize. There’s not a lot of difference between a virtual server and an ARMed server. They are tiny, cost little and use little power. There is a role for ARM to play in servers, particularly at the “low end” and in huge projects where totals matter.

  44. ... says:

    Seems a little biased. How are ARM-based servers heading for a bright future? They will be lower power, and not much more. At today’s large server loads, ARM actually may surpass x86/x64 solutions’ power consumption when measured against equal units of computational output (ie, REAL work loads).

    The only place I’ve heard ARM mentioned as a way to abandon x86 in the server closet/farm is in LARGE server farms, where power consumption can be dropped by appreciable levels, but again, this is only in situations where energy is actually rated above RRRs (ie, barely anywhere in the real world).

    So it sounds more to me like you just want any platform to jump onto other than Intel, and hey, who could blame you. But just say so; don’t mislead the less informed into thinking ARM is heading somewhere it has ambitiously tried to go but ultimately stands no chance of reaching…

    If anything, ARM devices are only able to exist in multitude BECAUSE x86/64 servers serve the content. It just wouldn’t be the case if ARM servers did the same, not at the same cost anyway.
