20nm ARM Chips This Year From Samsung

ARM is doing well at 45nm and below. Samsung is expecting a 30% power reduction at 20nm compared to its 28nm process at the same level of performance. Its current 45nm Exynos processor is perfectly able to run all kinds of gadgets. Two process nodes smaller should make for very long battery life. This technology is quite suitable for lower-priced PCs of all kinds.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

11 Responses to 20nm ARM Chips This Year From Samsung

  1. Even considering power consumption alone, the analysis is not simple. As things are made smaller with Moore’s Law marching on, leakage current becomes about as important as the energy required to flip bits (charging and discharging capacitance through a resistance). I think that means in the end that the chip with the fewer transistors wins, and that would be ARM. OTOH, having a simpler CPU can mean RAM has to be larger, so overall power consumption has to be considered. Small cheap computers are at a stage where the memory consumes about as much power as the CPU. I don’t know where it will end but ARM will be competitive. That’s good. I like competition, as long as I get to choose the winning technology at a reasonable price. Only a few years ago the latest desktop/PC CPU came in around $1K. Now we get much more computing power for far less than $100 and we use less power as well.
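
    A rough sketch of that trade-off, using the standard first-order CMOS power model (dynamic power ≈ a·C·V²·f plus leakage ≈ V·I_leak). Every number below is an illustrative placeholder, not a figure for any real chip:

    ```c
    /* First-order CMOS power model: P_total = P_dynamic + P_leakage,
     * with P_dynamic = a * C * V^2 * f and P_leakage = V * I_leak.
     * All values are illustrative placeholders, not real-chip figures. */
    #include <stdio.h>

    int main(void)
    {
        double a = 0.15;       /* activity factor: fraction of gates switching */
        double c = 1.0e-9;     /* effective switched capacitance, farads */
        double v = 1.0;        /* supply voltage, volts */
        double f = 1.0e9;      /* clock frequency, hertz */
        double i_leak = 0.05;  /* total leakage current, amperes */

        double p_dyn  = a * c * v * v * f;  /* switching ("flipping bits") power */
        double p_leak = v * i_leak;         /* static leakage power */

        printf("dynamic %.3f W, leakage %.3f W, total %.3f W\n",
               p_dyn, p_leak, p_dyn + p_leak);
        return 0;
    }
    ```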

    Is anyone considering putting a 100 watt CPU in a tablet? I doubt it. This could be the last year that fans will be found in portable equipment. Thin clients, tablets, notebooks, and smartphones can all benefit from lower power consumption. It will happen and I cannot see an x86 chip using less power than ARM unless Intel can go where the rest of the world cannot.
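
    For perspective on that rhetorical question, a back-of-the-envelope drain calculation; the battery capacity here is an assumed typical-tablet figure chosen only for illustration:

    ```c
    /* How long an assumed 25 Wh tablet battery would last feeding a
     * 100 W CPU alone, ignoring the screen, radios and everything else. */
    #include <stdio.h>

    int main(void)
    {
        double battery_wh = 25.0;   /* assumed tablet battery, watt-hours */
        double cpu_watts  = 100.0;  /* desktop-class CPU power draw */

        printf("runtime: %.0f minutes\n", battery_wh / cpu_watts * 60.0);
        return 0;
    }
    ```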

  2. oldman says:

    ““8” even running natively is likely to suck.”

    I wouldn’t be so sure Pog.

    But given the bloat in features in ARM designs just getting to the first-generation Android tablets (going from 0.5GHz single-core processors supporting 256MB of RAM to 1GHz dual-core monsters with 1GB of RAM), I am willing to bet that as ARM aspires to support more elaborate programs (which will happen if user demand for a mobile device that can double as a home desktop materializes), Moore’s Law will take over and successive generations of ARM will bulk up just like AMD/Intel and eventually, IMHO, become no more power-efficient than Intel.

    Meanwhile, the x86 vendors will work on getting more horsepower into a lower power envelope. Whether they will be able to get power consumption low enough to meet the challenge of the rising power consumption of the ARM processors remains to be seen.

    We live in interesting times indeed.

  3. My observations have been on just booting the OS and running certain common applications. GNU/Linux boots faster and starts processes much more quickly. Then there is malware. I have seen systems running with dozens of pieces of malware soaking up resources. That is a consequence of the design of the OS, not the apps. I have seen fairly light installations of XP swapping like mad in 512 MB just booting. What’s with that? An OS that is lean and mean is much better suited to running on ARM. Notice that Android is an interpreted/virtual-machine system and it is snappy on these tiny processors. “8” even running natively is likely to suck.

  4. I don’t think you are right about the cache inefficiency of the NT kernel. As I say, it was designed to run on Pentium-class machines with small caches, and I don’t see why its working-set size would have increased so dramatically. The impact of the OS on the cache is really not very large.

    However, if you are talking about services and applications, then you are probably right. These may well have working sets that are larger than typical caches. But the same is true of applications and services on Linux. On either OS we can disable unnecessary services to improve performance, but we cannot rewrite applications to make better use of cache. Unless the applications happen to have been written by Microsoft on the assumption that there will always be many megabytes of cache, then this is not Microsoft’s fault.

  5. I personally saw Vista struggling on dual-core AMD64 5000 CPUs in desktops with 512KB caches. Celerons with 2MB caches were like rockets in comparison.

    It’s not necessarily the loop size but the number of loops/processes as well. I have never seen GNU/Linux run slower than XP on any hardware, and I know “7” is slower than XP. Why, then, would “8” be faster on ARM? I expect there will be another Vista-like failure if they don’t do a major rewrite, and that will take time.

    I do understand caches. I build GNU/Linux terminal servers and I know what caches do for performance. I also match memory bandwidth with motherboards to make best use of I/O. Caching is much the same as buffering for I/O with RAID. The bandwidth from the cache is several times greater than from main memory. Some CPUs have tens of gigabytes per second of cache bandwidth, and serious processors meant to run large numbers of processes have huge caches to hold as much as possible. Some of the big server chips have 32MB caches. M$ has relied on Intel chips to carry its bloat for years. I don’t see them rolling the bloat back to fit a tiny cache.
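
    A crude way to see the cache-versus-RAM bandwidth gap on any GNU/Linux box is to sweep a buffer that fits in cache and one that does not, and compare throughput. The buffer sizes and pass counts below are arbitrary choices, and the printed numbers will vary widely by machine and compiler:

    ```c
    /* Crude bandwidth comparison: repeatedly sum a small (cache-resident)
     * buffer and a large (RAM-sized) buffer and report rough GB/s for each.
     * Sizes and pass counts are arbitrary illustrative choices. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void sweep(const char *label, size_t bytes, int passes)
    {
        size_t n = bytes / sizeof(long);
        long *buf = malloc(n * sizeof(long));
        volatile long sum = 0;  /* volatile keeps the loop from being optimized away */

        if (buf == NULL)
            return;
        for (size_t i = 0; i < n; i++)
            buf[i] = (long)i;

        clock_t start = clock();
        for (int p = 0; p < passes; p++)
            for (size_t i = 0; i < n; i++)
                sum += buf[i];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%s: %.1f GB/s\n", label, (double)bytes * passes / secs / 1e9);
        free(buf);
    }

    int main(void)
    {
        sweep("small buffer (fits in cache)", 32 * 1024, 100000);
        sweep("large buffer (spills to RAM)", 256 * 1024 * 1024, 12);
        return 0;
    }
    ```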

  6. Linux Apostate says:

    “On earlier ARM processors the CPU cache, where frequently accessed instructions and data are accessed much faster than from main memory/RAM, was much smaller than on x86. Imagine having to edit every loop in that other OS to make sure it fits in that tiny cache. Some were 64KB. That re-writing is much more than just recompiling. It’s about reducing the number and size of processes and loops/arrays/structures.”

    What are you on about? If your loop code consumes more than 64KB then your loop will not fit in L1 cache on *any* CPU. Performance will suck horribly, everywhere.

    I don’t think you really understand this topic. Caches are important, in that there is a major drop in performance when the working-set size exceeds the cache size, or the working set won’t fit in cache for some other reason (e.g. because of associativity conflicts). But cache sizes are still much bigger than the sizes of typical OS routines, loops and data structures, and that is true on Windows as well as Linux. Remember that NT was designed to run on Pentium-class machines in the 1990s, which had even smaller caches than typical ARM chips today. The largest loops probably consume about 1KB of code on x86 (and somewhat more on ARM, because fixed-width RISC opcodes give lower code density).
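
    For a sense of scale, here is a representative inner loop; the byte counts are ballpark figures and the exact size depends on the compiler and flags:

    ```c
    /* A representative inner loop.  Built with, say, "gcc -O2 -c loop.c",
     * the machine code for this function can be inspected with
     * "objdump -d loop.o"; it typically amounts to somewhere between a few
     * dozen and a few hundred bytes, far below even a 64KB L1 cache. */
    long sum_array(const long *a, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }
    ```

    It is data working sets (arrays, heaps and buffers), not loop bodies, that routinely outgrow CPU caches.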

    “Presumably, the consultations between ARM and M$ were about caches.”

    It seems… unlikely.

  7. Digital Atheist says:

    Like Linux is gonna fill some kind of gap? Let’s be realistic here for a moment. Linux has had lots of opportunities over the years to swipe Microsoft’s lunch. Remember Linux being sold at Best Buy and other stores in the ’90s? By the time they took it off the shelf, most retailers were about ready to pay people to carry home a CD. For over a decade now it has been free, but it still hasn’t gained any traction. When computer makers decide to offer it on PCs, invariably they wind up taking it back off the market because they can’t sell them. People prefer Windows or OSX because… well, they like everything on their computer to work when they turn it on… after they update… or if the wind changes direction. Having tried various distros of Linux over the last few years, every last one has left something on my PCs not working.

    If it isn’t the audio that’s broken, then it’s the CD drive, or the graphics are a pain in the butt… and yes, I’ve reached the conclusion that a change in wind direction can cause these errors… because nothing else seems to make any sense.

    The only way that Linux will EVER make any progress is for them to get over having hissy fits every time someone doesn’t like a set of icons or a window manager and running off to make another fork. One distro and no more… let the others be hobby OSs if that is what is needed, but put all the big talent in one pool. And yes, the really good developers aren’t gonna be the basement dwellers who will do it for free… it is gonna be the developers who have talent and expect a check at the end of the day, so they can eat. Donations are not the answer, and neither is selling support. At some point a price will have to be set. Does that mean it has to cost the same as Windows or OSX? Probably not, but until the Linux community unifies its effort behind one distro, it will always be a hobby OS with bits and pieces scattered around. Until then, don’t count on ARMageddon or XOOMsday to rescue it. Linux has proven over and over that it doesn’t have the resources to do anything in the market… and no, Android doesn’t count. It has been gutted and had so much changed that it is no longer the Linux you want to push.

  8. Yes. It makes little sense to pay for a $100 licence to use that other OS on a $30 processor. ARM will be running these wonderful quad-core CPUs on $100 devices this year. M$ can cut prices as they did with XP on netbooks but that will kill the monopoly eventually. I cannot see marketing undoing the huge lead of Linux.

  9. On earlier ARM processors the CPU cache, where frequently accessed instructions and data are accessed much faster than from main memory/RAM, was much smaller than on x86. Imagine having to edit every loop in that other OS to make sure it fits in that tiny cache. Some were 64KB. That re-writing is much more than just recompiling. It’s about reducing the number and size of processes and loops/arrays/structures. When XP was replaced by “7” in netbooks we saw things slow down terribly and that was with Atom with 512KB caches.

    Some of the later ARM processors do have larger caches but still nothing like the Core i7 with 8MB caches. Tegra 2 is 1MB. I have seen some stories mentioning Tegra 3 will have 4 MB caches. Presumably, the consultations between ARM and M$ were about caches.
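
    Rather than guessing at cache sizes, one can ask the kernel. A minimal sketch for GNU/Linux, assuming the standard sysfs cacheinfo layout; it simply reports whatever the kernel exposes, and some ARM kernels of this era leave these entries unpopulated:

    ```c
    /* Print the CPU cache hierarchy as reported by Linux sysfs
     * (/sys/devices/system/cpu/cpu0/cache/indexN/{level,type,size}).
     * If the kernel does not populate cacheinfo, nothing is printed. */
    #include <stdio.h>
    #include <string.h>

    static int read_attr(int idx, const char *name, char *buf, int len)
    {
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, name);
        FILE *f = fopen(path, "r");
        if (f == NULL)
            return 0;
        int ok = fgets(buf, len, f) != NULL;
        fclose(f);
        if (ok)
            buf[strcspn(buf, "\n")] = '\0';  /* strip trailing newline */
        return ok;
    }

    int main(void)
    {
        char level[16], type[32], size[32];

        for (int i = 0; i < 8; i++) {
            if (!read_attr(i, "level", level, sizeof level))
                break;  /* no more cache levels listed */
            if (read_attr(i, "type", type, sizeof type) &&
                read_attr(i, "size", size, sizeof size))
                printf("L%s %s cache: %s\n", level, type, size);
        }
        return 0;
    }
    ```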

    Still, most of 2011 will be a free-fire zone for ARMed processors running Linux. The earliest I expect M$ to come out with “8” is later in 2012. That’s an eternity of a lead, and then there are those applications we are told are so essential to M$’s ecosystem. How many will be ported to ARM? It took M$ years to develop Vista and years to fix the bugs and call it “7”. I cannot see a port to ARM taking less than years. Assuming one year has passed since the decision to go was made, it is at least one year before release. Rumours are beta testing in 2011-9, and doesn’t it take M$ a year to beta-test anything? That puts release late in 2012, by which time ARM will be all over personal computing. The “7” beta-testing period was 9 months; that would put “8” at about 2012-6 at the earliest. Further, does anyone expect a new release on a new architecture from M$ to be anything but bug-ridden? There will likely be many months before major uptake.

  10. Dan Serban says:

    Just because you CAN do something doesn’t mean you SHOULD.
    Having Windows on ARM, although technically feasible, kind of defeats the purpose of going with ARM in the first place.

  11. Not a loon says:

    Pog, why are you betting on ARM so much?

    Microsoft has already declared that it is working on a Windows version for ARM. And it’s not like NT isn’t portable: NT4 ran on Alpha, Windows Server 2008 still runs on Itanium, and I think the Xbox 360 runs some NT derivative, and the Xbox 360 has a PowerPC architecture.

    Windows Phone 7 doesn’t exactly run on x86 either. I just don’t get why you think that Microsoft just can’t create a Windows version for ARM. All the evidence shows otherwise.
