Atom PC – Future PC

I’ve long proposed that small cheap computers are the future. “Atom PC is an easy to setup and use Mini Desktop PC, which is powered by Quad core Cortex-A17 1.8G processor. It supports Dual OS (Android & Ubuntu OS). It’s not only a Computer, but also a Home Media Center, Gaming Box, Portable Linux Workstation, Skype & Video Conference tool etc.” It looks like 2015 could be the year this concept goes mainstream. IndieGoGo has a modest project to begin mass-production of a nice little ARMed computer which should satisfy most needs. It’s got enough computing power, graphics power and memory to be useful for all the kinds of tasks folks use a smartphone or tablet for, but it’s definitely a desktop-PC form factor. It has the instant supply of Android apps and the usability of a GNU/Linux desktop, all in one package.

There are some limitations though… It needs more RAM if folks are going to go crazy browsing the web and having a bunch of windows open. 2GB is plenty for the OS but marginal for browsers like Chrome. With Firefox it should be OK. Storage is where it’s lacking as a desktop PC, but that can be fixed with an external USB drive of some kind. The internal flash drive is a good start out of the box. Presumably, an external USB drive would be where a user could plunk lots of downloads, backups, local copies, multimedia files etc. It’s not the latest and greatest ARM processor, but likely that will be available in similar boxes before long. ARM is good enough to run a good desktop and should be quite competitive with ChromeBooks, which are accepted by the market. I can see such devices multiplying like rabbits in 2015, leaving that other OS and its cumbersome hardware in the dust. There might still be a place for a Wintel PC in the market by the end of 2015, for things like working on huge data/multimedia, but that can mostly be done on a server somewhere out on the network. A desktop should be a cozy and quiet place, and that is where this PC belongs.

See Atom PC – High Performance Android&Ubuntu Mini PC.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I worked and taught in maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

77 Responses to Atom PC – Future PC

  1. oiaohm says:

    DrLoser, please explain why I should even bother responding to a moron like you who has to cite insults to attempt to get me to do what you want.

    Wake up: whatever you call me, I will be. You call me a liar and a cheat, so I have no reason to play fair any more. You call me a buffoon without grounds, so I have no reason to find you cites any more, because you will not believe them or read them anyhow. So why waste my effort?

    If anyone is the ignorant slut it is you, DrLoser. You always ask me for crap my cites have already covered.

  2. oiaohm says:

    Whether or not that memory is mediated via an MMU is completely irrelevant
    DrLoser, where is your cite that this is part of the Von Neumann architecture? Please find one. You demand cites out of me all the time.

    Mesh interconnection networks and hypercube interconnection networks are not part of the Von Neumann architecture at all, but every single MIMD design uses one or the other. In fact you cannot implement them and remain a Von Neumann architecture.

    MIMD, to work correctly, is a different architecture design to a Von Neumann architecture. All MIMD CPUs are based on the Harvard architecture, with most being Modified Harvard Architecture so they are able to emulate the features of the old Von Neumann architecture. Emulating features of the old Von Neumann architecture does not mean picking up its bugs.

    DrLoser, you are an ignorant buffoon to think a MIMD system has the same problems as a Von Neumann architecture system.

    The MMU must be able to connect directly to 4 other MMUs in a mesh architecture.

    The Von Neumann architecture only has one in and out. A mesh-supporting CPU has in and out for CPU messaging plus 4x in and out for moving memory, connected to the MMU. ARM processors have this.

    The fact that an ARM CPU is not a Von Neumann architecture is why it can scale so well. Not all CPUs are Von Neumann architecture; most are not. x86 is also not Von Neumann.

    By the way, ARM and x86 are technically Modified Harvard Architecture. Multi-CPU ARM is Modified Harvard Architecture plus one of the following memory managements (bus-based, hierarchical) plus one of the following interconnects (hypercube interconnection network, mesh interconnection network). That is a lot more options than your general x86.

    Hypercube memory is what a Cray supercomputer uses. Interestingly enough, all ARM CPU memory designs match up with doing MIMD. HyperTransport in AMD CPUs is hypercube and mesh, but one connection short of a proper mesh: 3 instead of 4, and yes, 4 is required for a proper mesh.

    Intel manages to screw theirs up completely with QuickPath Interconnect requiring data to be written to the local memory controller before it can be sent to a remote memory controller. With only 2 interconnect connections this means you are mega screwed. AMD and ARM CPUs can both go directly into the transport system to write into a different memory controller belonging to a different CPU.

    With an AMD CPU you might stand a chance of linear scaling because its memory supports it. Note the Blender benchmark.

    DrLoser, the reality is we don’t use the Von Neumann architecture at all. So issues of the Von Neumann architecture don’t really apply to anything we use in a phone, tablet, laptop or desktop.

    DrLoser, I see you go to the effort of citing insults yet you did not cite a single thing to back up your case. If you had, you would have found that you were completely wrong. I gave you the MIMD term; is it really that hard to look it up on Wikipedia to find out that MIMD is not just instructions but CPU design?

    The Von Neumann architecture excuse is commonly thrown around by idiots to explain x86 under-performance, because they are too much of an idiot to know that an x86 is not a Von Neumann architecture, so the problem cannot be the Von Neumann architecture, unless of course some Intel designer has been stupid and embedded a Von Neumann architecture inside a Harvard architecture (which would be a screw-up of screw-ups).

    Harvard architecture does not include how to cluster CPUs to operate effectively. So MIMD includes extensions to the design. How you implement MIMD determines how well your CPU scales with multiple threads.

    DrLoser, every bit here is confirmable if you do your research.

    DrLoser, here is something funny: the cite over performance on Blender is in fact about a sort. The basic tile-handling system in Blender is a qsort for multiple processors. Yes, sorting whether this object is viewable in frame X or not. A BSP tree, DrLoser, is a sort.

    So you asked me for an example of a sort that scaled linearly; I had already given you one. Blender on Linux on AMD Opterons.

    Linear scaling happens; get used to it.

  3. DrLoser says:

    Giving me this kind of fluff as a response simply won’t achieve anything except making me and the more knowledgeable readers laugh at you silly.

    That is entirely unfair of you, TEG.

    What makes you think that there are any “more knowledgeable readers” on this site?

    I mean, what are you smokin’? Have you been paying attention to the likes of Dougie over the last few years?

  4. That Exploit Guy says:

    Android/Linux can indeed run multiple simultaneous users if GNU/Linux is installed beside. You can even have multiple instances of Android if needed in GNU/Linux.

    This is quite frankly the stupidest argument I have been given this week, and considering the mountain of GamerGator garbage I have already read through beforehand, that really says something.

    Seriously, by your logic, my old Pentium machine that I used to multi-boot several instances of DOS on was multi-user as well, although I have a feeling that you would rather die than agree with that.

    That other OS, for the longest time, had DOS underneath, which had little or no concept of security or multiple users. NT did, but because backwardness was forced on NT, that was whittled away.

    I have seen newspaper horoscopes with more tangible statements of fact than this bit of nonsense.

    Look, Pogson, there is no shame in admitting that you don’t have the faintest clue about the subject matter in question. No one knows everything, and it is part of IT professional ethics that you be honest about what you know and what you don’t know. Giving me this kind of fluff as a response simply won’t achieve anything except making me and the more knowledgeable readers laugh at you silly.

    Try harder next time.

  5. DrLoser says:

    Fifi?

    You always have been, you still are, and apparently you have not the remotest interest in becoming anything other than …

    an ignorant buffoon.

    But perhaps you’re exhausted by confronting me, TEG, Deaf Spy, oldfart and others on the subject of IT. Which would be fair enough. You deserve a break.

    Why not confront Robert on matters Physical such as, say, MH17?

    As I recall, you looked particularly foolish when you tried that. No matter. I’m sure you have a tin-foil hat theory cooked up and ready to be unleashed on an unsuspecting world!

  6. DrLoser says:

    I am as a magpie amongst golden babble.

    Memory locality is a problem of the MMU design.

    No, it’s a problem of any von Neumann architecture whatsoever, Fifi. Whether or not that memory is mediated via an MMU is completely irrelevant, you ignorant slut.

    How effectively can it move memory around…

    It doesn’t “move memory around,” Fifi, you ignorant slut.

    … without the cpu having to stop.

    A very profound observation. If a CPU stops on a motherboard, can the memory hear?

    Drivel, Fifi.

    Yes this does relate to cachelines.

    Well, one out of four ain’t bad. We’ll make a Professional out of you yet, oiaohm, if the Lure of the Lamp-Post doesn’t beat us to it.

  7. DrLoser says:

    There are particular things ARM processors can do very well. The design allows linear scaling.

    Classic oiaohm. It’s impossible to deny the first proposition.

    The second one? It’s bollocks.

    And it’s going to be really, really, difficult to refute my assertion that “the [ARM] design” does not, in any way specific to ARM designs, “allow linear scaling,” isn’t it, Fifi? At least without a cite to that effect.

    Which would never surface in any event. Because, Fifi, you don’t have a clue what “linear hardware scaling” actually means, do you?

  8. DrLoser says:

    Sorry now no more cites for you ever.

    What, No Soup For Me?

    I am distraught, oiaohm. As, also, will everybody else on this site who relies on your hilariously inept and ragged selection of “cites” to alleviate the sheer monotony of being forced to plod through the sheer monotony of your endless walls of gibberish.

    Ah well. C’est la vie.

  9. DrLoser says:

    Well, one of my batch-processing sorting jobs with 5 magnetic tapes and a bunch of data got into the “A” queue on a main-frame once.

    That’s an interesting and, I think, instructive example, Robert. Unlike certain quoted papers from 1988 with no real-life significance, it actually speaks to the question.

    Now, to start off with, we’re not talking about some trivial qsort example (the type that oiaohm is focussing on, because it allows him to escape reality). Your job involved five mag tapes and was therefore (I assume) a merge-sort. Which is an important and (again) instructive difference.

    Why? Well, it demonstrates “locality of reference” on a very early memory model (ie limited RAM/disk, dependency on tertiary memory). Page in, page out, page in again. At a stretch, you could compare this to the memory cache issues with modern parallel processing: it’s important to pre-organise and pre-structure the data in order to minimise cache flushes (tape dismounts).

    It also demonstrates Amdahl’s law (unsurprisingly). Even with individual algorithms like a merge sort, there’s going to be an irreducible amount of serial processing — in this case, swapping tapes in and out.
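
    To make that concrete, here is a minimal sketch (in Python, with in-memory lists standing in for the sorted runs on the five tapes) of the k-way merge at the heart of such a job:

        import heapq

        # Each "tape" holds a pre-sorted run, as the sort phase of an
        # external merge sort would produce.
        tapes = [
            [1, 5, 9, 13],
            [2, 6, 10, 14],
            [0, 4, 8, 12],
            [3, 7, 11, 15],
            [16, 17, 18, 19],
        ]

        # heapq.merge streams the five runs into one sorted output, touching
        # each element once; this merge phase is inherently serial, which is
        # exactly where Amdahl's law bites.
        print(list(heapq.merge(*tapes)))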

    Now, as to qsort (or the Latrobe equivalent) and Amdahl’s law … ignoring the real-world organisational complexity (which we shouldn’t: cf Robert’s example of mag tapes), this is the sort of algorithm that is traditionally described as “trivially parallelisable.” (Yup, it’s a real technical term.)

    Trouble is, nobody pays you to do nothing else but sort data all day. Almost everything in IT is basically a pipeline: take input, apply parallel algorithm A, pipe through linear decision tree X, apply parallel algorithms B, B’, B”, pipe through linear decision tree Y … rinse, wash, repeat … produce output.

    Amdahl’s law points out that X and Y are quite important, and theoretically irreducible. Modern computer architectures (and, pace Fifi, it doesn’t matter one whit whether they are x86 or ARM or whatever) still impose a “memory model” tax of some sort (including test-and-set atomic operations at a bare minimum) for A and B.

    It turns out that even test-and-set (counter-intuitively) comes with a direct expensive hit in cache flushes. oiaohm hasn’t shown any interest in my offer to cite Herb Sutter on parallel processing in general, so I confidently expect him to lack any sort of intellectual curiosity whatsoever on this one, too.

    But if he asks nicely, I can direct you to Joe Duffy on the subject.

  10. DrLoser wrote, “point me at a piece of code on any architecture you care to name that can sort a million data points (N) over a hundred parallel “processing engines” (P) in O(N log N/P). Or choose a suitable P and a reasonably large N.”

    Well, one of my batch-processing sorting jobs with 5 magnetic tapes and a bunch of data got into the “A” queue on a main-frame once. The operator was so irritated she threw a chair across the room. It was on rollers and covered some distance. I used Assembler so the CPU time was minimal and “chained scheduling” so the I/O counts were tiny… Normally such jobs were done after midnight on someone else’s shift… I only tried that once.

  11. oiaohm says:

    DrZealot
    You are an ignorant buffoon, Fifi.
    Sorry now no more cites for you ever.

    Since a buffoon should not know where cites are.

  12. oiaohm says:

    DrLoser, other items that happened to have the mesh memory layout were things like Cray supercomputers.
    Prove me wrong and point me at a piece of code on any architecture you care to name that can sort a million data points (N) over a hundred parallel “processing engines” (P) in O(N log N/P). Or choose a suitable P and a reasonably large N.
    There is another white paper, running on a Cray supercomputer, that does exactly that: in fact way more than 1000 processors, and it was 1 billion records. Same result as the 1988 paper. So every time the 1988 paper’s method has been tested it has worked.

    Reality is, it works. What was done in 1988 on the right hardware can be done today. Exactly why do you think HP and AMD are so interested in ARM processors? There are particular things ARM processors can do very well. The design allows linear scaling.

    Locks, true, can be an issue, and designing lockless code is hard. Memory locality is a problem of the MMU design. How effectively can it move memory around without the cpu having to stop. Yes this does relate to cachelines.

    Cachelines help in the ARM design but hinder in the older Intel x86 design. I don’t know who thought it was a good idea for L1 and L2 inside an Intel x86 processor to have two different cacheline lengths. Yes, cachelines are one of the reasons you don’t stand a chance of decent sorting speed on an x86 if you use multiple processors. You have to push 2 cachelines out of L1 to L2, where they become 1 cacheline in L2, and pull 1 cacheline from L2 back into 2 cachelines in L1 inside an Intel x86. This can waste 50 percent of your internal CPU bandwidth. No bandwidth, no performance. With the i7 someone at Intel finally saw common sense and fixed this. Yes, this is one of the things that required fixing so mesh memory for moving stuff around can work effectively, because you need to be able to transfer cachelines between CPUs in a mesh memory system to sort effectively.

    DrLoser, basically there are other issues that Intel still has to fix. But I can be sure one day Intel will fix them.

    DrLoser, why do I have to go out and re-prove a known fact? Wait, it’s because you did not know it.

  13. oiaohm says:

    You still have to distribute the set of data to be sorted across the MIMD architecture, Fifi.

    Go ahead. Tell us how to do that on an O(n) basis.
    DrZealot, you need to look back at that 1988 documented hardware.
    A mesh interconnection network MMU, without a stack of extra crap on it, first appears in the hardware used back in 1988 for that paper.

    ARM has mesh-interconnected MMUs. x86 does not, except in rare prototypes. Yes, all that x86 pipelining and caching crap destroys the memory management, ruining the possibility of linear scaling by adding CPUs.

    It is in fact better than an O(n) basis, due to the fact that the CPU and the MMU operate independently of each other in the mesh setup. So there is zero time cost. Yes, there is a bandwidth cost, but that is it.

    Mesh-interconnected MMUs make multiplexed memory transparent to the CPUs.

  14. DrLoser says:

    So yes a result of O(n log n) on a multi-processor system is linear performance scaling with extra CPUs being added.

    1) Amdahl.
    2) Locks.
    3) Cache lines.
    4) Memory locality.
    5) You are an ignorant buffoon
    6) Prove me wrong and point me at a piece of code on any architecture you care to name that can sort a million data points (N) over a hundred parallel “processing engines” (P) in O(N log N/P). Or choose a suitable P and a reasonably large N.
    7) You can’t, can you?

    You are an ignorant buffoon, Fifi.

  15. DrLoser says:

    Onward and forwards to a discussion with somebody who evidently has a functioning brain.

    That other OS, for the longest time, had DOS underneath, which had little or no concept of security or multiple users.

    Completely wrong, Robert. DOS had no concept of security. DOS had no concept of multiple users (saving my putative example of IRQs: see below).

    I’m rather surprised at your generous assessment here. You must really have loved DOS.

    NT did…

    And still does. It’s baked in. It’s what we computer professionals call “Architecture,” Robert.

    …but because backwardness was forced on NT

    “Backwardness” in terms of what, precisely? Were we talking about multiple users or a security model? Because nothing “backward” like that was “forced” on NT, Robert. A cite, please.

    … that was whittled away.

    You can’t “whittle away” something that wasn’t there in the first place, Robert.

    Although you raise an interesting point here. When are the basic, irrefutable, fundamental, inherited flaws of the *nix Security Model going to be “whittled away?”

  16. oiaohm says:

    DrLoser
    1) It’s a purely mathematical algorithm. As anybody who has tried to translate a purely mathematical algorithm to a representational computer program would know, there is a certain, shall we say, “lossage” in the translation. Specifically in this case, a lossage of efficiency.
    But it was tested on real hardware. The lossage in translation is very small if you have the right CPU instructions.
    2) Even failing that, we’d be talking about 1988 hardware. Things have changed. Pipelining, caches, etc, anybody?
    The CPU was http://en.wikipedia.org/wiki/NS320xx
    Even after 20+ years, the instruction set designs of an NS320xx and an ARM are very related in particular areas.

    The wheel is older hardware but the method of a wheel has never changed. Pipelining, caching… all really don’t help a sort that much. Most of the new changes in the last 20 years don’t improve multi-processor sorting. Yes, we hit our peak in 1988 for sorting.

    3) In the best possible case, that algorithm translates to O(n log n). Which is not “linear,” no matter how much Fifi bleats that it is.
    O(n log n) is as good as the best sort algorithms do on a single processor. So there is no difference between 20 CPUs running it or one very big CPU.

    http://rosettacode.org/wiki/Sorting_algorithms/Quicksort Basically DrZealot here is attempting to be a smart-ass before looking up the topic. Quicksort is between O(n log n) and O(n squared).

    So yes a result of O(n log n) on a multi-processor system is linear performance scaling with extra CPUs being added.

    I really wish DrZealot would stop butting in on topics he absolutely knows nothing about. Really, you absolutely know nothing about performance maths on sort algorithms.

    So not a single thing you just challenged me on is correct, DrZealot.

  17. DrLoser says:

    My statement is the absolute truth.

    No it’s not, Fifi. It’s absolute imaginary bullshit in precisely the same way that every other statement you have ever made is absolute imaginary bullshit.

    Care to argue that point with “formal debate method?”

    Because, Fifi, I am up to that challenge. Are you? And no Gish Galloping whilst you’re at it, I think we can agree.

  18. DrLoser says:

    MIMD?

    You still have to distribute the set of data to be sorted across the MIMD architecture, Fifi.

    Go ahead. Tell us how to do that on an O(n) basis.

  19. DrLoser says:

    Minor correction (and the “purely mathematical” objection still holds):

    O(n log n)/p.

    Sorting a million data points is going to be a whole lot of fun with that one, isn’t it?

  20. oiaohm says:

    Fifi outdoes himself:

    Wait you think I would be doing this on a multi core x86. Yes there is a issue in a X86 moving around memory in a multiplexing way. It is possible in arm solutions.

    Wait, who has outdone himself? DrZealot of course. My statement is the absolute truth.

    DrLoser, multiplexing CPU instructions has a name, and it was in my IEEE cite.

    MIMD (multiple-instruction-multiple-data-stream) multiprocessors: ARM supports this perfectly well and x86 does not as yet, as Intel only started implementing this in 2010. So an ARM CPU can linearly scale a sort but an x86 CPU cannot. An ARM CPU can linearly scale a database query and an x86 CPU cannot.

    Yes, the list of things MIMD instructions change is massive. Why does x86 lack it? MIMD was under patent protection so Intel could not implement it until 2008, and they have not implemented MIMD perfectly yet. Worse, since it first appears in prototype x86 processors in 2010 and in production in 2013, MIMD there is still buggy. Yes, for MIMD to function correctly the CPU memory controllers have to play ball; even getting it slightly wrong will cause a massive speed hit to anything using MIMD.

    Basically Intel has implemented MIMD, but it is the finer points of moving the multiple data streams around without causing delays that the x86 systems still lack. CPUs that have had MIMD for longer have worked these issues out. One day x86 will sort this out. Then sort will be able to scale linearly on everything.

  21. DrLoser says:

    Forgive me, Robert, for I have no wish to grind oiaohm into little tiny shreds of his own minuscule competence. (The poor little lad is quite capable of doing that on his own.)

    But I did at least take the trouble to follow up his 1988 cite from Francis, R.S., Dept. of Comput. Sci., La Trobe Univ., Bundoora, Vic., Australia, which summarises the algorithm as follows:

    This sort, when applied to data set on p processors, has a time complexity of O((n log n)/p)+O((n log p)/p) and a space complexity of 2n, where n is the number of keys being sorted.

    Three entirely “irrelevant” points to make here, I suppose:

    1) It’s a purely mathematical algorithm. As anybody who has tried to translate a purely mathematical algorithm to a representational computer program would know, there is a certain, shall we say, “lossage” in the translation. Specifically in this case, a lossage of efficiency.
    2) Even failing that, we’d be talking about 1988 hardware. Things have changed. Pipelining, caches, etc, anybody?
    3) In the best possible case, that algorithm translates to O(n log n). Which is not “linear,” no matter how much Fifi bleats that it is.

    In fact, all that “p” stuff in there, specifically when you consider the fact that there are two terms added together?

    What a surprise! It’s Amdahl’s Law All Over Again!
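
    As a purely numeric aside (a toy evaluation of the two quoted terms only, with all constants and hardware effects ignored), tabulating the predicted speedup of that formula over a one-processor O(n log n) sort makes the point:

        import math

        def parallel_cost(n, p):
            # The complexity quoted from Francis (1988):
            # O((n log n)/p) + O((n log p)/p), constants ignored.
            return (n * math.log2(n)) / p + (n * math.log2(p)) / p

        def speedup(n, p):
            return (n * math.log2(n)) / parallel_cost(n, p)

        n = 1_000_000
        for p in (2, 4, 16, 100):
            print(f"p={p:4d}  speedup={speedup(n, p):6.2f}")

        # Prints roughly 1.9, 3.6, 13.3 and 75.0: close to linear, but the
        # second term drags it below p, exactly as argued above.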

  22. oiaohm says:

    DrLoser, Windows implementations have a very high “unavoidably serial” fraction.

    “things” (cores, threads, whatever): this is correct. Every OS design/build has a limit to how many threads its scheduler can successfully manage.

    If you have a scheduler that can only manage 12 threads successfully and then you ask it to perform on a 24-core system, it’s not going to.

    DrLoser go look at the performance benchmark I referenced. It fingers Windows 2008 for having a major problem.

    Blender’s Amdahl’s Law values: “unavoidably serial” in the rendering stages is 0, in merging 0.01. Or in other words, basically nothing. The default number-of-tiles value in Blender alters the N-workers value.

    So Blender is a program that should show almost exactly pure linear speed increase as CPU cores are added, as long as Blender is configured correctly with the right number-of-tiles value. So Blender is a very good solid benchmark for locating particular scheduler issues.

    http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/7

    It shows the problem. Windows 2008 is showing an N-workers issue. Other benchmarks using other programs like Blender, with “unavoidably serial” at basically zero in the software design, show the exact same problem on Windows.

    DrLoser, it would be really useful to know all the Windows OS N-worker limits, because it is basically pointless owning a Windows system with a CPU setup larger than that.

    DrLoser, blame Deaf Spy for me picking on Windows; it is his cite that does it.

  23. DrLoser says:

    Fifi outdoes himself:

    Wait you think I would be doing this on a multi core x86. Yes there is a issue in a X86 moving around memory in a multiplexing way. It is possible in arm solutions.

    Wait I think you would not be doing this at all. No issue does not exist in a whirligig moving around memory in a raging torrent, flooded with rivulets of thought cascading into a waterfall of creative alternatives. No is possible not as such in arm elbow or leg solutions have not attempted solar plexus yet looks promising may bite me bum-wise.

    What a comprehensively ridiculous misunderstanding of every single underlying concept, whether that concept be a) mathematical b) cpu-based c) cache-based or d) lock based that is, Fifi.

    You’ve really outdone yourself there!

    Now, back to the frilly knickers and the flickering lamp-post! Time to earn money again!

  24. DrLoser says:

    Please note the date of that document. Dec 1988 linear increase in speed of sort is nothing near new.

    This one deserves a) highlighting and b) utter contempt.

    Let’s ignore cache-line issues. (Real Computer Scientists do not.)

    Let’s ignore overhead to do with organisation of the threads/whatever. (If we’re talking about a properly organised parallel sort, Real Computer Scientists are aware that the Omega and the Theta for this algorithm are largely unaffected.)

    But let’s not ignore locks. Or even interlocks, aka atomic operations. Because, as you scale up in threads/cores/whatever, locks still bite you. Even with work-stealing.

    I can quote Herb Sutter to you on this particular subject, Fifi. But I’m not going to waste my time. Why not?

    Because, in this particular case, you are clearly an ignorant doofus. Don’t just rely on my testimony: ask ram. Now, there’s a man who understands massively parallel processing. He does it for a living.

    Not you, Fifi. Not now. Not ever.

  25. DrLoser says:

    Well, I purely hate to deprive somebody of the opportunity to formally refute a proposition in a well-ordered formal debate. I will therefore present the relevant equation:

    S(N) = \frac{1}{(1-P) + \frac{P}{N}}

    With luck, this will get through the WordPress interface. I can’t attach a class reference to it, nor can I figure out the relevant “maths” tag. But it’s fairly simple and obvious and should be able to stand as the starting point of a formal discussion.
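
    For the avoidance of doubt, here is a plain numeric sketch of that same equation (nothing here beyond the formula itself):

        def amdahl_speedup(p, n):
            # S(N) = 1 / ((1 - P) + P/N): p is the parallel fraction,
            # n is the number of processing units.
            return 1.0 / ((1.0 - p) + p / n)

        # Even a 5% serial fraction caps the speedup hard as n grows;
        # only P = 1 gives the mythical N-times scaling.
        for n in (2, 4, 16, 1024):
            print(n, round(amdahl_speedup(0.95, n), 2),
                  round(amdahl_speedup(1.00, n), 2))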

    Which will, of course, never happen. Because on this topic, oiaohm, you are thoroughly clueless, aren’t you?

  26. DrLoser says:

    What are the Amdahl’s Law key values of Windows, the OS itself?

    Well, since there are only two “key values” to Amdahl’s law, Fifi, to wit: the proportion of a program that is unavoidably serial, and the number of “things” (cores, threads, whatever) that can run everything else in parallel …

    I think it’s fairly clear that, in fingering the Windows (sc. NT) OS here, you’re merely being a simple ignorant bozo.

    Although I don’t wish to leave this formal argument without allowing you a formal response.

    Care to show us the equation that proves your point?

  27. DrLoser says:

    Even a smartphone is a multi-user system, say root and the user and several services.

    I’ve referenced that rather than quoting it, Robert, because it goes beyond peradventure to the Land of Noddy. It’s extraordinary. What on Earth does it mean?

    Why you consider “root plus user” to be “multi-user” rather than, say, a set of security levels (not especially good ones, but hey, only eight bits to play with!) is completely beyond my comprehension. But I’m sure you can explain, no doubt with the help of Webster’s 1913.

    Which, stressing a point to beyond all reasonable tolerance, is sort of fine in its way.

    But, services? Daemons?

    May I politely suggest that you have flipped your lid? Nobody else would consider a set of services/daemons to be “multiple users.”

    Why, next you’ll be asserting that DOS was a “multi-user system” (obviously a borked one) because it supported several IRQs at the same time.

    Wow. Just … Wow.

  28. oiaohm says:

    Deaf Spy your performance quote of AMD vs Intel proves you don’t understand the problem.
    DrLoser was so correct on this; go and read http://home.wlu.edu/~whaleyt/classes/parallel/topics/amdahl.html

    What are the Amdahl’s Law key values of Windows, the OS itself?

    Also apparently you did not read everything you were citing. There is a critical page.
    http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/7

    Apparently by this, things are not good in Windows. Why? Blender is a basically identical application on Windows and Linux, so you are now benching the OS core and the hardware.

    Yes, on Linux doubling the cores almost doubles performance with Blender, yet on Windows doubling the cores gains almost nothing. So you can drop every Windows-based test that shows Xeon beating Opteron, because it is an OS issue: the scheduler is falling all over itself.

    Note the two Xeons are running at 2.93GHz clock speed in the Linux Blender test, while the one 6-core Opteron is running at 2.6GHz and the fastest one, the 12-core, is only running at 2.2GHz. You can fairly much bet that if the two Opterons had been running at exactly the same clock speed you would have seen a perfectly linear performance increase on Linux. Double the cores, double the performance happens a lot in Linux benchmarks.

    Due to the machines being dual-CPU, that is a 12-core system vs a 24-core system. Yes, the 24-core system does show linear speed improvement.

    The difference between Xeon and Opteron in the Blender test also shows that what OS you choose makes a huge difference to whether it is worth your time attempting multi-core.

    Linux itself has a few scheduler problems, but nothing like Windows. Go past Windows’ internally supported number of cores and you are fairly much wasting your time and money.

    Also, the Blender example points out that configuration of the application can be key to whether it will or will not take advantage of the extra CPUs.

    I find it funny that it’s your own cite this time that undermines the argument that you have been putting up, Deaf Spy.

    Also there is something to worry about with Linux in the 2010 test. In 2010 there was a bug in a few Linux kernels that caused the Linux kernel to report, on particular Xeons, that they were at max speed when they were not truly at max speed. So the difference between the Windows and Linux Xeon speed results might be this, as the X5670 is one of the chips affected by that issue.

    Yes, Linux users throwing more CPUs at a problem is the correct thing in a huge number of cases. Zero serial in Amdahl’s Law happens quite a lot on Linux.

  29. DrLoser says:

    Still seems to be stuck at $2,251 with eleven days to go, Robert. I presume you revisit these time-clocked sites religiously, so I’m preaching to the choir here.

    Tell me: why do you waste so much of your valuable time, expertise, and intellect on things that are never going to happen in a million years?

    Non-blinded, non-decapitated frogs will gladly give their lives up via an unanticipated thermocline before anything as brain-dead as this sees the light of day.

  30. oldfart says:

    “What about 2004 when AMD introduced the world to 64-bitness ahead of Intel? ”

    What about it? Intel recovered, implemented 64-bit long mode instructions and has kept its position since. Non-geeks don’t care about this.

  31. oldfart says:

    “Android/Linux can indeed run multiple simultaneous users if GNU/Linux is installed beside.”

    Two problems, Robert.

    1) the result is no longer Android, but some hacked up “hybrid” (to put it kindly)
    2) The resulting system is one that no vendor will accept or sell commercially. You can of course treat your smartphone as hardware and hack it to your heart’s content, but the result has no warranty: you are on your own.

    And frankly, no non-geek that I know is going to willingly void the warranty on their working phone just to get – what? – multi-user capacity.

    I don’t think so.

  32. Deaf Spy wrote, “since I don’t like speaking without proof”, and quoted from 2010.

    What about 2004 when AMD introduced the world to 64-bitness ahead of Intel? What was that about? Real multi-user systems drooled over the prospect of tons more RAM being available. When did Intel get rid of its off-chip memory controller? Years later… So, there are two instances where Intel had to be dragged kicking and screaming into the 21st century. Both of these architectural changes were far superior to Intel’s clock-speed ramps. It does users no good to have infinite clockspeed if the bottleneck is to RAM.

  33. TEG wrote, of Android/Linux, “A large portion of the OS is so obviously designed with the assumption of a single user it’s outright laughable to call it “multi-user”.”

    That’s not the case here. My wife and I both had accounts on my smartphone. I did have to reset to factory to remove her account, however. That was awkward. I care about simultaneous users. Android/Linux can indeed run multiple simultaneous users if GNU/Linux is installed beside. You can even have multiple instances of Android if needed in GNU/Linux. Of course, on a small screen, two people can’t do anything. It’s about their processes.

    That other OS, for the longest time, had DOS underneath, which had little or no concept of security or multiple users. NT did, but because backwardness was forced on NT, that was whittled away. It got really bad around 2003 or so when waves of malware just walked right in and set up shop. Vista was supposed to be a rewrite of a rewrite but it still had vulnerabilities copied from Lose 3.1.

  34. That Exploit Guy says:

    Even a smartphone is a multi-user system, say root and the user and several services.

    Have you actually tried sharing, say, an Android tablet with someone else? A large portion of the OS is so obviously designed with the assumption of a single user it’s outright laughable to call it “multi-user”.

    That other OS is as well but the tendency to assume a single user is still there to the detriment of security.

    This statement is silly in the extreme, not just in the sense that it betrays a complete lack of understanding of the inner workings of Windows NT, but in that it also features the amateur mistake of confusing the purpose of a multi-user design with the purpose of application sandboxing.

  35. Deaf Spy says:

    This was the greatest failure of Wintel, not to appreciate the utility of multi-user systems

    Really? Perhaps then you will be surprised, Mr. Pogson, that it is one of the areas where Xeons shine, and outperform by a large margin AMD chips with twice the number of cores. And since I don’t like speaking without proof, here you are.

  36. Deaf Spy says:

    Even a smartphone is a multi-user system, say root and the user and several services

    Pogson, please. This is ridiculous. Had it been said by Fifi, I would have laughed. Not you, please.

    P.S. What about trying a multi-threaded quick sort? It is really easy, honestly. Try it for yourself, the results are instructive.
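
    A minimal sketch of that experiment (in Python rather than Pascal, with a process pool and chunk-sort-plus-merge standing in for a true in-place parallel quicksort):

        import heapq
        import random
        import time
        from multiprocessing import Pool

        def parallel_sort(data, workers):
            # Split into roughly equal chunks, sort each chunk in its own
            # process, then k-way merge the sorted runs serially; the serial
            # merge is the part Amdahl's law charges for.
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(workers) as pool:
                runs = pool.map(sorted, chunks)
            return list(heapq.merge(*runs))

        if __name__ == "__main__":
            data = [random.random() for _ in range(1_000_000)]
            for workers in (1, 2, 4):
                t0 = time.perf_counter()
                out = parallel_sort(data, workers)
                print(f"{workers} worker(s): {time.perf_counter() - t0:.2f}s")
            assert out == sorted(data)

    The instructive part is that the measured speedup flattens well before it reaches the worker count.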

  37. Deaf Spy wrote, “We do not speak of multi-user environments. We speak of personal computers and devices. On a PC / smartphone / tablet, you do not run multiple processes for multiple users. You run only a few, for a single user.”

    This was the greatest failure of Wintel, not to appreciate the utility of multi-user systems. Of course neglecting that field helped sell licences but the world is moving on. Even a smartphone is a multi-user system, say root and the user and several services. A GNU/Linux operating system is layered with processes from many users all running at once, filling and emptying buffers or pipes. That other OS is as well but the tendency to assume a single user is still there to the detriment of security. Beast has ~200 processes running at the moment and I have stopped some that are rarely needed. w shows 3 users with sessions. Beast is a PC, BTW. Most smartphones shipped these days run Android/Linux and they can often be “rooted” and run GNU/Linux alongside. The limitation to a single user is not a limitation of the PC or the software but Deaf Spy’s imagination.

  38. oiaohm says:

    Please note the date of that document. Dec 1988 linear increase in speed of sort is nothing near new.

  39. oiaohm says:

    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=9738&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F12%2F496%2F00009738
    The above document proves Deaf Spy is an idiot.
    You fail to realize that you cannot easily implement a parallel algorithm that does improve the performance. Try for a change something simple; you can do it in Pascal. Implement a multi-threaded quick sort, it is fairly simple. Do not go for more than 4 threads, stop parallelizing there. Test and record the results. Show them here and we will talk again.
    We don’t have to; the IEEE has done all the testing on sorting on multi-core systems and how to do it properly. Now if you don’t follow the IEEE way it is your stupidity.

    They managed to come up with sort methods that have zero percent serial. So yes, sorting is something where, in measured testing, the speedup is a near-linear increase if you have used the right algorithm. If sorting is not scaling linearly with each extra core, you have a poor sorting algorithm or a hardware issue.

    Wait you think I would be doing this on a multi core x86. Yes there is a issue in a X86 moving around memory in a multiplexing way. It is possible in arm solutions.

  40. Deaf Spy says:

    If your CPU is idling, put more processes on it. Throughput rises as a result. It’s trivial to do on an idling system. There are no bottlenecks. OTOH, if a system is maxed out, adding more cores may not help because the bottlenecks could well be RAM or traffic amongst caches etc. If you add more processes and the rate of idling declines, add more cores. Again throughput rises. It’s really that simple.

    Nice fairytale, Mr. Pogson. If only it worked like this in real life… Please don’t move the goal post. We do not speak of multi-user environments. We speak of personal computers and devices. On a PC / smartphone / tablet, you do not run multiple processes for multiple users. You run only a few, for a single user.

    You fail to realize that people buy fast CPUs not because they need their power 100% of the running time of the CPU. They need it, like 20% of the time only. But these 20% are the most important ones, because they translate to many hours of increased productivity for the user.

    You fail to realize that you cannot easily implement a parallel algorithm that does improve the performance. Try for a change something simple; you can do it in Pascal. Implement a multi-threaded quick sort, it is fairly simple. Do not go for more than 4 threads, stop parallelizing there. Test and record the results. Show them here and we will talk again.

    In the light of all this, you still fail to explain how ARM will compensate for their per-core performance disadvantage.

  41. DrLoser wrote about, “even a scintilla of knowledge about scaling to a massively parallel architecture”.

    No one here is talking about massive anything. We’re talking about a few PCs and servers. We’re talking about one core idling a fair bit and two or more cores idling a lot. We’re talking about the real world, where loads can vary in real time and folks don’t care until things get serious, far below the absolute capacity of the system, due to some bottleneck or imbalance. Remember, real people were fairly impressed by the P I and most people were quite happy with the P III. Almost any smartphone these days blows those away: clockspeed, pipeline, cache, RAM, storage, access time, network speed, … Don’t believe me? Read M$, your God. That’s from 1997, the P III era.

  42. DrLoser niggled over “gawk”.

    By gawk, I mean to look at the screen, say, reading or analysing an image. Otherwise, IT is rather useless except for listening to audio or packet-switching… Humans can read ~few hundred words per minute or get an impression of an image in a few seconds. Actual reflection/response may take longer to analyze some problem, fill in the blanks and to synthesize a response. The screen that can be written at 60Hz is useful to users only at a fraction of 1Hz. That’s in an educational setting where words and images have important meanings. In gaming, a user may get points for twitching more rapidly but that’s not the general life-cycle of information in education when users presumably are getting new information rather often and reviewing it a few times before moving on. Students come in a variety of sizes and shapes too. Some can’t read 100 wpm. Some can’t interpret images very well. We have to teach them all. That takes time. Typically, most students won’t get anything unless it’s presented about 3 times in various ways according to different learning styles. I was blessed to teach mostly objective subjects like maths/science/computers where a little bit of teaching and a lot of practice goes a long way but all kinds of teachers/students need to use IT. A clever teacher can teach several concepts simultaneously so that they fit together more naturally in students’ minds. Other teachers often don’t complete the curriculum to the detriment of the students.

  43. oiaohm says:

    DrLoser, it is Amdahl’s Law that Deaf Spy has no clue about, going by this statement.
    N cores never translate to N-times increase in performance.
    Zero percent serial put into Amdahl’s Law gives an N-times increase in performance.

    Speedup = N / ((B*N) + (1-B)), where B is the serial fraction. Yep, B=0, so the denominator becomes 1. So a full N-times speed-up.

    Before you say this does not happen I will point out a few cases where it does.

    Modern network cards support multi-stream multiplexing, so each CPU gets its own packet stream. You can see an N-times increase in performance with anything network-related if the solution has a zero percent serial by Amdahl’s Law. Sending static web pages can have a zero percent serial; even some dynamic pages can have a zero percent serial.

    Video cards can in fact also be multiplexing, so again each CPU has its own data stream and is able to act perfectly independently of the others. So rendering a web page or a document to screen can see close to an N-times increase, because the percentage serial can be very small. Remember, images and so on can all be processed independently.

    Only a 25% increase from adding 2 more cores: that is a sign of a design problem for multi-core, either in hardware or in software.

    DrLoser, the calculator on the site you cited no longer works; not that you need it, the maths is simple.

    The first thing to successfully get an N-times increase in performance was an Amiga computer. An N-times increase requires the right hardware with more CPUs plus the right software. That something is hard does not make it impossible.

    In fact, DrLoser, you need to go back and read and understand what Amdahl’s Law says. Because it is Amdahl’s Law that proves Deaf Spy has been saying crap.

  44. DrLoser says:

    There are varieties of office productivity but the “point, click and gawk” stuff most common in schools definitely benefits well from more cores.

    Far be it from me to usurp the role of an instructor in how to educate the delicate minds of promising child students, Robert; but I would imagine that the lack of cores is quite low on the pedagogical list of limitations.

    Is there a correlation between encouraging your students to “gawk” and, say, an SAT score?

  45. DrLoser says:

    Well, nobody apart from Deaf Spy appears to have even a scintilla of knowledge about scaling to a massively parallel architecture. (And for the sake of argument, “massive” starts at 8 cores. Honestly.)

    May I suggest you all go away and read up on Amdahl’s Law? I chose that particular cite because it has a handy calculator for you, just in case the thing isn’t as blatantly obvious as it would be if you were actually a practising professional in the field.

Deaf Spy, not getting it, wrote, “why would you double the throughput on a “nearly idling” system?”

    Throughput is what you get out of the whole system of IT in an organization. If your CPU is idling, put more processes on it. Throughput rises as a result. It’s trivial to do on an idling system. There are no bottlenecks. OTOH, if a system is maxed out, adding more cores may not help because the bottlenecks could well be RAM or traffic amongst caches etc. If you add more processes and the rate of idling declines, add more cores. Again throughput rises. It’s really that simple. The old model of having one powerful processor per user or per process is really wasteful because there will be huge intervals of idling. On the server, with larger numbers of processors and more local RAM and more local storage, it’s easy to add cores and increase throughput. Even if there is some idling it’s less wasteful because there are far fewer cores idling than on quad-core-per-user systems.

    The first time I saw this I could not believe it either. My students and I built a 1.8GHz, 1.5GB system with a 32-bit AMD CPU. It easily ran all the processes for 24 students word-processing and browsing. The applications were cached in RAM so very little I/O had to be used logging in and starting applications, and the CPU was rarely above 50% utilization. The students’ login times dropped from a minute or so to just a few seconds compared to the usual Lose ’98 behaviour of seeking all over a hard drive. One guy who had a habit of tilting back his chair and wait-wait-waiting for that other OS was so startled, he fell off his chair to the great amusement of his classmates. Since then, I’ve used dual- and quad-core servers and they rarely max out at the CPU under even heavier loads. Servers I installed 8 years ago are still meeting the needs even though software has bulked up a lot. Those servers cost about $1200 and could easily handle 50 simultaneous users, about $25 worth of server per user. Today, with Moore’s Law, the cost would be much lower, offset somewhat by software-bloat. Largo, which buys top-of-the-line commercial servers, spends ~$40K on a terminal server that can please 400 simultaneous users with an office suite, about $100 per user. I built my servers from parts to optimize the capital cost and did not pay for support/warranties. Compare that with the cost of a Wintel PC box back in the day, ~$500 per user. The incremental value of one core in such a system is huge. It scales well.

  47. oiaohm wrote, “I moved Office productivity”.

    There are varieties of office productivity but the “point, click and gawk” stuff most common in schools definitely benefits well from more cores. The trick is to put more users on one powerful server with thin clients. A server maxed out while having plenty of RAM and cores is very efficient for this type of computing. Largo, FL, uses a single server to handle hundreds of sessions of an office suite. One busy user is another user’s “idle loop”. The total bandwidth for I/O is rather low and is mostly taken care of by DMA. Applications in word-processing don’t need to process a whole file because a document is basically a list. Transactions are mostly localized to the current page. There are exceptions, like changing a frequently recurring phrase from one form to another in a huge document, but the vast majority of documents are just a single page. Those working on larger documents usually segment them into chapters or even paragraphs.

    e.g. on my Beast, the median size of ODF text documents is 16 blocks of storage. The largest is 7660 blocks but there’s only one and the next largest file is one-tenth that size so an office suite is not stressed at all the way I work. OTOH, the browser is the heaviest load and, with Chrome browser, sometimes too heavy.

    More cores definitely improves responsiveness with office suites particularly if there are lots of users doing their random things. Old Man used to complain the server would be swamped but people are random creatures and they are not going to do a coordinated attack on their server even by accident. They click and take a sip of tea, they read what’s actually on the screen, they type, they point, and they hardly ever have to wait except for I/O which has little to do with cores. In schools, the biggest spreadsheets I ever saw recorded attendance or the data for report-cards. Typically only the current page was ever modified and perhaps a summary page. That’s a trivial load on even one core. Printing the document would have been heavy but that’s only done once or a few times in a year, again, not anything to worry about on a daily basis. One starts the process and takes a break. Humans do that. Cores don’t need to.

oiaohm wrote, “To get an N-times increase due to N-times more CPUs requires more careful software design.”

    That implies hardware-dependent software, something completely ignored since the development of operating systems for stored-programme computers. The whole idea is to free the developer from the details of the hardware platform. That kind of vertical integration was one of the main reasons that other OS was such a stinker. M$ dictated to hardware manufacturers and everyone was stuck with really inefficient hardware.

    On the other hand, one can do quite well if individual CPUs have large enough caches and software is simple/compact, rather like GNU/Linux environments. HPC also does well because they code for many multiple identical parallel processes. The rest of us are a mixed bag of CPU-bound, I/O-bound and randomly I/O-bound processes. The old Intel model evolved to handle all of those fairly well once they copied AMD’s memory-controller concept. ARM still has caches that are too small, but that’s gradually changing as the mix of applications being run on ARM becomes more diverse. ARM has the huge advantage of smaller/cheaper cores so multiprocessing is quite feasible with many more cores. So far, there aren’t lots of good ARMed motherboards for general-purpose computing, but they are inevitable, with more CPUs stuck on and more memory sockets. Folks who spend less on CPUs can spend more on RAM and get high performance from a network of moderate-performance CPUs. If processes are somewhat independent, total throughput can scale well. It’s when one process or another becomes a bottleneck that scaling fails badly. In my experience, with DMA, one core can easily handle a bunch of point, click and gawk users (most of us) and a rather small number of cores can handle anything a consumer is likely to throw at it. As there are many more consumers than producers, ARM has a bright future. The OEMs are actively hunting for a way forward now that Wintel is crumbling, and ARM allows them to take control of their future. It will happen sooner rather than later.

  49. oiaohm says:

    N cores never translate to N-times increase in performance.
    This is also false. It depends on how the N cores are configured.

    If N more cores equal N more independent systems, you see an N-times performance increase. An N-times increase is possible under a particular set of conditions. To get an N-times increase due to N-times more CPUs requires more careful software design.

  50. oiaohm says:

    Deaf Spy, you made another set of mistakes.
    Fourth, while many cores benefit certain tasks (web page rendering, multimedia), it has absolutely no benefits for other tasks – games, office productivity.
    This is wrong; it should read:
    Fourth, while many cores benefit certain tasks (web page rendering, multimedia, office productivity), it has absolutely no benefits for other tasks – games.
    Why have I moved Office productivity? Document rendering is document rendering. Office suites are using multi-threaded rendering, so more cores help. Databases doing a query search across records: more cores help. Calculations in a spreadsheet: again, more cores help. LibreOffice 4.2 Calc with OpenCL not only throws the workload off to every CPU you have but also dumps it on the GPU.

    The Alex Katouzian quote is a partial quote, without including what it refers to:
    “Smartphones, like the LG G2, and computers can use multi-core CPUs.”
    Alex Katouzian, Qualcomm’s vice president of product management, notes that going from one core to two can increase performance by 50 percent, but going from two to four only nets an additional 25 percent increase in speed. Multi-core CPUs allow multiple threads of processing to happen at the same time. The best way to take advantage of all the cores on your CPU is to do a lot of things at the same time.

    Basically without redesign our applications there is no way to get performance advantage out of a 4 core let alone a 100 core.

    So the quote is about Android 4.2.2 and it is giving instructions on what Android and other OS developers, and application developers for those OSes, need to do to fix the performance problems. Remember, it is Android 4.4 that adds proper support for a 4-or-more-core CPU.

    Items like OpenCL are making it simpler to write applications that will exploit massive numbers of cores.

    Deaf Spy, really it would pay for you to find the full write-up about the bit you quoted incompletely.

  51. Deaf Spy says:

    Real folks need double the throughput on systems that are nearly idling. Double the cores allows doubling interrupt rates, context switching rates and throughput, so more users/processes on a system.

    Now, this is silly. First, why would you double the throughput on a “nearly idling” system? Second, small form-factor computers, namely laptops, tablets and phones, are personal devices. Third, multi-tasking (more processes) is a sure thing to kill power efficiency, because at any time there is a process that keeps the CPU from falling asleep, or sleep gets interrupted often (now here is where Ohio can spit his usual nonsense about CPU sleep). Fourth, while many cores benefit certain tasks (web page rendering, multimedia), it has absolutely no benefits for other tasks – games, office productivity.

    Here is a quote I have for you: “According to Qualcomm vice president of product management Alex Katouzian, upgrading from a single-core CPU to a dual-core processor yields 50 percent better performance, while upgrading from dual-core to quad-core increases performance by just 25 percent.”

    Let me summarize my point. Many cores compensate for the inability of a single-core CPU to go any faster, but at the high price of difficult programming. N cores never translate to N-times increase in performance. ARMs are losing on performance per core to Intel at any time, and Intel is increasingly bridging the power-efficiency gap.

  52. Deaf Spy wrote, “I already referred to an academic paper which explains why “double the cores” does not and will never lead to “double the throughput”.”

    That’s just being silly, Deaf Spy. Everyone knows doubling the cores doesn’t double the throughput when a CPU is maxed out but that’s not what folks in the real world need. Real folks need double the throughput on systems that are nearly idling. Double the cores allows doubling interrupt rates, context switching rates and throughput, so more users/processes on a system. More importantly, doubling the cores allows more throughput for less power. It’s not about doubling maxed-out throughput.

    An analog is a simple RC-circuit. The power dissipation varies as the square of the frequency. If you double the circuits, you can cut the frequency to $latex \frac{1}{\sqrt{2}} $ and get twice as many circuits, each running at half the power consumption (the same total power consumption), with $latex \frac{2}{\sqrt{2}} = \sqrt{2} $ times as much throughput. Combine that with Moore’s Law or other physical changes and you win big time. e.g. on Beast I have ~200 processes with just me and some services running, and now the Little Woman is running her processes here with very little power consumption. Obviously increasing the cores has increased the throughput. I have run more than 20 users simultaneously on a single chip many times. It works well. I used to do that on a single core, which did max things out, but no longer. Meanwhile there has been little or no increase in clockspeed. Beast runs four cores at 2.5gHz while my first edition ran one core at 1.8gHz.
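    Spelled out, under the stated assumption that dissipation scales as the square of frequency: with $latex P \propto f^2 $, each of the two circuits running at $latex \frac{f}{\sqrt{2}} $ dissipates $latex \left(\frac{1}{\sqrt{2}}\right)^2 = \frac{1}{2} $ of the original power, so the total is $latex 2 \times \frac{1}{2} = 1 $, unchanged, while the combined throughput is $latex 2 \times \frac{1}{\sqrt{2}} = \sqrt{2} \approx 1.41 $ times the original.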

    There are physical processes that prevent doubling maxed-out throughput: other sources of dissipation, leakage, etc., and bus contention/saturation within the CPU and on the motherboard. On Beast, the Chrome browser prevents scaling drastically, as Chrome tries to use all available RAM for caching as far as I can tell. That’s just fine for one user but is a dog with even two. We’ve switched to FireFox. It’s much more polite.

    The makers of smartphones have admitted that smartphones with 4-8 cores have a lot of idling cores but the max throughput and overall power-consumption are greatly improved.

  53. oiaohm says:

    Deaf Spy, you have made a fairly big mistake.
    http://www.theregister.co.uk/2006/04/27/intel_pentium_ee_965_5ghz/
    Yes, 2006, and an Intel x86 processor breaks the 5GHz clock speed.

    http://www.forbes.com/sites/davealtavilla/2014/06/03/intel-announces-devils-canyon-core-i7-4ghz-quad-core-cpu-for-enthusiasts-and-overclockers/

    Deaf Spy, you also missed this chip from 2014. Yes, this is an i7 running at a real 4GHz before any overclocking, and it supports overclocking in general operation, with a stock maximum of 4.4GHz. Yes, in 2014 Intel broke the 4GHz barrier.

    http://www.kitguru.net/components/cpu/anton-shilov/intel-core-i7-4790k-devils-canyon-overclocked-to-6ghz-with-all-cores-active/
    Here is the same chip after someone insane gets hold of it. Yes, 6GHz. You see others getting to 6.6 and 6.8, and one really mad 7GHz.

    ARM is a different design with a different thermal profile, so ARM getting close to 4GHz might not mean anything. Silicon has been known to operate up to 7.5GHz; the problem is whether you can deal with the waste heat. Beyond 7.5GHz, silicon no longer switches fast enough with current materials. The 4GHz figure comes from a heat limit, and the amount of heat generated depends on your silicon chip design.

    We may see 5GHz x86 chips in the next few years with the move to 12nm and its reduced heat production.

    Deaf Spy, something else interesting: there are other ARM white papers about doubling throughput by making a CPU do instruction optimization. Yes, doubling the CPUs alone will not double throughput, but how those extra CPUs are used might.

  54. Deaf Spy says:

    Pogson, Pogson. It saddens me that, as an academician, you fail to refer to the sources given. I already referred to an academic paper which explains why “double the cores” does not and will never lead to “double the throughput”. Not only do you disregard academic sources; if you had ever tried concurrent programming, you would have known why.

    Intel are already at 14nm, but they never beat the 4GHz barrier. And never will. They don’t even try anymore.

  55. Deaf Spy wrote, “ARMs are reaching their GHz barrier”.

    That’s nonsense. ARM is all about optimizing power-consumption. If you can double throughput by doubling cores at the same clock and get lower power-consumption by Moore’s Law, why would they even think of increasing clock-speeds? BTW, there are 3gHz ARMed CPUs. Back in 2013, ARM was getting 2.5gHz at 28nm. They have working devices at 16nm today. With such a change they can increase the core-count, the clockspeed, or both; they can also reduce both and sell a less-expensive chip. What do you think folks will do with that?

  56. ram says:

    I think the race between ARM and Intel Atom chips is too close to call. My company benchmarks them regularly. Overall they are neck and neck. Both are backed by organizations with vast resources.

    One thing that is absolutely clear is that BOTH are almost exclusively running Linux, and all their software development kits are designed for Linux. As far as software, and even hardware (board-level), developers are concerned, there is not much difference between them — they use the same external memory, have similar power-supply requirements, similar display and network interfaces, and similar thermal design requirements.

    At the beginning of 2015 the race is still too close to call.

  57. oiaohm says:

    Deaf Spy
    http://www.cnx-software.com/2014/10/26/applied-micro-x-gene-64-bit-arm-vs-intel-xeon-64-bit-x86-performance-and-power-usage/

    64-bit ARM chips look to be lined up to give x86 a good old shove. Remember, a 64-bit ARM core is half the size of an x86 core. The ARM chips’ biggest issue is the production node (nm) versus x86: the smaller the node, the faster the chip can go while using less power.

    Deaf Spy, there are two limits: GHz and nm. x86 is up against both limits; ARM still has many more nm steps to work through. Once you are up against both limits, the next problem becomes how to reduce transistor counts and how to shorten connection lengths.
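    The first-order relation behind that claim is the usual CMOS dynamic-power approximation (textbook material, not taken from the linked article): $latex P_{dyn} = \alpha C V^2 f $, where $latex \alpha $ is the activity factor, $latex C $ the switched capacitance, $latex V $ the supply voltage and $latex f $ the clock. A smaller process node lowers $latex C $ and permits a lower $latex V $; because $latex V $ enters squared, a shrink buys either a higher $latex f $ at the same power or the same $latex f $ at lower power.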

  58. Deaf Spy says:

    Can we stop pretending Intel has a game in phones and tablets?
    Perhaps you can, Kurks, but I would beg to differ. This market is still too new to make any long-term predictions.

    Old as I am, I recall times when AMD was kicking the hell out of the Pentium 4. All the Intel-haters were gloating, AMD was crowned the new CPU king, and all that. These joyous times lasted for, hm, about 5 years. Then Intel produced the Pentium D. Since then, no one even considers AMD a serious contender in the CPU market anymore, at any end. AMD have their meager positions at the low end: big laptops and some very cheap home DIY PCs. In servers they are the poor man’s CPU, no threat to anyone.

    If you pay attention to what Intel are doing now, you will be more cautious with your statements.

    Certainly, inertia is a thing, but this is not the AAA game market we are talking about. This is the $2-per-copy market for lousy kiddie games. Any AAA title makes more, much more, than any mobile game.

  59. Deaf Spy says:

    Does anyone care as they add cores?
    No one does. 3 x 1 GHz < 1 x 3 GHz, ref. here. ARMs fail to compete with Intel performance-wise, fact. ARMs are reaching their GHz barrier, hence flat per-core performance, fact, unless they suddenly come up with some ingenious improvement.

    Intel are catching up on the low-power front. As a result, we see things like this.

    You are perfectly right, Mr. Pogson, that the desktop is declining when it comes to usage, and mobile is on the rise. This is correct. However, saying that mobile is an Intel-free zone is, hm, a premature statement. The game is there to play. You see tablets for less than $150 with Windows 8.1 with Bing. I don’t have global data, but locally, where I live, they are the new thing, and outnumber Android tablet sales. You know why? Because they can run Office and Photoshop.

  60. Deaf Spy wrote some nonsense, “ARM’s performance is going basically nowhere recently, as they are hitting the GHz limits.”

    Does anyone care as they add cores?

    As you can clearly see, performance is growing rapidly every few years. The smartphone I have is ~1gHz, 32-bit, single-core and it’s certainly fast enough to be usable for browsing and lots of content-consumption. The new ones are mostly idling.

    Deaf Spy, also wrote, “if you look at the article more closely, you’ll see that no one is caring about Android. Windows is the wor(l)d.”

    A billion units shifted in 2014 with no slowdown in sight… I call that caring. I just heard on CNN that folks are willing to state on national TV that they don’t feel cool any longer running iOS, and no one mentions that other OS. I expect in a couple of weeks we shall see that other OS facing further decline. “7” took share from Vista but XP was mostly eaten by Android/Linux in the last two years. That’s a huge loss of share for that other OS. I know businesses went from XP to “7” largely, but consumers went to Android/Linux in a big way. There is no crowd around legacy PCs at my local retailers but the Androidian smartphones sometimes have folks piled up three deep.

    In my own family, the last PC purchased was a Mac. The last two GNU/Linux PCs to die were repaired, and one of those died again and has not been repaired because there’s no need of it. In my extended family, ~100 folks in Winnipeg, no one has boasted of buying this or that Wintel PC in recent memory, but everyone compares smartphones at every meeting. Apart from typing, which folks can do easily by adding a keyboard to a */Linux on ARM PC, there’s no need of Wintel any longer. I know Intel ships some powerful chips and I like that, but I don’t know anyone who actually needs one and could not do what they do without one. I can build a kernel on an Atom, of which I have two, so I know I could do it on ARM. Beast might just be retired. We might use an Atomic system as a server but that’s just because I don’t see many ~ATX motherboards to fix one of the old chassis. The only thing ARM lacks for me is a chassis with 10 drive-bays, but I don’t need a hair-drier to power such a chassis.

  61. gamer88 wrote, “Smartphones are not PCs. This is: http://en.wikipedia.org/wiki/IBM_PC_compatible”

    IBM has not made PCs for ages so IBM-compatibility is an archaic term. I like to use the term “legacy PC”. The smartphone is apparently the modern personal computer in terms of units owned, individuals owning, even $ sales volume, however you measure it. If an 8-bit PC with a 1MHz clock was a PC, a modern smartphone with 8 64-bit cores and gigabytes of gHz+ RAM must be a super-computer. Just about everything about the PC has been upgraded multiple times since those early days except perhaps the keyboards. There is now more choice in keyboards and we can do a lot without one and connectors have changed but the low end keyboard is scarcely different from those days. Mice were scarce then but now almost all legacy PCs have something as good or better than an optical mouse plus wheel and buttons.

    I don’t think it’s appropriate to define a personal computer as something that can’t be available 24×7 and usable while walking. The smartphone is way more personal. Those legacy PCs are more closely related to what we called mini-computers before the microcomputers showed up. Those mini-computers were slow, heavy, noisy, expensive, etc. Because they now are faster and cheaper does not make them any more personal. They are still slow/heavy/expensive and, yes, the more powerful ones tend to be noisy. Meanwhile, many smartphones are so close to the bodies of their users that they run nearly at body-temperature. That’s personal.

  62. kurkosdr says:

    Now, I know about ART, but it won’t replace real native code, because with real native the code can be compiled beforehand (not on the user’s device) and hence lots of optimization can happen.

  63. kurkosdr says:

    Can we stop pretending Intel has a game in phones and tablets?

    Let me give you a hint: Android is the #1 OS (by volume) for phones and tablets (for better or worse), hence you can’t sell your phone/tablet SoC unless it runs Android apps and games well. Now, most Android games contain lots of native ARM code, which ARM CPUs execute natively, but x86 CPUs have to emulate.

    There is this benchmark where some Android games on an Intel SoC either were slow or just plain crashed.
    http://www.theregister.co.uk/2014/05/02/arm_test_results_attack_intel/

    Sure, benchmarks, which come in both ARM and x86 versions, will report good numbers for Intel SoCs, but real-world performance is different. Emulation penalties.

    And the ARM Nexus 9 beats the Venue 10 anyway, even in benchmarks.
    http://www.androidbenchmark.net/passmark_chart.html

    Sure, Intel is a powerhouse and such, but by the time they match the latest from Nvidia and broaden their lead over Qualcomm a bit more, the industry will have standardized so much on ARM that it won’t matter. Intel will be stuck with having to deal with emulation penalties and the occasional crash (Intel software, you see).

  64. gamer88 says:

    “Smartphones became pretty fair representations of PCs, at least the monitor/screen was larger.”

    Smartphones are not PCs. This is: http://en.wikipedia.org/wiki/IBM_PC_compatible

  65. Deaf Spy says:

    Btw, Pogson, I feel you’re placing a bet on the wrong horse again. Look here. Pricing starts from $109 per 1K units, and for a chip that will wipe the floor with any ARM out there. Intel spews out newer and more power-efficient chips right on schedule with their tick-tock cycle, while ARM are already showing signs of slowing down. ARM’s performance is going basically nowhere recently, as they are hitting the GHz limits.

    Ah, and if you look at the article more closely, you’ll see that no one cares about Android. Windows is the wor(l)d.

  66. kurkosdr says:

    “How can you expect an advertising agency to create a solid OS with high user experience?”

    Nooo… Quality is a bad word nowadays. All that matters is being FIRST!!! Paper launches, like the launch of the latest Nexus devices, also count as being first.

  67. dougman wrote, “Android + Linux desktop ==> ChromeOS”

    A “bulk purchase” of ChromeCast dongles was given out in the family this Christmas. Smartphones became pretty fair representations of PCs; at least the monitor/screen was larger. The “desktop”-cast didn’t work but that’s still “beta”. It’s in the pipe. It was very cool to see a smartphone bypass our multi-media PC and tap directly into the TV, almost like a TV with a “dock” for projecting the browser window, YouTube etc.

  68. Deaf Spy says:

    Dougie, Dougie… How can you expect an advertising agency to create a solid OS with a high-quality user experience? It even purchased Android from another company. Do you still believe in Santa Claus?

  69. oiaohm says:

    Samsung TVs will all be Tizen; then LG TVs will all be webOS. Yes, both Linux-based operating systems.

    Custom vendor-only OS solutions are still around; custom vendor OS solutions with a unique kernel are disappearing.

    This is feature creep. How long until TVs have enough USB and network ports that some maker thinks it cool to include a thin client as a bonus feature on small TVs?

    This Atom PC may be doomed simply because TVs may have more power.

  70. dougman says:

    Android + Linux desktop ==> ChromeOS

    Once Google gets this worked out, M$ is no more. Looking even further ahead, everything would be stuffed into a handheld device, and your desktop or laptop would just be a docking station for your phone.

  71. DrLoser says:

    Come to think of it, you could even finance this sort of operation on your credit card.

  72. DrLoser says:

    In all honesty, you could go to a Venture Capitalist, or even an Angel, for this sort of trivial amount. Y-Combinator might even listen.

    The fact that this is a crowdfunding appeal suggests to me that either a) these guys are ignoramuses who haven’t even tried, or b) they tried, and got told they were hopeless divots with an unsalable product.

  73. DrLoser says:

    Well, as Wolfgang points out, this little beauty has so far raised $2013 out of $10,000, with 17 days left. So, why do they need funding?

    We have finished the first prototypes. Moving forward to the next stage of manufacturing requires a lot of funds for raw materials & components purchasing once mass production starts.

    Presumably the hardware engineers and the “systematical” test engineers are working for Ramen. OK, let’s cost the mass production. Let’s guess $25 for the “raw materials & components,” which seems a little low to me.

    That means you can source the giblets of 400 of these things for the full $10,000. (“Some assembly required.”)

    That’s not gonna get the component cost down very far. Maybe 5% and no P&P? (I’m guessing here.)

    This is not going to happen. But in an attempt to be constructive, you know what these folk should have done? They should have actually gone out and found their first customer. Take Largo, FL, for example: there’s the sort of small government operation that would happily pay several thou for thin clients. Largo already has theirs, of course, but they’d be able to recommend other leads. And you’d have to scale it up to $50,000 — based on the “rewards,” this is the wholesale price — but maybe you just need a few more initial customers. Four or five might do.

    Ah, dreams based on other people’s money. Whatever happened to the idea of selling your cherished ’67 Mustang to finance dreams based on your own money?

  74. kurkosdr says:

    Can we stop this “Android on the desktop” nonsense? I have used Android on the desktop (with my ODROID-U3) and although it technically works, the UI isn’t made for mice and keyboards. You have to right-click to go back and long-press an item (such as a file in the file manager) to open the context menu; some games (NFS:HP and NFS:MW) don’t work even with the help of TinCore KeyMapper, and the menus are huge for a mouse. Oh, and Chrome for Android is ridiculous on the desktop.

    I have used Android on the desktop, and those are my experiences. How many of you have used it too? You just sit around with your Desktop Linux distros thinking using Android would be about the same, and then you fantasize about Android on the desktop, even though Google clearly doesn’t intend Android to be used on the desktop.

    And Chrome OS? It has “secondary PC” written all over it. A threat to Microsoft, sure, since it takes away from sales of Windows “secondary PCs”, but it won’t dethrone Microsoft as the leader in PC sales.

    But what I find particularly funny is the idea that Android and Chrome OS are related to traditional Desktop Linux distros in any way, when there is nothing in common between the ecosystems. Android and Chrome OS might share an ecosystem someday (apps; cloud movie, book and music purchases; and maybe bookmarks and email), but not with traditional Desktop Linux. Not any more than they are going to share with Windows.

  75. ram says:

    BAD example! Intel owns the Atom trademark in conjunction with PCs. Shuttle already makes a fanless (Intel Atom based) PC with Linux preinstalled, and the appearance is similar. This is just an attempt to hijack someone else’s trademark, product, and reputation — kinda like Apple ripping off Apple Records (The Beatles) to go into the music and media industries.

  76. wolfgang says:

    … It looks like 2015 …

    maybe when pigs fly. company begging for cash to continue. means regular investors not interested. silly peanut-whistle computer have no hope.

  77. dougman says:

    With the coming convergence of Android and the Linux desktop to ChromeOS, I think we have a real winner in the long-term.

    People want simple, quick and easy to use.
