Want Permission From M$ To Use Your Hardware To Its Maximum? M$ Just Says “No”.

“We’d like to close this item as declined to give your votes back to use on specific topics. While moving to 64 bit helps to scale memory better for large solutions, it could bring other performance degradations. So rather than effort and opportunity cost, the primary reason of our rejection is performance.”
 
See “Make VS scalable by switching to 64 bit”.
You see, M$ wants customers who are slaves. Why build better software when M$ can sell more copies and make more money instead? It’s not in their interests to provide what you need. This is the 21st century and everyone, including ARM, makes 64-bit solutions to IT problems, except M$. They couldn’t be bothered.

Want to use your hardware to its maximum capability? Use GNU/Linux, the operating system that works for you, not M$. I’m a retired old teacher and in my home the only software that has to be 32-bit is an old driver for an old printer and the operating system for our thin clients. Everything else is 64-bit. If I can have 64-bitness, 12 years after GNU/Linux started providing it, why can’t you Visual Studio users? Heck, even a lot of smartphones have been using 64-bit processors and memory for years now. If M$ won’t give you what you want and need, maybe you should choose another provider of software, like Debian.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I worked: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

109 Responses to Want Permission From M$ To Use Your Hardware To Its Maximum? M$ Just Says “No”.

  1. oiaohm says:

    Dr Loser, sorry, but decoding x86 CISC, which is 8 to 48 bits wide per instruction, is way more complex than decoding the fixed 32-bit or fixed 16-bit instructions ARM uses. There is no requirement on ARM to look at an instruction to work out how long it is.

    Please note ARM isn’t exactly RISC any more.
    http://www.edn.com/design/systems-design/4440662/ARM64-vs-ARM32-What-s-different-for-Linux-programmers
    ARM64 is still quite RISC (though the cryptographic acceleration instructions do lead to raised eyebrows in a RISC architecture).
    Floating point, Neon and many other places in ARM64 are closer to CISC than RISC. It’s just that most people don’t look.

    And? I’m waiting for the other glass slipper to fall, Princess.
    Yet it went straight over your idiot head. The reality is ARM started off as RISC and has added some interesting CISC features, like a single ARM instruction decoding inside the CPU into multiple internal instructions. 3 instructions commonly turn into 6, but ARM has 8 slots. Some 32-bit instructions on ARM64 are in fact 3 internal operations; this can include an add combined with a memory-address operation, which is commonly given as an example of CISC. So the ARM64 instruction set is somewhere between a fixed-width RISC and a fixed-width CISC instruction set.

    The biggest mistake with ARM64 is to think it is RISC without taking a closer look at its instruction set, which has both CISC and RISC behaviours.

    The universal point is that fixed-width instructions mean a much simpler and faster decoder. The fact that ARM64 has CISC-style instructions means some of the usual RISC overhead of needing more instructions is not there. The RISC tradition for adding data at two memory addresses would be split load, add and store instructions, something like 5 of them; on ARM that can be a single instruction. ARM is somewhere between RISC and CISC; MIPS is pure RISC. That is why ARM at the same clock-speed is faster at performing particular tasks than x86.

    Yes, there are cases where x86 gains no advantage from CISC, and then it ends up losing to the overhead of variable-width instructions and branch prediction that is too smart for its own good. The fun part about ARM vs x86 is that sometimes it is fixed-length CISC vs variable-length CISC; that is pretty much ARM integer and floating point vs x86 integer and floating point. Of course, the ARM instruction set is more RISC in other places.

  2. Dr Loser says:

    No mea culpa on this, I see, Bob.

    How very pathetic. You get caught out on a blatant lie and you don’t even bother to defend yourself.

  3. Dr Loser says:

    I did rather like “it’s like having to clock another cache,” though.

    Inspired! Have you contemplated yet a fifth career writing fiction for profit, Robert?

    Your writing skills are utterly inadequate — cf that wall of text with no paragraphs, just below — but your ability to think, write and imagine total fiction has to be a significant asset.

    Fifty Shades of Floss, here we go!

    (A friendly word of warning. Most people are not interested in tooth decay. Try deviant sex, instead. Check out Richard Stallman and parrots as a starting point.)

  4. Dr Loser says:

    The real difference is that x86 wastes huge resources translating its instructions, a step ARM can leave out of the pipeline.

    I repeat, Robert: total gibberish.

    When will you ever bring yourself to be honest, and admit that you are just spouting nonsense?

  5. Dr Loser says:

    ARM 64-bit mode has fixed 32-bit instructions; ARM 32-bit mode has two instruction sizes, either 16-bit or 32-bit. Now, x86 CISC is 8 to 48 bits, all mixed in with each other.

    And? I’m waiting for the other glass slipper to fall, Princess.

  6. Dr Loser says:

    No chance of you responding to my suggestion that your OP is complete bollocks, Robert?

    I’d like to say that your refusal to do so is unusual. Unfortunately, it’s the sort of totally spineless behavior that we have all come to know and love on this blog.

  7. The Wiz wrote, “It would be better to just say you are too cheap to spend the extra money and be done with it, Robert Pogson.”

    It would be better to say that I value price/performance ratio. Redundancy is nice, and it’s essential for very important data, but I know my data: it’s not so important or so voluminous that a periodic backup will not take care of it. Further, I don’t see the value of using an extra drive for redundancy when I can use an extra drive and get double the value, storage and performance. That’s more important to me than redundancy.

    Further, RAID 1 is not exactly the same as a backup. It’s possible for a bolt of lightning to strike a motherboard and fry two or more disc-controllers, putting the data in jeopardy. Having a backup off-line in another room is better insurance. I’m retired. I don’t create enough MB of new files to warrant RAID 1 at the moment. Beast used RAID 1 for ages because it was often used as a terminal server for many other users. While one individual’s daily output may not be significant, a school full of students is another matter, and there RAID 1 was worthwhile. As well, with so many users, simultaneous reads were occasionally of value.

    At Easterville I used four drives in RAID 1 on every server. That was fun to set up. We bought a whole case of 16 512MB hard drives for the file servers and about a dozen 40GB hard drives to boot the multi-X thin clients. The only drive that failed was the one essential for booting the file/auth server. I got the bootloader copied to all drives, but the mobo had an erratum (inserted in the manuals that arrived 10 days before school started) that only Drive 0 would boot… and that’s the one that failed… So RAID 1 did not save my system, although it did not lose any data.

    Typically, folks count on about 1% of disc drives failing per annum. I, with just a few, am pretty safe even with RAID 0. Actually, many failures would not cause significant loss of data, just one bad block or such; fsck fixes that. The odds are high that such a bad block would not be in anything critical, especially if backups render most things uncritical. Organizing backups is a bigger problem than the lack of redundancy in RAID 0. OTOH, for larger files, RAID 0 gives a significant improvement in transfer rate. Many system files are ~1MB, so there will be some benefit booting, logging in or starting applications that are not already cached. That’s more important to me than redundancy.

    I may well store TLW’s files on RAID 1 because she still does some business, but most of her files are written once and left spinning for years before being read again. Backups are probably sufficient, but there’s nothing wrong with taking greater care. I can arrange the file-system any way I like.

  8. oiaohm says:

    The real difference is that x86 wastes huge resources translating its instructions, a step ARM can leave out of the pipeline.
    Overall this is about power and die size, not performance, Robert.

    Absurd. Leaving aside the fact that x86 in its modern form is basically a hybrid RISC-on-CISC system, only a total dimwit would care about the “size” of the instruction set.
    ARM’s lower power per core comes from a few facts.

    ARM 64-bit mode has fixed 32-bit instructions; ARM 32-bit mode has two instruction sizes, either 16-bit or 32-bit. Now, x86 CISC is 8 to 48 bits, all mixed in with each other.

    The CPU only looks at the individual instruction in front of it, you know.
    Not true on ARM with the A57, A72 and A73 at all. In 64-bit mode the A57 and A72 look at 3 instructions at a time, in 3-instruction blocks, with all of that used per clock cycle, and the A73 looks at 9 instructions at a time, in 9-instruction blocks, with 4.5 of that used per clock cycle. This is only possible with fixed-size instructions. The ARM 64-bit CPU that looks only at the individual instruction in front of it is the A53. And those sizes are not the pipeline depths of those CPUs.

    The CISC-to-internal-format conversion that Intel and AMD x86 chips use exists to get to a fixed-width instruction set, and that internal format is then handled very much the way the A57, A72 and A73 handle the ARM instruction set. ARM skips a complete section of silicon by starting out with fixed-width instruction sets.

    http://www.phoronix.com/scan.php?page=article&item=btrfs-raid015610-linux41

    “RAID 0 is similar to RAID 5, but RAID 5 also provides fault tolerance. ”
    Wizard Emeritus, this claim from Microsoft is not true. OK, I have provided Linux benchmarks, but it shows up even in Microsoft benchmarks.

    RAID 0, RAID 1 and RAID 10 always beat RAID 5 in performance. RAID 5 is simply not competitive on fewer than 5 drives.

    Also, you never, ever recommend RAID 01: there is no performance advantage over RAID 10, but there is a massive increase in the risk of data loss.
    http://www.thegeekstuff.com/2011/10/raid10-vs-raid01/

    I also hate how the Oracle Database documentation on which RAID to use is wrong most of the time. 0+1 = RAID 01.
    http://www.dba-oracle.com/t_grid_rac_disk_striping.htm << The table quoted here is taken straight from the manual provided with Oracle Database.

    For someone who likes claiming to be a professional, you believed what a person wrote rather than the benchmarks? Wizard Emeritus, you should have picked on Robert for that.

    Stuff written by Microsoft and Oracle about RAID is a lot of the time absolutely wrong, and this leads to their trained consultants getting it wrong. Yes, Microsoft- and Oracle-trained consultants use RAID 01, and then when it fails they quite simply go and use RAID 5, taking the performance hit, all because they did not know RAID in the first place.

    Here’s a Red Hat blogger on those topics. (Dateline 2014, and therefore reasonably relevant.)
    Dr Loser, go and read some phoronix.com: the performance and stability of file systems is benchmarked at least every 3 months on Linux, so 2014 is very old for a discussion of file systems. You have just made the same mistake Deaf Spy did of trying to make a point with completely out-of-date information. RAID performance is benchmarked only about every 2 years, so yes, my RAID citation is getting old but is not yet outdated; the choices for performance metrics are 2015, 2013 and 2011, the years when RAID performance on Linux was fully benchmarked.

    What counts as current on Linux has different cycles of validity. File-system data goes stale very quickly.

    Dr Loser:
    By whom, the developers of BTRFS?
    https://en.wikipedia.org/wiki/Btrfs Really, what are you smoking?

    Stable doesn’t mean “usable,” in any case. It might just mean that the developers have given up trying to make it work properly.
    In fact, not true at all. Making the disk format stable does not mean developers cannot add extensions; it means the framework for adding extensions is defined to work in a backward-compatible way.

    Sorry, Dr Loser, this is you just making stuff up, which could have been simply prevented if you had gone and read the Wikipedia page on Btrfs or the makers’ site at kernel.org. Let’s be an idiot and just blindly attack again with absolutely no basis for the point. You could do the research to find an out-of-date Red Hat blog entry, yet you could not research what had in fact been done with Btrfs.

    What Dr Loser and Wizard Emeritus are is the dumb leading the dumber.

  9. Wizard Emeritus says:

    I wish Robert Pogson, that you would read your cites more carefully.

    1) Let’s start with your “Oracle” cite:

    “See, for example, Oracle. They recommend RAID 0 for databases. They don’t recommend it for control files or logs, stuff that must be archived.”

    First of all, this is not a cite from Oracle itself but from a DBA’s blog. However, the table that this person cites does NOT say that RAID 0 is recommended – it says that RAID 0+1 is recommended. Raw RAID 0 is just given an “OK”, whatever that is.

    The Oracle DBA consultant is less ambiguous in his assessment of RAID types:

    “For the money, I would suggest RAID0/1 or RAID1/0, that is, striped and mirrored. It provides nearly all of the dependability of RAID5 and gives much better write performance”

    2) And then there is his example from an HPC, which starts:

    “Here’s an example from HPC:..”

    You go on to quote from this site extensively, yet you leave out this from the very beginning of the cite.

    “All data Raid servers on HPC are configured with either Raid5 or Raid6 (most being Raid6) for data redundancy in case of a disk failure so that data will NOT be lost.”

    Note the last 7 words: so that data will NOT be lost.

    3) Then we have your Microsoft cite, which purports to show how Microsoft “blesses” RAID 0, when in fact all that the Microsoft tech doc amounts to is a discussion of the different possible types of RAID. And even here, under RAID 0, we find the note

    “RAID 0 is similar to RAID 5, but RAID 5 also provides fault tolerance. ”

    There is something pathetic about watching someone go through elaborate contortions rather than acknowledge his slavery to money.

    It would be better to just say you are too cheap to spend the extra money and be done with it, Robert Pogson.

  10. Dr Loser says:

    They’ve built it, Pog. See your cited quote on the top right hand side.

    when M$ can sell more copies and make more money instead?

    Not only do they make precisely the same amount of money from the 32-bit Visual Studio as they do from the 64-bit Visual Studio, Pog: it’s the same bloody product!

    It’s not in their interests to provide what you need.

    Except that your cite explains that what the customer probably needs is to target development to a 64-bit platform. (For informational purposes, my present company targets the “Any CPU” .NET platform. Works for both.)

    Apparently, some tardy customers prefer targeting a 32-bit platform, Pog.

    And in their case, Microsoft is providing precisely what they need.

    (With, I might add, a very clear upgrade path to 64 bit, when they get around to it.)

    This is the 21st century and everyone, including ARM, makes 64-bit solutions to IT problems, except M$. They couldn’t be bothered.

    Except when they can be bothered, Pog, as here.

    You really don’t bother to read your own cites, do you? Or even think about them.

    Since this is actually the entire thrust of your OP, Robert, I think the least you can do is to respond to it.

    That, or retract your asinine assertions as regards 32/64 bit Visual Studio.

  11. Dr Loser says:

    The real difference is that x86 wastes huge resources translating its instructions, a step ARM can leave out of the pipeline.

    Gibberish.

    Then there is the extreme size of the CISC instruction set.

    Absurd. Leaving aside the fact that x86 in its modern form is basically a hybrid RISC-on-CISC system, only a total dimwit would care about the “size” of the instruction set. The CPU only looks at the individual instruction in front of it, you know.

    Had you thought to focus on register sets, you might have a more reasonable line of discussion. You’d still be completely wrong, but at least you’d be looking at a meaningful comparison.

    It’s like having to clock yet another cache.

    Now, that is unquestionably gibberish. I can’t even begin to guess at what you mean, O Recently-Minted Certified Electronics Engineer.

  12. Dr Loser says:

    The development of Btrfs began in 2007, and by August 2014 the file system’s on-disk format had been marked as stable.

    By whom, the developers of BTRFS?

    Stable doesn’t mean “usable,” in any case. It might just mean that the developers have given up trying to make it work properly.

    And if it does work properly, why hasn’t it eaten the lunch of Ext4 and/or XFS?

    Here’s a Red Hat blogger on those topics. (Dateline 2014, and therefore reasonably relevant.)

  13. oiaohm wrote, “The secret is something simple: ARM’s completely unguided branch predictor. Running code where you cannot calculate which parts will be popular has less damaging effects, because the core always predicts every branch with equal odds. x86 cores hate unpredictable workloads like JavaScript; it just happens that ARM cores love that stuff.”

    There are many cases where simpler is better. That’s the whole idea of RISC. The pipelining is about parallel processing to speed things up. There isn’t that much difference between x86 and ARM there. The real difference is that x86 wastes huge resources translating its instructions, a step ARM can leave out of the pipeline. Then there is the extreme size of the CISC instruction set. It’s like having to clock another cache. It’s like Linux vs TOOS: keeping design principles as simple as possible makes Linux a better kernel than TOOS, which has to implement salesmen’s “talking points” at the lowest level.

  14. oiaohm says:

    Anyway, be prepared to have your ARM server perform thin-client sessions slower than your current Beast.
    Deaf Spy, you really need to look up the CPU of Robert’s current Beast and compare it to the numbers for the AMD ARM64; I think you will be rather shocked: in every metric it will be weaker than the AMD ARM64 system. So this statement is you still being an absolute idiot. There is not a single AMD or Intel x86 processor from 2008 that is faster than the weakest A1100 chips. Robert told you what year the Beast II CPU was bought, 2008, and it’s mentioned in the Beast III post. 8-year-old systems are not that powerful.

    You load it, you can cache it, fine. But, when you execute it, the CPU goes nuts.
    In what section, Deaf Spy, does it cause the CPU to go nuts? Something odd was reported a few years back that you have completely overlooked.
    http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/

    It was interesting: a few years back, x86 native code was 50 times faster than JavaScript on x86, yet for some reason on ARM the difference was only 5 times, native compared to JavaScript.

    So there is something different about the ARM core design that means JavaScript and other JIT systems hurt ARM cores far less. The secret is something simple: ARM’s completely unguided branch predictor. Running code where you cannot calculate which parts will be popular has less damaging effects, because the core always predicts every branch with equal odds. x86 cores hate unpredictable workloads like JavaScript; it just happens that ARM cores love that stuff.

    The problem here, Deaf Spy, is that you have incorrectly presumed x86’s issues with JavaScript also apply to ARM cores. Yes, the difference in the ARM core design means anything like a JavaScript/Python/PHP JIT takes a reduced hit. 1/5 on ARM is still quite a hit, but it is massively better than the 1/50 that x86 running JavaScript hits you with at times.

    Most people don’t have a clue how much JavaScript and other JITs beat the living heck out of x86 systems, yet do only minor harm to ARM systems.

    So anyone raising JavaScript as something that will give an x86 an advantage over ARM cores is an under-researched idiot. Yes, a few years ago ARM core clock speeds were nowhere near as high as they are now.

    It’s not as if the difference in JIT effects on ARM vs x86 is new information. Java on ARM has always performed better than Java on x86 for the same effective processor power.

    The effects of the x86 branch predictor screwing up are massive.

  15. Deaf Spy says:

    Quit wasting time telling me I don’t.

    Well, you pretend well, then. 🙂

    FLASH kills. Animated GIFs too can be a problem.

    And you still don’t follow what goes on in the Web. You now need neither of these to produce animations: CSS does it. Unless you come up with some hardware acceleration, CSS animations can really kill your remote session, too. JS is not a problem only on loading. You load it, you can cache it, fine. But, when you execute it, the CPU goes nuts. Details: in the link you still seem not to have read.

    Anyway, be prepared to have your ARM server perform thin-client sessions slower than your current Beast.

  16. oiaohm says:

    Mind you, the cooling issue should have been in your face when there are phones out there with identical SoCs and Android versions yet completely different web-browser performance metrics.

  17. oiaohm says:

    Deaf Spy, there is a fun reason why browsers on phones and tablets are so bad. It’s called cooling. If you watch clock speeds carefully you will notice they drop as the temperature goes up. So it’s not because the CPU is ARM that they cannot do it.

    Now, prototype boards with the same ARM or x86 processors that were in phones, but fitted with heat-sinks, are different beasts; even the Raspberry Pi 3 is a different beast with and without a heat-sink.

    In fact, from a web-browser point of view, what is more harmful on a 2GHz 8-core A57 processor at 28nm is not the JavaScript, which mostly hits the integer side of the CPU, but JPEG files, which hit the floating-point side.
    http://phoronix.com/scan.php?page=news_item&px=Libjpeg-turbo-1.5
    The recent change-over to NEON helps with that a lot.

    Basically, there are weaknesses to pick on, Deaf Spy. You first claimed a power-usage issue, and that was because you were looking at a 40nm chip instead of a 28nm one, so you were completely wrong there. Now you attempt JavaScript, and that is an integer workload, which is not where the weakness is in the CPU against any non-Intel x86, or any Intel x86 running at the same clock-speed. Then it gets better: you have been recommending Atom chips that are not even close to the 40nm A57 in performance. Sorry, Deaf Spy, in this area you have shot yourself out of the game.

    Deaf Spy, as yet you have not given a benchmark for the right processor; basically, don’t post again until you in fact have a benchmark with a 28nm A57 in it. Maybe then you can have a correct point of view. A benchmark with a 40nm A57 is not the right item.

    On pointing to writeln: Robert always knew I was predominantly a C programmer, so a mistake with Pascal was always possible. Sorry to pull the rug out from under you there; picking on me over that has never achieved anything useful, and every time you use it, all it says is that you have lost badly and are so desperate to save yourself that you have to resort to it.

  18. Deaf Spy wrote, “Google Apps, for instance, are so infested with heavy JS code that they can hurt even a desktop CPU.”

    While JS can be painful, I stumbled upon another, more serious problem: a huge web-page out there with very interesting/useful data but with an animation well down from the top. On my thin clients, when I scroll down to it, the display freezes, needing to refresh a whole large screen over 100 Mb/s. Gigabit/s will help that, but so will killing the browser from a terminal. FLASH kills. Animated GIFs too can be a problem. This stuff just should not be doled out in important web-pages. JS just tends to slow down the loading of pages. Very few give me trouble. TLW, however, has one page with a memory leak. She can leave it idling for a few hours and the system begins to swap. FireFox goes nuts with memory. Chrome is better.

  19. Deaf Spy wrote, “Your approach leads to unacceptable downtime, and potential data loss for the time between the last update and now. Unless you do real-time mirroring, which is actually RAID 1.

    Robert, you don’t realize how a modern browser works. You don’t understand how a modern CPU works. You can’t tell the difference between data-processing and data-storage server roles.”

    I’ve been doing electronics for decades and I’ve designed a motherboard and programmed in assembler and higher-level languages. Of course I know all this. Quit wasting time telling me I don’t. RAID 0 is used for speed. It relies on the reliability of hard drives instead of redundancy. I will take the risk. I can keep important data on RAID 1, but stuff I want quickly can be on RAID 0. It’s not a problem. It’s an acceptable risk.

    See, for example, Oracle. They recommend RAID 0 for databases. They don’t recommend it for control files or logs, stuff that must be archived. I don’t need to archive Debian’s mirrors. I don’t need to archive stuff that is old data and already backed up. Most of my disc-writes are cached files for the web-browser. Who cares about that? The original is out there on the web. The few files that I do create are here on mrpogson.com or in /home and I can RAID 1 that and or back it up. It’s an acceptable risk, about twice the failure rate of using one drive. One loss of 15 minutes of data in five years instead of ten. OK by me.

    PS: Even M$, the hero of some slaves here, supports RAID 0 for SQL server… There’s nothing wrong with redundancy/error correction but there are times when it slows you down for too great a cost. This is one of them.

  20. Deaf Spy says:

    Robert, aren’t you ashamed to rely on support from something that can’t comprehend how simple things like Pascal’s writeln() work?

    His latest text is so full of crap that it is not worth discussing. If you are actually interested in the modern web and JS, please follow my link.

  21. Wizard Emeritus says:

    “That is not 100 percent true.” Whatever. It is true enough to make your comments irrelevant.

  22. oiaohm wrote, “you are just being insulting because you are incompetent and you think insulting me will cover this up.”

    There’s a lot of that going around as if these folks are imitating Trump.

  23. oiaohm says:

    Deaf Spy, this shows how much of an idiot you are. Most JavaScript JIT processing operations are integer, so you should be looking at kDhrystone.
    http://wccftech.com/amd-8-core-arm-cpu/
    Notice something here: other than Intel chips, everything else in the market loses to the AMD ARM chip, including AMD’s own x86 processors.

    Each new ARM generation has brought a performance change. So the current-day 28nm ARM64 that Robert is looking at is ahead of the AMD x86 and VIA x86 offerings, and ahead of Intel’s Atoms and some of Intel’s entry-level chips, for JavaScript.

    If you define “desktop CPU” as Intel, then the ARM64 from AMD is not as good. If you count AMD x86 and VIA x86 as desktop, it kicked the living heck out of those at running JavaScript. There are areas where the AMD ARM64 is weak; running JavaScript is not one of them. It’s the same reason the A57, A72 and A73 do so well in PHP, Ruby and Python web-server benchmarks. Please also remember that to win, the Intel has to run 500MHz faster and at a smaller nm of production.

    Google Apps, for instance, are so infested with heavy JS code that they can hurt even a desktop CPU. That is the reason why companies go native on mobile.
    That is not true. It’s that HTML is hell for making a UI on a mobile phone.

    Apps in JavaScript have been done for Firefox OS on lower-performing ARM CPUs than current ones. Something shocking to most: in thin-client setups, Firefox is at times run straight on the native ARM64 CPU of the thin client.

    At least you’re better that Fifi. You can compile a kernel, write some DHCP configuration, even write a small Pascal program. Fifi, unfortunately, can’t do any of these.
    I code in C most of the time; Pascal is not my usual language. In fact I have examples where I have built the Linux kernel and Wine chasing particular defects. So when you claim I cannot do these things, you are just being insulting because you are incompetent and you think insulting me will cover it up.

    Really, you cannot pick the correct benchmark to see what a 28nm ARM does. I did, so I knew what to make of what you just put up. I provided the correct benchmark and you have not even read it before posting another round of complete stupidity.

  24. Deaf Spy says:

    Robert, are you developing reading-comprehension issues? Your own source says:
    “need very fast temporary disk I/O” (emphasis mine).
    Do you understand what temporary means? In this case it is obvious that SSDs are used as additional RAM.

    Nowhere in this world where data storage is a goal will you see RAID 0. You may see RAID 0+1.

    There’s no danger at all in RAID 0. If there were a failure, I could swap a drive, restore from backup and be good in a few minutes. There is no problem.

    Of course there is. Your approach leads to unacceptable downtime, and potential data loss for the time between the last update and now. Unless you do real-time mirroring, which is actually RAID 1.

    Robert, you don’t realize how a modern browser works. You don’t understand how a modern CPU works. You can’t tell the difference between data-processing and data-storage server roles.

    At least you’re better than Fifi. You can compile a kernel, write some DHCP configuration, even write a small Pascal program. Fifi, unfortunately, can’t do any of these.

  25. Deaf Spy says:

    I would still like to hear how ARM’s increased cache and improved networking bandwidth would help with faster processing of JavaScript. Google Apps, for instance, are so infested with heavy JS code that they can hurt even a desktop CPU. That is the reason why companies go native on mobile.

    Robert, you may cache your pages on disk, in memory, wherever. But JS will still need to be interpreted. I am inclined to bet that your current Beast will fare better with JS than your ARM server candidate.
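
    A bet like that is easy to settle, by the way: time the same CPU-bound JS snippet on both machines. A minimal sketch, assuming node is installed on each box (the package is called nodejs on Debian):

        # time an identical CPU-bound JavaScript loop on each machine
        time node -e 'let s = 0; for (let i = 0; i < 1e8; i++) s += i; console.log(s);'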

  26. oiaohm says:

    http://www.cnx-software.com/2015/02/09/hikey-board-64-bit-arm-development-board/
    Wizard Emeritus, HiKey is a sub-brand of HiSilicon; it sits outside the range of the D02 and D03 boards, which are the higher end of their products, hard to get from English-language suppliers, and equipped to take a decent amount of RAM. The HiKey board is pure A53 and made at a much smaller nanometre scale (yes, if you look up the first HiKey board CPU released back in 2015, that is TSMC 16nm, from when TSMC was working out how to make 16nm in volume, so they made a toy board to reduce the cost of prototyping). The HiKey boards are slower than the AMD A1100-series A57 yet suck down far less power: fine if you are chasing performance per watt, not so much if you are chasing usability, at only 1.2GHz/3 = 0.4GHz compared to the AMD A1100-series A57 at 2GHz (yes, that is the maths you have to apply when comparing A57 vs A53 on clock). A multi-threaded workload using the 8 cores in the HiKey board is OK, but most desktop workloads are still single-threaded.

    Mind you, the new coming HiKey board is a lot like the Raspberry Pi 3. The new board has 1GB of memory like the Raspberry Pi 3; the Raspberry Pi 3 has a network port; the HiKey has 8 A53 cores where the Raspberry Pi 3 has 4 at the same clock speed, yet in overall CPU watt usage the HiKey is half that of the Raspberry Pi 3: it is all about 40nm vs 16nm. Now, if the HiKey had a network port, it would beat the Raspberry Pi 3 in everything bar price. Yes, a Raspberry Pi 3 remade at 16nm would cut power usage to about a quarter. This drop in power usage is confirmed at 10 and 7nm; a similar drop at 5nm is suspected but there are not yet enough complex chips to confirm it.

    http://www.96boards.org/specifications/

    Please also note that only 2 RAM slots and one network port is the 96Boards EE specification; then take careful note of the power-draw limits of the EE specification and the power a 96Board has to provide to third-party parts. You could call this the limbo bar of hardware production: a CPU drawing 45 watts puts you over the 96Boards specification limit once you add the other parts. A CPU drawing 32 watts is the upper limit imposed by the 96Boards specification. This is part of my problem: 96Boards give up a lot, like extra RAM slots, multiple network cards, even in some cases faster networking, all to stay under the watt limit.

    Couple this with the SSD caching feature in LVM, and you have your performance.
    Wizard Emeritus, it’s not just an LVM feature under Linux as such.
    http://blog-vpodzime.rhcloud.com/?p=45
    bcache works on general block devices and lvmcache works on LVM volumes; both do basically the same thing performance-wise.
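
    For reference, a minimal lvmcache sketch, assuming a volume group vg0 holding an origin LV named home on spinning discs, with the SSD at /dev/sdc (all names hypothetical):

        pvcreate /dev/sdc                                              # make the SSD an LVM physical volume
        vgextend vg0 /dev/sdc                                          # add it to the existing volume group
        lvcreate --type cache-pool -L 100G -n fastpool vg0 /dev/sdc    # carve a cache pool out of the SSD
        lvconvert --type cache --cachepool vg0/fastpool vg0/home       # attach the pool to the origin LV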

    The reality is, Robert Pogson, naked RAID 0 is never used in production.
    That is not 100 percent true. Scratch drives (temporary data storage) for 3D render farms will in fact RAID 0 SSDs, because a single SSD is too slow. In cases where you need to speed up reads and writes and you don’t care if the data is lost, RAID 0 is used in production.

    If you care about the stored data, RAID 0 is really not on. Something Wizard Emeritus has not mentioned is that using an SSD as a cache under Linux next to spinning media in fact reduces data-loss risk. Why? Because the SSD cache is written to faster, and anything in the SSD cache that was not yet written to the spinning media ends up correctly written after power is restored. The other interesting point is that when an SSD fails, data recovery often cannot be performed; this is another place where the hybrid of spinning media and SSD comes into its own, as long as you are not using encrypted spinning media.

    So there are a lot of things to consider when protecting data and seeking performance. There is no black-and-white answer, but there are answers that are risky, and RAID 0 is risky.

    Mind you, there is a fun ThunderX2 benchmark done by Intel recently where the Xeon loses under web load until they look at performance per watt and notice the ThunderX2 board is drawing over 400 watts, more than 4 times as much power as the Xeon board next to it. The cause was, and was not, the CPU:
    1) The ThunderX2 supports twice the RAM slots of the Xeon, and both boards had their slots fully populated, so the ThunderX2 had twice as many RAM sticks to access.
    2) The ThunderX2 can drive each RAM stick twice as fast as the Xeon; worse, the ThunderX2 can do this sustained, so its maximum RAM read and write speed does not have to come in intermittent spikes but can in fact be held for hours on end, something the Xeon cannot do. (Hello, possibly more than 4 times higher RAM power consumption: 2x the sticks and 2x+ the drive speed.)
    3) Intel had populated both boards with fairly-large-nm RAM sticks that are fairly cheap: an OK idea on a Xeon but a really bad idea on an ARM, given how hard the RAM is going to be pushed.

    The ability to drive the RAM bus twice as fast as x86 chips is not unique to the ThunderX2; it also applies to the A1100, X-Gene 2/3 and other server-class ARM cores. With server-class ARM chips you have to be far more careful in RAM selection, because the RAM can be consuming more power than the CPU cores or any other connected component.

    Basically, presume ARM server boards are weak, use poor-quality RAM, and they will kick you in the teeth so fast it’s not funny. This is also something you have to watch with those benchmarking ARM boards: did they use RAM that is truly suitable for ARM or not? With the wrong RAM, low performance per watt can trace back to the RAM.

  27. dougman says:

    The development of Btrfs began in 2007, and by August 2014 the file system’s on-disk format had been marked as stable.

  28. dougman wrote, “Just use two drives, with BTRFS.”

    BTRFS is not very mature and people still tweak it a lot. I like JFS but XFS is supposed to be a bit better for what I do. I will have a period where I can check out different techniques before this thing becomes my working system. I will start with a couple of clients of Beast and then go for the ARMed server.

  29. Wizard Emeritus wrote, “naked RAID 0 is never used in production. As you have finally noted, what is really used in cases where the highest-performance storage is needed is a combination of RAID 1 (mirroring) and RAID 0 (striping). You really should be careful with your words – you make yourself look more foolish than you may be.”

    1. Debian backs up my software on hundreds of mirrors, so I don’t need to back that up.
    2. My other data is valuable and I explicitly back it up. I used to do it on CDs but now hard drives are large, cheap and reliable. SSDs are not sufficiently reliable for backup. They make great caches however.
    3. RAID 0 is used by folks working on large volumes of data which they need to access rapidly. If the data is reproducible, it makes perfect sense to use RAID 0.

    Here’s an example from HPC:
    “When you need very fast temporary disk I/O access on Solid State drives, you will want to use the HPC /ssd-scratch file system. All users can access the /ssd-scratch file-system from any node on the cluster at:
     
    /ssd-scratch/$USER
     
    The HPC /ssd-scratch server is a specialized and expensive node that uses 11 solid state disks (11x 500MB/s RAID 0), over the HPC Infiniband network.”

    See? A real server with real data on RAID 0 because the data is huge and folks want it fast. They tell users to back it up if it’s important. I will do that. I’m retired. I don’t compute for a living any more. Anything I write is copied somewhere safe. There’s no danger at all in RAID 0. If there were a failure, I could swap a drive, restore from backup and be good in a few minutes. There is no problem. In my case, much of my data comes from the web and I can get another copy if I need it. It’s not particularly valuable or irreplaceable but I do want it as fast as possible. RAID 0 will do that.

    I am careful with words. Someone wrote that RAID 0 is not used on production servers and I pointed out that is not true. There are places where RAID 0 is useful: getting data sooner. If I were handling >100 people’s data, as I used to in schools, of course I would prefer RAID 1. That allows parallel reads, for instance, at the cost of slower writes, but that’s rarely an issue. RAID 0 will play a useful role here. I can also have some RAID 1 partitions and some RAID 0 partitions. It’s all good.
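
    A minimal sketch of such a mixed layout with mdadm, assuming two discs each carrying two partitions (device names are only placeholders):

        # stripe the first pair of partitions for speed, mirror the second pair for safety
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        mkfs.xfs /dev/md0   # fast, reproducible data
        mkfs.xfs /dev/md1   # important data, mirrored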

  30. Dr Loser wrote, “I am 100% certain that Robert has never once verified his backups.”

    You do know what md5sum or sha512sum do, eh? I do. I use them regularly. You make the sum of every file in some file-system, copy the files, then verify the copies:
    “md5sum -c, –check
    read MD5 sums from the FILEs and check them”

    I’ve been doing that since the dial-up days.
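
    For instance, a minimal sketch of that workflow (the paths are only placeholders):

        # make a manifest of checksums, copy the files, then verify the copies
        cd /data && find . -type f -exec md5sum {} + > /tmp/manifest.md5
        rsync -a /data/ /mnt/backup/data/
        cd /mnt/backup/data && md5sum -c /tmp/manifest.md5 | grep -v ': OK$'   # prints only failures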

  31. Dr Loser says:

    So, I will use RAID 0 if I choose. It will work for me. RAID 0 is used on servers exactly for such purposes.

    I suggest, given the very helpful advice from the professionals who post here, Robert, that you choose otherwise.

    Assuming you still desire a functioning tinker toy server, of course.

  32. Dr Loser says:

    So you live with the possibility of obsolete copy of your data (you do back up regularly and verify your backups, don’t you?) if your drive craps out.

    A pure guess here, but I am 100% certain that Robert has never once verified his backups.

    Too busy spinning up another futile Linux kernel point version, y’see. It doesn’t seem to have occurred to him that this is a total waste of time.

  33. Dr Loser says:

    Just use two drives, with BTRFS.

    Why not?
    It’s not like a FLOSS enthusiast has any important data worth entrusting to a production-quality file system.

  34. Wizard Emeritus says:


    “I’ve been using RAID 1 a lot for redundancy and parallel reads.
    Now I want the transfer rates of an SSD from cheaper and larger hard drives.”

    The reality is, Robert Pogson, naked RAID 0 is never used in production. As you have finally noted, what is really used in cases where the highest-performance storage is needed is a combination of RAID 1 (mirroring) and RAID 0 (striping). You really should be careful with your words – you make yourself look more foolish than you may be.

    “Hard drives are sufficiently robust that many users of TOOS run around with just one naked hard drive… So shut up!”

    The fact that many people lack storage redundancy on their computer systems is true whether they are running Windows, Linux or OS X. In your case, however, it is clear that none of your data is more valuable than your money. So you live with the possibility of an obsolete copy of your data (you do back up regularly and verify your backups, don’t you?) if your drive craps out. That doesn’t sound like a very smart way of working, but hell, it’s your data!

    The irony is that for those of us who value our stored information more than our money, it doesn’t take much of an investment to get a lot of protection.

    My data is protected by the combination of a Kingston HyperX Savage 256GB USB 3.1/3.0 flash drive ($14.95) for the really important stuff and a LaCie 6TB 2big Quadra USB 3.0 2-bay RAID array ($399.00) running in RAID 1 mode for the bulk of my valued information.

    And if you still want speedy storage, a 512GB solid state drive like the
    SanDisk X400 2.5″ 512GB SATA III TLC Internal Solid State Drive (SSD) SD8SB8U-512G-1122 – OEM can be had for as little as $124.99 – well within your budget.

    Couple this with the SSD caching feature in LVM, and you have your performance.

  35. dougman says:

    Just use two drives, with BTRFS.

  36. Dr Loser wrote, ” Why not buy two 2TB SSD for your filthy piece of crud? With two 2TB SSD, you still have quite a lot of capacity — and you also have a broad range of RAID choices. Many of which fit the concept of “a server.””

    This should last a decade. Bit-rot may claim the SSDs long before that. This will be much more than a file-server. I regularly use MariaDB, Apache, SSH, PHP, TFTP, DHCP, CUPS and other services from Beast. By running many applications on the same system as storage I eliminate network lag/congestion and have more effective caching of files and web-pages.

  37. Dr Loser wrote, “You’ve been using RAID0, apparently. Do you know what percentage of server users depend upon RAID0?” and blathered on about SSD.

    1. I’ve been using RAID 1 a lot for redundancy and parallel reads.
    2. Now I want the transfer rates of an SSD from cheaper and larger hard drives.

    So, I will use RAID 0 if I choose. It will work for me. RAID 0 is used on servers exactly for such purposes. More usually, it’s RAID 10 (multiple RAID 1 devices striped) or RAID 01 (multiple RAID 0 devices mirrored). Hard drives are sufficiently robust that many users of TOOS run around with just one naked hard drive… So shut up!

  38. Dr Loser says:

    Then again, I live to serve.

    I’m proposing a 4TB SSD for your server. Which you will butcher, by not using standard RAID technology.

    Here’s another suggestion. Why not buy two 2TB SSD for your filthy piece of crud? With two 2TB SSD, you still have quite a lot of capacity — and you also have a broad range of RAID choices. Many of which fit the concept of “a server.”

    I think RAID5 is a bit of overkill, what with your very minimal requirements for IT in general, but even that would work if you’re prepared to buy three 1TB SSD for the Cello.

    Which would be an excellent and cheap alternative … if only the Cello supported three SATA drives.

    Are you quite sure you have thought this one through, Robert?

  39. Dr Loser says:

    I’ve been using RAID for a decade and have only needed to sync drives a few times.

    You’ve been using RAID0, apparently. Do you know what percentage of server users depend upon RAID0?

    Go on, Robert. Guess.

    If this level of data stability will satisfy you, you don’t need a server. You need a Tinker Toy.

    Which is very lucky, really, because a Tinker Toy is precisely what you propose to purchase.

    At $295, which no matter which way you cut it is still twice my quoted price for a 4TB SSD.

    How deep is that hole right now, Robert?

  40. Dr Loser blathered on and wrote, “syncing your drives up once a day.”

    I’ve been using RAID for a decade and have only needed to sync drives a few times. One could view making a backup as syncing, but it certainly isn’t necessary at the level of drives, only of file-systems. I doubt a daily backup will require writing more than a few hundred MB per day. GNU/Linux has many tools for that task: ls, tar, xfsdump, systemrescuecd, rsync, find, …
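
    A minimal sketch of such a daily backup with rsync, assuming the backup drive mounts at /mnt/backup (paths are hypothetical):

        # mirror /home to the backup drive, deleting files removed at the source
        rsync -a --delete /home/ /mnt/backup/home/
        # or bundle just the day's changed files into a dated tarball
        find /home -type f -mtime -1 -print0 | tar czf /mnt/backup/home-$(date +%F).tar.gz --null -T -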

  41. Dr Loser says:

    Surely you understand that a 48KB smartphone cache is a bit small for a multi-user desktop system, eh?

    Surely I never even suggested such a thing, Robert.

    Surely you are pulling loony comparisons out of thin air.

    Surely you have some sort of definition of “reasonable cache size” (for your requirements) that doesn’t require you dribbling over yourself? Do feel free to share.

  42. Dr Loser says:

    Robert Pogson says:

    The Wiz: “RAID 0? On a system functioning on a Server? I assume you know that if one of your disks in the RAID dies, you lose all your data!”
    Pog: “I will do daily backups.”

    That, of course, is how RAID systems have worked since time immemorial, Robert. They’re not too finicky about parallel reads/writes and journaling and stuff. That involves the sort of SATA bus management that might spike your dismal little crud motherboard at an unaffordable 20+ Watts or so.

    Far better to do it your way, and invent, well, let’s call it RAID-minus-minus. At a hardware level, the SLA should obviously be syncing your drives up once a day.

    I really can’t see why any reputable data center these days shouldn’t take your wise advice on this one, Robert.

  43. Dr Loser says:

    $150 X 2, one for each client, is $300 and no server.

    What sort of rancid lunatic would buy two 4TB SSD drives, one for the server and one for the client, Robert?

    And let us posit that said rancid lunatic is hopelessly enamoured of the 1970s concept of Thin Clients.

    What sort of even more rancid lunatic would insist on a 4TB SSD drive for a thin client?

    Why don’t you just admit that you had a temporary brain-fart and got your sums wrong?

    If you’re in a hole … stop digging.

  44. The Wiz wrote, “You could use the USB 2.0 ports to add gigabit eithernet and USB 2.0 connected storage.”

    I looked at all those options and decided on the Odroid-C2 for clients. I think I will order a couple next week. Hanging storage on one of the clients might give backups some isolation from the server too. I think everything but the Cello fits in this month’s budget. The damned pension annuity is based on last year’s capital in the midst of my rolling up a bunch of small pensions from all the schools where I worked. Starting January 2017 my annuity could increase four-fold, so I might do something fancier then, but this will work for me now. Gigabit networking makes a huge difference. The house is wired with CAT-6 but many legacy devices are slow Ethernet. From now on I won’t buy anything that won’t do gigabit. The A1120 will actually do 10GbE but I would need a second server or a fancier switch. Unfortunately, the Cello won’t do 10 gigs. Good fun.
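
    A quick way to see what each link actually negotiated (the interface name eth0 is only a placeholder; yours may differ):

        sudo ethtool eth0 | grep -E 'Speed|Duplex'   # reports e.g. Speed: 1000Mb/s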

  45. Wizard Emeritus says:

    ARM = AMD

  46. Wizard Emeritus says:

    “That suggests that this device is a decent replacement for Beast’s current processor and competitive with Intel’s offerings in performance. In short, it’s good enough,”

    Now all you have to hope is that the ARM developer-board market develops in such a way that a developer board with high-speed storage attachments (i.e. SATA ports) gets delivered. But IMHO, if ARM themselves can’t deliver their “inexpensive” developer board, then what makes you so sure that the little guys are going to do any better?

    Frankly, I’m surprised that you haven’t gone for the Lemaker Hi-Key.

    http://www.lenovator.com/product/90.html

    The Hi-Key actually has some great potential for the kind of just-right computing that you aspire to. You could use the USB 2.0 ports to add gigabit Ethernet and USB 2.0-connected storage. If it works, you would actually be able to reduce your IT footprint to the size of a credit card. It would also give you hands-on experience with the tech that you are committing to.

    What say you, Robert Pogson?

  47. Thanks for that benchmark, oiaohm: “Let’s start with the first chart which shows the performance on a single core at a normalized clock speed. Taking a quick look at the integer benchmark “kDhrystone” we can see that Hierofalcon AKA Seattle actually does incredibly well here besting all of AMD’s desktop processors and inching closely towards Intel’s Sandy Bridge based offerings.”
    That suggests that this device is a decent replacement for Beast’s current processor and competitive with Intel’s offerings in performance. In short, it’s good enough, not Intel, and I can run Debian GNU/Linux on it. It’s just about perfect for me. I’m still hoping the HuskyBoard will arrive on the market sooner rather than later for the enhanced storage options, but the Cello board will certainly work for me.

  48. oiaohm says:

    http://wccftech.com/amd-8-core-arm-cpu/
    This is a benchmark of the processor Robert is looking at. Notice something interesting: if you are buying an AMD processor, the ARM processor is the best in most metrics, and in some areas it in fact leaves the Intel processors in the dust in performance per watt. So this is a question of workload.

    You have to remember these AMD chips are A57 cores, not the faster A72 or A73 cores.

  49. oiaohm says:

    This is why it’s not easy talking about ARM: there is not just one core. A73, A72 and A57 bigs can be used with A53 littles for 64-bit; A15, A12 and A9 bigs are used with A7 littles for 32-bit.
    I typed 63 instead of 53 in there. ARM is a very diverse place. AMD is not up at the best end.

  50. oiaohm says:

    For those that don’t know, the A53 is the second-best performance-per-watt ARM processor. It’s the best performance-per-watt ARM processor at 64-bit; it’s just not fast.

    The best performance-per-watt cores are A7-based and limited to 32-bit.
    http://www.armtechforum.com.cn/attached/article/Cadence_Shenzhen20151210134012.pdf
    The ARM cores that have had 10nm test chips made are the A73, A72, A57, A53, A15, A12, A9 and A7.

    This is why it’s not easy talking about ARM: there is not just one core. A73, A72 and A57 bigs can be used with A63 littles for 64-bit; A15, A12 and A9 bigs are used with A7 littles for 32-bit.

    Then you have the nm of production: all can be produced at 40nm or smaller, with smaller nodes like 10nm and 7nm being more power-effective and able to run at higher clock-speeds.

    So when comparing ARM parts you need to watch the ARM core(s) and the nm being used very closely. Failing to notice that something is 40nm when current at the time was 28nm means you end up completely misinterpreting performance.

  51. oiaohm says:

    As for (2), it’s fairly clear that the Xeon-D can happily coexist in the same idling power range as the Cortexes of this world. It’s a tad more power-hungry, but so what? In five minutes time, it will just shut down and return you to the base state.
    That is not exactly true. A 4-core 14nm Xeon-D vs an 8-core 28nm Cortex has about the same idle power, and a 14nm 8-core Cortex A57/A72/A73 is about half the idle power of a Xeon-D. These are chips with around the same performance in benchmarks. Now, if a Cortex system includes A53 cores, littles that run the idle state, then even against a 28nm Cortex the 14nm Xeon-D’s idle is massively heavy. Remember, because the A57/A72/A73 and A53 have exactly the same instruction set, transferring running applications between them is fully possible: the A57/A72/A73 are all designed to run at full load, while the A53 is designed for light loads at maximum power effectiveness. The other thing not measured in most benchmarks is the speed of return from suspend, or the fact that most multi-core Cortex processors have a Cortex-M-class processor for power management that is fully controllable by a 32-bit ARM OS, so it can be much more selective about which triggers bring cores out of sleep.

    This is the thing that is odd. An A53 core delivers between 1/3 and 1/5 of the full-speed performance of an A57/A72/A73 (because it is 1 instruction per cycle instead of 3-4.5), uses half the silicon area of an A73, and yet, even flat out, the A53 uses half the power of an idling A73, the most idle-efficient of the big Cortexes. It is all about the number of active transistors. The A53 is so low in active transistor count that you would need to run 8 of them to equal the active transistor count of one Xeon core sitting in idle. This is the thing: you can design a core that is power-effective under full load with high processing speed, or a core that is power-effective at full load with low processing speed. Instructions per second and the complexity of the instruction set directly align with the number of transistors that have to be active so instructions can be processed. The ARM instruction set is less complex, so processing 3-4.5 instructions per cycle takes fewer active transistors than an x86 Xeon processing 4 instructions per cycle. The only reason the Xeon has won is smaller transistors using less power; at the same nm it loses every single time. At idle, Xeons have never won against big.LITTLE solutions.

    Deaf Spy, there is no point looking at those 12-month-old benchmarks putting a 40nm ARM core against a 14nm Xeon while thinking that the chip Robert is looking at is 28nm. You need to find benchmarks with the X-Gene 2, which is 28nm.

    http://www.openserversummit.com/English/Collaterals/Proceedings/2016/20160413_SA102_Sankaran.pdf
    Scroll down and notice the X-Gene 2 runs at 2.5GHz and was released in 2014. Guess what: the benchmark you are quoting is garbage, and it shows, because what they show as an “X-Gene 2.4GHz” is an X-Gene 1, and there is quite a big difference between those two chips. The X-Gene 1’s top speed is 2.4GHz, while the X-Gene 2 can turbo to 2.8GHz; the reality is it runs twice as fast as its predecessor. So yes, the X-Gene 2 matches up to entry-level Xeon chips of 2015, and the generation after starts beating into the Xeon-D.

    Deaf Spy, basically the GHz of the X-Gene tells you they are benching the wrong chip. Then remember that 40nm X-Genes are power-hungry compared to 28nm ones. You have to think some of the benchmark sites have gone out of their way to make ARM cores look bad.

  52. The Wiz wrote, “RAID 0? On a system functioning on a Server? I assume you know that if one of your disks in the RAID dies, you lose all your data!”

    I will do daily backups. If 1 drive fails I might lose one iota of my data. The MTBF of modern hard drives is huge. Despite having owned dozens of hard drives over the years, only a very few have failed in the last 20 years. Further, in schools where I’ve worked I’ve only seen one failed drive. I was a maths teacher then, and the computer guy took off the cover so students could see the drive spin and seek. There was a very bad visible groove in the platter. Of the four 512GB drives on Beast, one has some bad blocks. They’ve travelled through dozens of airports with Beast in a cardboard box. tar czf is my friend. I’m also considering XFS instead of JFS. I don’t anticipate loss of data but will keep Beast whole until the migration proves itself.

    It’s a good opportunity to delete a lot of useless files and perhaps look at some “document management” instead of relying on search. Most useless stuff ends up in “~/Downloads”. I might take the advice of the tidiest teacher I ever met and automate the deletion of files in Downloads daily or use /tmp for routine downloads …
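
    A minimal sketch of that automation as a crontab entry (the 7-day threshold is arbitrary):

        # run nightly at 03:00: delete files in ~/Downloads untouched for a week
        0 3 * * * find "$HOME/Downloads" -type f -mtime +7 -delete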

  53. kurkosdr says:

    (And he is not, btw. He is not the one who uses an OS which started as a mediocre attempt to copy Unix, then MacOS X, then who knows what.)

  54. kurkosdr says:

    being technically illiterate

    *being technologically illiterate

  55. Wizard Emeritus says:

    “Transfer rates are about the same, 200 MB/s X 2 for RAID 0, 400 MB/s which is four times the speed TLW and I have been enjoying for years.”

    RAID 0? On a system functioning on a Server? I assume you know that if one of your disks in the RAID dies, you lose all your data!

    Are you sure you don’t mean RAID 1?

  56. Dr Loser, being technically illiterate, wrote, “Define “reasonably sized caches,” please.”

    Surely you understand that a 48KB smartphone cache is a bit small for a multi-user desktop system, eh? The AMD A1120 has 8MB L3 cache and 1MB per core L2, so faster inner loops and context switches, just what one needs. It makes a decent server or thick client.

  57. Dr Loser, being mathematically challenged wrote, “I quoted a price for a 4TB SSD at $150. You quoted a price for a total piece of dreck motherboard at $295”.

    $150 X 2, one for each client is $300 and no server. I get the server and two clients get the benefits of a nice hard disc RAID, and we don’t have to worry about bit-rot. Transfer rates are about the same, 200 MB/s X 2 for RAID 0, 400 MB/s which is four times the speed TLW and I have been enjoying for years.

  58. Dr Loser says:

    I take back what I said, Robert. It might appear that I hold your views on servers and performance and such in utter contempt, and that I cannot imagine you conjuring up anything that could possibly be more feeble-minded, uninformed, and generally ignorant than those views.

    Once I finally got around to reading your response to a cite on Visual Studio, I realized I was utterly wrong. I humbly apologize for this inadvertent slight on your character.

    There are really no depths of bigoted ignorance that you are unprepared to plumb, are there, Robert?

  59. Dr Loser says:

    Well, we’ve just happily diverted ourselves away from your original topic, Robert, in order to prove conclusively that you have no idea whatsoever about:
    1) Running cars, 24×7
    2) The relative benchmarks of the Cortex family and the Xeon D
    3) How a modern computer uses L2 cache
    4) What the difference in wattage is between the sleep state, the idle state, and the full-power state implies
    5) What constitutes a server, rather than a re-purposed desktop
    6) What the difference between $295 and $150 is

    … I could go on, but I hate to see you suffering in a self-inflicted cess-pit of your own ignorance. You don’t even sound as smart as Fifi right now — a difficult bar to fall under, but I have to congratulate you on your single-minded determination to do so.

    Anyway, none of this was the topic of the OP at all, was it? Let’s return to your original words of “wisdom:”

    You see, M$ wants customers who are slaves.

    Saying it over and over again does not make it true, Pog.

    Why build better software

    They’ve built it, Pog. See your cited quote on the top right hand side.

    when M$ can sell more copies and make more money instead?

    Not only do they make precisely the same amount of money from the 32-bit Visual Studio as they do from the 64-bit Visual Studio, Pog — It’s the same bloody product!

    It’s not in their interests to provide what you need.

    Except that your cite explains that what the customer probably needs is to target development to a 64-bit platform. (For informational purposes, my present company targets the “Any CPU” .NET platform. Works for both.)

    Apparently, some tardy customers prefer targeting a 32-bit platform, Pog.

    And in their case, Microsoft is providing precisely what they need.

    (With, I might add, a very clear upgrade path to 64 bit, when they get around to it.)

    This is the 21st century and everyone, including ARM, makes 64-bit solutions to IT problems, except M$. They couldn’t be bothered.

    Except when they can be bothered, Pog, as here.

    You really don’t bother to read your own cites, do you? Or even think about them.

  60. Dr Loser says:

    Apart from all that other rubbish that you have just confused yourself with, Robert — rest assured, you won’t confuse anybody else — let’s just check out that last assertion, shall we?

    One server costs about the same as one big SSD so I get a bargain.

    I see you have progressed from not being able to read your own cites to not even being able to read the premise behind my cites. Does old age creep up on you that fast?

    I quoted a price for a 4TB SSD at $150. You quoted a price for a total piece of dreck motherboard at $295 … on advance purchase, which, as Dougie wisely points out, is not a good sign.

    Do please explain how the two costs are “about the same.” Particularly since the purchase of some sort of hard disk storage is non-optional if you want to run a server.

  61. Dr Loser says:

    But, to return to the salient points at issue.

    The new system will likely be faster

    Considering that your baseline is Beast III, Robert, I should bloody well hope so. Otherwise you have just bought a useless piece of dreck motherboard at $295 for no gain whatsoever.

    OTOH, however much faster it is, you have still bought a useless piece of dreck motherboard at $295. So, cost/benefit wise, it might not be the way to go.

    And of course your entire proposition here has nothing at all to do with the relative merits or demerits of a top-end Cortex solution and a bog-standard Xeon-D. Beats me why you bothered to bring it up, really. Did you really think we are all so stupid that we wouldn’t recognize diversionary tactics when we saw them?

    … and the CPU is no concern because it has reasonably sized caches …

    1) Define “reasonably sized caches,” please.
    2) Explain the “sweet point” on the two dimensional graph that represents the trade-off between “utterly crappy underpowered CPUs” and “reasonable sized caches.”

    Caching web pages on a proxy server in an underprivileged school in Northern Manitoba on behalf of thin clients is one thing, Robert.

    Caching data on L2 is … well, you’ve suddenly discovered your new vocation as an Instant Expert In Electronic Engineering. You tell me.

    It’s not really the same beast at all, is it, Robert?

  62. Dr Loser wrote, “you are completely wasting your time buying a server in the first place.”

    Nonsense. A server gives me great value just storing files or caching them so I don’t need multiple copies around the system. Then there are databases and web-applications and networked services, none of which needs a lot of power to please just a few users. One server costs about the same as one big SSD so I get a bargain.

  63. Dr Loser says:

    The other way to cheese-pare on power consumption, Robert — and I leave aside, say, the WiFi hub, because somehow I don’t see you doing without that little power-crazed luxury — is to purchase nothing but solid state disks.

    I suppose you could use your hard drives — are you still running with five-platter Winchesters? — as a sort of flywheel, the better to help the alternator on your Tractorlet cough into action every so often. But other than that, they’re just gobbling watts at an outrageous rate. Even when idling?

    Always willing to be of assistance, I have taken the liberty of assuming your server storage needs at a notional 4TB. (You can always swap in and out to memory sticks/cards, which are good value for 64GB these days, particularly in bulk. Which is good. You’ll need bulk. I also recommend the excellent rsync for this purpose.)
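    A minimal rsync sketch for that sort of swap-out (source path and mount point are assumed):

        # mirror the server's data onto a mounted stick, deleting stale copies
        rsync -a --delete /srv/data/ /media/stick/

    The trailing slashes matter: they copy the contents of /srv/data rather than the directory itself.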

    Now, here, you’re onto something. You can get a SATA 4TB drive for around $150. I strongly recommend that you do that.

    Of course, it would also work with the Wintel equivalent, so it’s a bit of a wash in power saving, really.

  64. Dr Loser says:

    Deaf Spy wrote, “ACPI sleep”, as if that mattered.

    It obviously matters if the comment, from some semi-anonymous fool in Manitoba — masquerading as you for the moment, Robert, because obviously you will deny a connection between the following comment and Deaf Spy’s riposte via ACPI — asked the following nitwit question:

    So, you leave your car running 24X7?

    I think we can all, including you, Robert, answer that semi-anonymous fool with the resounding response: No, and what does that have to do with the price of fish in any case?

    I regularly suspend PCs.

    Good lad. I’d pat you on the head if you weren’t about 5,000 miles away. Well done!

    Let’s step through this gently, for the ancient and the bigoted and the hard-of-thinking. (These are not necessarily orthogonal candidates.) Oh, and just in case that semi-anonymous nitwit turns up and masquerades as you again, Robert — for him, too.

    1) ACPI idling. I believe Linux has finally caught up by now (and I believe it was more a political issue, as usual, than a technical issue in the first place), but this is ubiquitous. And here’s an ancient PC World link to prove that Wintel does it at least as well as anybody.
    2) System idling whilst in State One, or whatever ACPI calls the basic operational state. Here’s a representative Anandtech cite that shows Xeon-D idling in the very low 30s.
    3) A server doing Actual God-Damned Work.

    I think we have disposed of your weaselly arguments concerning (1). Incidentally, this is the base state. Do nothing with your server for, say, the hours whilst you are asleep, and it will spend its time in this base state. I hardly think a kilowatt-hour every 100 hours of darkness is going to rupture even your moth-ridden brittle old piggy bank, Robert.

    So much for that 24×7 car analogy, btw, just in case the quasi-anonymous nitwit is still online.

    As for (2), it’s fairly clear that the Xeon-D can happily coexist in the same idling power range as the Cortexes of this world. It’s a tad more power-hungry, but so what? In five minutes time, it will just shut down and return you to the base state.

    Now, it so happens that almost nobody at all runs a server of any kind in the expectation that it will spend significant time in (2), Robert. Which means that, for the sane portion of the server-owning world, this particular figure hardly matters at all. It’s a rounding error.

    Your server workflows are different, however. Presumably you stutter between a minute of idling, a minute of spinning up a new Linux kernel, a minute of idling, a minute of consulting your badly-indexed MySQL database, a minute of idling … and so on.

    In that case, you might almost be the perfect target audience for a piece of worthless dreck like the Cello. Maybe the only target audience.

    Problem is, it’s still underpowered by however many factors Deaf Spy brought up, when working at full power as in case (3).

    And if you’re not interested in doing any work at full power as in case (3), you are completely wasting your time buying a server in the first place.

  65. Wizard Emeritus says:

    “The new system will likely be faster and the CPU is no concern because it has reasonably sized caches.”

    Whatever. Let’s come back to Mr. D’s question:

    “Again. Why do you believe that x86 is somehow inherently inferior?”

  66. Deaf Spy wrote, “ACPI sleep”, as if that mattered. I regularly suspend PCs. I can wake them up by turning on my wireless keyboard, touching a power button or sending a magic packet on the LAN so I don’t even have to go to it. I run Debian GNU/Linux.
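    For anyone wanting to copy the magic-packet trick, a minimal sketch (interface name and MAC address are placeholders; wakeonlan is a small Debian package):

        # once, on the target machine: arm the NIC for wake-on-LAN
        ethtool -s eth0 wol g
        # later, from any machine on the LAN: send the magic packet
        wakeonlan 00:11:22:33:44:55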

    The idling to which I refer is the frequent pauses humans take: reading, examining or reflecting on what’s on the screen, focusing on some music playing, or responding to some distraction or call of nature. That can actually drop utilization to ~1% or so, even on my old stuff. I regularly build kernels while surfing with little/no impact on performance. CPU is not limiting my browsing experience. Heck, I know young people who’ve quit using legacy PCs because there’s no appreciable advantage over their nimble fingers and sharp eyes. I was taken for a ride in the bush on Sunday and the driver stopped at a fork in the “road” (off-road by most measures), called up Google Maps, made his decision in seconds and carried on. I would have made a different decision but he had the complex information he needed on an ARMed device in seconds. What good would a more powerful CPU be to him, me, or most other browsing users?

  67. Deaf Spy wrote, “how exactly will caching and gigabit networking help your server handle the browser rendering and JavaScript it serves for your thin clients? I will tell you – no way.”

    My web apps respond in ~1.5s. Google responds in ~2.5s. Do I need more speed? Nope. BTW, those numbers are to a thin client using standard X over SSH using my ancient setup and 100 mbits/s networking. I have 9 tabs open. I can work in one while another loads if there is slowness. Speed is not a problem. The new system will likely be faster and the CPU is no concern because it has reasonably sized caches.
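    The thin-client side of that amounts to one command; host and user names here are illustrative:

        # run the browser on the server, display it on the client via X forwarding
        ssh -X robert@beast x-www-browser

    On Debian, x-www-browser resolves through the alternatives system to the default browser.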

  68. Deaf Spy says:

    So, you leave your car running 24X7?

    Of course not. But you know, we, in Windows and OSX land, have something called “ACPI sleep”. Something Linux still can’t pull off properly. Perhaps that is why you don’t know about it, and your metaphors are highly exaggerated.

    Again. Why do you believe that x86 is somehow inherently inferior?

    Don’t forget. You need the CPU for your browsing sessions. You can thank Google about that, btw.

  69. Deaf Spy says:

    Not true at all.

    I brought proof. Where is yours? Or do you just wish it were not true? 🙂

    what my server will lack in CPU power it will make up in price, caching and gigabit networking

    And how exactly will caching and gigabit networking help your server handle the browser rendering and JavaScript it serves for your thin clients? I will tell you – no way.

  70. Deaf Spy wrote, “The moment you need some processing, however, ARM goes down the drain.”

    Not true at all. The benchmarks show obsolete ARM stuff is still in the game. If you must have a Cadillac, by all means buy one, but I don’t think anyone needs a Cadillac, certainly not ordinary users of IT who want to find out what’s happening. Google and others do most of the computation. Thin clients in particular need a lot less oomph and what my server will lack in CPU power it will make up in price, caching and gigabit networking compared to our present state which is pretty awesome.

  71. Deaf Spy says:

    Btw, I already stated that ARMs are great for servers that only cache data or serve static pages. The moment you need some processing, however, ARM goes down the drain.

    If you plan to serve only static pages at home, great. But I don’t think so. Don’t forget which is the most resource-intensive (esp. CPU) application you are going to have to run to serve your home: the web browser. With ARM, you’re looking forward to an inferior experience. Don’t you believe me? Take a word from someone else: http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/.

  72. Deaf Spy wrote, “it is not idling that counts. It is the not idling that is important”.

    So, you leave your car running 24X7? Same for your furnace, TV, lawn mower? Be reasonable. Beast I was perfectly adequate. Dual core would be fine today. I have quad-core today. TLW is using a 400MHz/100mbits/s thin client. We both will be upgrading for a very modest price. Odroid-C2 clients: $40 each. Server mobo: $300. For two clients that’s $380, $190 per user. I consider anything over $100 as overkill. You might get two Intel Atom mobos for a similar cost but we’re also getting a nice server and saving a bundle by sharing its resources: storage, RAM, networking…

  73. Deaf Spy says:

    Clockspeed is similar. Cache sizes are similar.

    So what? I showed you real examples of how clockspeed doesn’t matter. Even you admit “that benchmark does show remarkable performance for Xeon D” (http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion/13)

    What you fail to realize, Robert, is that it is not idling that counts. It is the not idling that is important. I hinted that by mentioning Turbo Boost, but you didn’t get it. You can say that you personally don’t need much speed, but that is a very subjective point, and it cannot be a valid argument for your generalized statements.

    Again. Why do you believe that x86 is somehow inherently inferior? All facts and benchmarks tell exactly the opposite.

  74. Deaf Spy wrote, “Xeon-D has many more times the performance advantage.”

    Facts not in evidence. Clockspeed is similar. Cache sizes are similar. Throughput is similar.

    See this benchmark of Xeon D v X-gene 1
    For web infrastructure they find Xeon D using more power than X-gene 1. X-gene 1, btw, is 40nm tech. Folks are now working on version 3 which is 16nm FinFET.

    Now, that benchmark does show remarkable performance for Xeon D, but then there’s the matter of price… Here’s a Xeon D motherboard from Gigabyte for $759 CAD.

    Do you really think I should spend double or triple the price to get a more capable server with similar performance, likely idling? Quoting the benchmark, “the pricing and power envelope (about 60W in total for a “micro” server) of the Xeon D still leaves quite a bit of room in markets where density and pricing is everything. You do not need Xeon D power to run a caching or static web server as an Atom C2000-level of performance and a lot of DRAM slots will do”.

    Now you may think that a powerful CPU is needed for desktops but that’s not been my experience. Users had a perfectly useful desktop on Beast I, 1.5gB RAM, and 1.8gHz clock. I am proposing 32gB RAM for 2 or 3 users, when TLW and I have been living nicely in 4gB. BTW, Beast I could please 30 simultaneous students and was 32-bit… Admit it. People do read and look at pictures. The CPU is idling most of the time. I am alone on Beast II at the moment and while researching and writing this comment, load average is about 0.14 and 1.3gB of files are cached. You do know that with GNU/Linux I can please users with a load average of more than unity, eh? That’s what true multi-user multi-tasking operating systems do. I don’t need more throughput at all at any price.

    The least expensive Xeon D that Intel offers today is $199 USD. I’d rather spend money on RAM rather than Intel, thank you.

  75. Deaf Spy says:

    cores + frequency = POWARR! !

    Let’s take a look at the frequency.

    At single core tasks:
    – Q6600 @ 2.6 GHz performs about 50% faster than Pentium D @ 2.6 GHz.
    – Pentium G620 @ 2.6 GHz performs about 50% faster than Q6600 @ 2.6 GHz.
    – i7 @ 3 GHz performs several times faster than Q6600.

    As has been proven in both practice and the academic literature, N cores never translate into N× performance. Usually, they don’t even translate into (N/2)× performance.
    Because most applications are:
    1. Single-threaded,
    2. Spend most of their time idling (even when they support heavy multi-threading, like databases, for example).
    Further, writing strongly-performing concurrent code that scales well enough to beat the overhead is hard.
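    Amdahl’s law makes that precise: if p is the fraction of a job that can run in parallel, the best possible speedup on N cores is

        speedup(N) = 1 / ((1 - p) + p/N)

    so even a program that is 90% parallel tops out at 1/(0.1 + 0.9/8) ≈ 4.7× on 8 cores, and at 10× on any number of cores.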

    There is a reason why Intel do “Turbo boost”.

  76. kurkosdr says:

    Why do you believe that x86 is somehow inherently inferior, to repeat Dr. Loser’s question?

    Because of script kiddie hardware performance assessment ™.

    cores + frequency = POWARR! !

  77. Deaf Spy says:

    Really, Robert. Why do you believe that x86 is somehow inherently inferior, to repeat Dr. Loser’s question?

  78. Deaf Spy says:

    TDP Xeon-D at 2gHz+ and 14nm, 35-45W
    TDP A1120 at 1.7gHz and 28nm, 25W

    Yep. ARM is more efficient. The A1120 at 14nm would be around 12W.

    No. Because Xeon-D has many more times the performance advantage.

    A Pentium at 75 MHz consumes about 8W. By your fabulous logic, the Pentium is more efficient than the A1120.

  79. oiaohm says:

    ARM’s advantage is that one group writes the instruction set, and everyone (including Nvidia, using Transmeta-style technology) processes exactly the same instruction set as everyone else, with the understanding that you cannot expect binaries to do anything vendor-special for you.

    With x86 you have unique Intel, VIA and AMD extensions, and then internal differences within Intel’s own line: the Xeon Phi, Xeon server and Core CPUs have different optimisations and contain different instruction sets.

    Something else easy to forget is NEON in ARM.
    https://www.arm.com/products/processors/technologies/neon.php
    Mali graphics on ARM came out of NEON. All current-day big ARM cores include a limited amount of GPU-style processing, with or without a Mali processor. This is why the idea that an ARM core needs a GPU to process graphics is so wrong. Intel’s SIMD instructions were never designed around doing full graphics processing.

    NEON does not clash with the ARM core’s floating-point processor the way the old Intel SIMD instructions do.
    https://www.arm.com/products/processors/technologies/vector-floating-point.php
    Then you look at the floating-point processor and find that it also includes optimisations for graphics work.

    ARM does not have a standard floating-point processor but a vector floating-point processor, suited more to 4D work (3 dimensions plus time) than to simple 100×100 arithmetic.

    So a Xeon has an Intel GPU sitting next to it so that it can perform accelerated 3D operations. An ARM64 CPU core has a lot of 3D operations built into the base instruction set; adding a GPU is only to go faster, not because the core cannot perform GPU-style operations with reasonable effectiveness in the first place.

    ARM had the advantage of designing its FPU and SIMD instructions after Intel and others had been doing them for a while. So Intel carries a lot of pointless legacy instructions it must keep supporting for x86 compatibility.

    A modern ARM64 core looks a lot like the very old Atari blitter: a general CPU with graphics-processing instructions in the primary instruction set, not in some separately-accessed subset.

    A lot of ARM’s differences come from the fact that it was designed as a CPU for mobile phones and could not depend on the existence of a GPU for anything: if the CPU core by itself cannot do something, the user of the device may not be able to do it at all, since in an item like a mobile phone there is no such thing as a plug-in expansion card. People want to be insulting because ARM chips are historically mobile processors. The problem is that desktop/server processors like x86 could afford to be sloppy in design, because the user could always plug in a specialist processing card to cover the defects.

    The reality is that most x86-vs-ARM benchmarks are run where x86 is strong, not where x86 is weak.

    http://www.linleygroup.com/cms_builder/uploads/x-gene-3-white-paper-final.pdf
    The X-Gene 3 chip is still A72-class, but at 16 nm, and it shows basically the same performance per watt as a Xeon-D. Remember, ARM still has the A73 rabbit in its hat, which cuts power usage while lifting performance by 1/3. The basic problem with ARM at the moment is getting your hands on a decent chip.

    TSMC is working at 10 nm today, cranking out ARMed chips for mobile phones. TSMC has announced it is working on 7 nm for ARM servers for the middle of 2017. This is a huge drop from where we are.

    The first 5 nm prototype was taped out in October 2015. So it is just a matter of TSMC and others getting their fabs ready for mass production, and TSMC says sometime before 2020 they will be at 5 nm mass production. 3 nm is very close to silicon’s production wall: either chips will work at 3 nm or they will not. 3 nm is either the last silicon chip size or the first carbon-based size. 3 nm is only 15 silicon atoms (at 0.2 nm each) across, which is not much space to play with, and at 1 nm, 5 atoms, it is basically not workable. Carbon is 0.1 nm, so there are 30 atoms to play with at 3 nm and 10 at 1 nm. I would not expect to be able to go to 0.5 nm. So the race to smaller has to end soon.

  80. Dr Loser wrote, ” Heat dissipation. You appear to be obsessed with this. The Xeon-D proves you wrong.”

    Let’s see:

    TDP Xeon-D at 2gHz+ and 14nm, 35-45W
    TDP A1120 at 1.7gHz and 28nm, 25W

    Yep. ARM is more efficient. The A1120 at 14nm would be around 12W. Guess what? TSMC is working at 10nm today cranking out ARMed chips.

  81. twitter says:

    What is all this Wintel bullshit? Robert Pogson shows us a clear example of how Microsoft screws their own users intentionally, and the Microsoft trolls come out to brag about how uncooperative software owners are? It’s true that free software can’t do some things, but that’s because software owners are greedy monopolists. Giving money to them is worse than wasting it; it perpetuates the difficulty.

    As usual, the trolls claim to represent the free market while insulting a potential customer, something that can only happen in a monopoly. Microsoft is a typical monopolist, with the resources and lack of morals required to get successful competitors thrown out of the marketplace. They did the same thing with netbooks, because Asus and others had substantial Wintel revenue to lose, but were unable to do anything about Android or Chromebooks outside of software patent extortion. Microsoft is a pioneer patent abuser. It is only in that kind of environment that a company might hire people to abuse potential customers and irritate yet more of them reading their blogs. Of course, Mr. Pogson and most readers are not potential Microsoft customers, but they should be treated that way to avoid turning off the real thing. Evangelism as “war” has got to be the dumbest of the few things Microsoft ever invented.

    So, Mr. Pogson, if you are in the market for new computers, I would recommend checking out the FSF Respects Your Freedom hardware certification program. They are now able to offer complete computer systems, like refurbished laptops and new servers with completely free software from the firmware up.

    A free software computer won’t play Bluray yet, but set-top boxes that do cost about $20 at Walmart, if you must have something that can wreck your TV with digital restrictions. You are better off ripping plain old DVDs with DeCSS so you can play your movies anywhere and use clips for any purpose. No need to buy a dysfunctional $1,000 Microsoft watt-burner Media Center. Heck, almost no one has bought those, or much of anything else Windoze, since Vista. I’ve still got one of those from when CompUSA went out of business in the Vista failure. LOL, Wintel power saving. You trolls just kill me.

  82. oiaohm says:

    4) Recently, the quality of pipelining and branch prediction.
    Dr Loser, that is where the big difference between ARM64 and the Xeons and OpenPOWER lies.

    Both OpenPOWER and the Xeons are hyperthreading-style designs with branch-prediction hints in their opcodes. ARM64 just throws real cores at the problem, with constant-width instructions, and does not bother with predicated execution at all.
    https://www.element14.com/community/servlet/JiveServlet/previewBody/41836-102-1-229511/ARM.Reference_Manual.pdf
    “The A64 instruction set does not include the concept of predicated or conditional execution. Benchmarking shows that modern branch predictors work well enough that predicated execution of instructions does not offer sufficient benefit to justify its significant use of opcode space, and its implementation cost in advanced implementations.”
    So on random events ARM64 does not suffer the pipeline flushes caused by mispredicated paths, because the CPU simply treats each fork as having equal odds of being taken. Both Xeon (x86) and OpenPOWER alter behaviour based on the believed odds of an execution path being taken, so they are open to particular DoS attacks that will not work against an ARM64.

    This difference is also the cause of
    3) Some measurement of cache-busting. (Your choice of measurement.)
    Yes, the higher cache-busting rates of Xeon and OpenPOWER trace back to branch-prediction failures that ARM64 simply does not suffer from.

    5) Related to (4), the quality of compilers that understand the said behavior of the underlying chip.
    ARM64 chips require the compiler to understand less about the chip, because the chip does not have an instruction-guided branch predictor and instructions are all 32 or 16 bits wide no matter the mode, so alignment is super simple. ARM64 chips don’t require compilers to tune instruction selection to the underlying chip beyond instruction-set differences. Point 5 goes to ARM64 because the compilers are less complex: the A57, A72 and A73 contain the exact same set of instructions, extensions included, and the best-performing binary for all 3 is absolutely identical. That is a big difference from x86 solutions.

    1) Instructions per cycle. Not especially interesting, but it’s certainly one way.
    http://techreport.com/review/28189/inside-arm-cortex-a72-microarchitecture
    This metric, Dr Loser, does not work that well against the A72 or A57, and works even worse against the A73. Remember, ARM cores are not hyperthreaded, and you can fit at least 2 A73/A72/A57 cores in the silicon space of one Xeon core.

    Now, the A72 and A57 doing 3 instructions per cycle per core against a Xeon at 4 per core sounds like a win for the Xeon, but take it per thread per cycle instead of per core. The Xeon is hyperthreaded, so unless you disable hyperthreading that is 2 per thread, while the ARM A72 and A57 chips have double the number of cores, so per thread it is 3.

    There is a catch here. If you read my earlier link you should notice something odd about the A72 and A57: 3 decoded instructions turn into 5 internal CPU operations on ARM, yet the A57 and A72 have the means to execute 8 internal operations, with the 3 spare slots reserved for anything that happens to be slow, which turns out not to be much. So what does the A73 do? It decodes 9 instructions at a time instead of 3, leaves the internal 8-operation limit unchanged, and runs at basically 4.5 instructions per clock.

    Basically, as long as you double the cores and the Xeon cores are running hyperthreaded, the A57/A72 are ahead of Xeon solutions in instructions per cycle. And with the A73, the next generation just reaching mobiles now, at 4.5 instructions per cycle per core even a non-hyperthreaded Xeon is behind.

    2) Heat dissipation. You appear to be obsessed with this. The Xeon-D proves you wrong.
    In Moonshot, smaller heat sinks are required on the ARM cartridges, due to lower heat production than even a Xeon-D at the same performance levels.

    6) The ability to use redundant SKUs on a die as extra GPUs. I’m only mentioning this because it is relevant to the future, Robert. We are all aware that you are mired in the dim, distant, past.
    What the hell are you talking about? SKUs don’t exist on the die. SKU means Stock Keeping Unit; it is what you use to order an Intel chip, not a part of the silicon. This shows how much Dr Loser loves making crap up. I guess you mean cores; Linux has software rendering, and any CPU core can be used for GPU assistance, with nothing special about being x86, ARM or POWER for that.

    And a few other things, involving the sort of modern buses and a multitude of RAM slots and SATA connections and so on that the Cello doesn’t have a prayer of matching up to. Not to mention a modern OS that takes full advantage of my points (1) to (8). But let’s leave those challenges to one side.
    Basically, you should not have included 2, 3, 4 and 5, because those differences run in ARM64’s favour, and 6 is pure invalid garbage.

    The kind-of-valid ones are really limited.
    1) Instructions per cycle. Not especially interesting, but it’s certainly one way.
    This is a toss-up depending on whether hyperthreading is on or not, and in that case it becomes clear that GHz is the deciding factor more than anything else. 6 is bull because it makes very little difference on Linux, where PRIME processing sharing will in future allow sharing any CPU power with the GPU.

    8) Scalability. Now, against what Robert is looking at it might be an issue, but overall ARM64 Moonshot undermines that, along with other ARM64 solutions.

    7) Power management of any single chip. (The Xeon-D is very good at this.)
    The advantage here is pretty much all nm. An A57, A72 or A73 made at the same nm as a Xeon-D beats the Xeon-D power-wise. Notice how I have been complaining that what Robert is looking at is at too high a nm: at a high nm, no matter how well the chip’s power management is designed, it just cannot be great. The silicon design of ARM power management, coming from the mobile-phone world, is many times more advanced than what Intel has, though of course this can be fully undone by using a large nm. Xeon chips winning here is nothing really great.

    So, Dr Loser, your 8 points are mostly invalid.

  85. The Wiz wrote, “you overlook the presence of the Mali GPU on the chip in your vaunted ARM board.”

    The AMD A1120 SoC, which rides on HuskyBoard or Cello, has no GPU. It’s purely a server chip. One can add video by PCI-e. There is no extra video chip onboard and no video connector. I don’t need one, as I will use X over gigabit/s Ethernet.

  84. Dr Loser wrote, “Do you have a substantive reason to plonk down $295 on a wretched, outdated motherboard that can only support 16GB (on two slots, both DDR3) for the price you are prepared to pay for RAM?”

    The Cello can use much more than 16gB. I will limit it to 2X16gB merely because that’s my estimate of maximum future need: 2×2×2 times Beast’s current 4gB.

  85. Wizard Emeritus says:

    “This is for a server. There won’t be anyone to view a screen on Beast if there were a screen. Beast will be in a rack in the basement. ”

    You are avoiding my point. You have objections to paying for a Xeon-D because it has a GPU that you “don’t need”, but you overlook the presence of the Mali GPU on the chip in your vaunted ARM board.

    So how come it’s no problem buying an ARM chip with a GPU, but a problem considering a Xeon-D with a GPU, eh?

  86. The Wiz wrote, “You want to leverage the ARM SoC market to get your next generation beast, you get a GPU whether you want it or not. The fact that you lack the programming skills or software to utilize it is nobody’s problem but yours.”

    This is for a server. There won’t be anyone to view a screen on Beast if there were a screen. Beast will be in a rack in the basement. The unit comes with a storage device with a working GNU/Linux, so I can use that to bootstrap my own installation, and it is bootable over the network so I have two means of setting it up. I’ve run a lot of PCs with displays, so I could put one on the PCI-e slot if I wanted.

  87. Dr Loser says:

    I think we’re agreed on your limit of 16GB RAM, once you acquire this devastating Cello of yours, Robert. One other small hardware-related question, then.
    You will, of course, be investing in a solid-state disk drive.
    What is your projected capacity for said solid-state disk drive?
    Oh, and another question. How many SATA connections will remain for other use on your Viola?

  88. Dr Loser says:

    x86 is an inefficient design…

    Ah, so you’ve finally settled on your Fifth Career as an electronics engineer, Robert. Bravo!

    Inefficiency can be measured in several ways.

    1) Instructions per cycle. Not especially interesting, but it’s certainly one way.
    2) Heat dissipation. You appear to be obsessed with this. The Xeon-D proves you wrong.
    3) Some measurement of cache-busting. (Your choice of measurement.)
    4) Recently, the quality of pipelining and branch prediction.
    5) Related to (4), the quality of compilers that understand the said behavior of the underlying chip.
    6) The ability to use redundant SKUs on a die as extra GPUs. I’m only mentioning this because it is relevant to the future, Robert. We are all aware that you are mired in the dim, distant, past.
    7) Power management of any single chip. (The Xeon-D is very good at this.)
    8) Scalability.

    And a few other things, involving the sort of modern buses and a multitude of RAM slots and SATA connections and so on that the Cello doesn’t have a prayer of matching up to. Not to mention a modern OS that takes full advantage of my points (1) to (8). But let’s leave those challenges to one side.

    Pick any one of those eight points, Pog. You are, I believe, a Born-Again Self-Confessed Electronic Engineering Hardware Maven.

    Just one of those, Pog. Why do you believe that x86 is somehow inherently inferior?

    There’s also the tiny little question of 9nm fab tech, but hey, what’s something like that between friends and colleagues? Forget I even mentioned that future x86 goal.

  89. Dr Loser says:

    Clearly, AMD is beholden to Intel for x86-AMD64.

    If I want to be free of Intel completely, I will choose ARM or MIPS.

    Those are two wholly unconnected propositions, Robert.

    Have you considered learning first-order predicate calculus? It’s fairly easy. Half a day on the Web and you should be able to master it.

  90. Dr Loser says:

    I don’t even need a GPU in my server. Why pay for one?

    That depends upon the price, Robert. And the requirements. Do the maths or cost-benefit analysis.

    Do you have a substantive reason to plonk down $295 on a wretched, outdated motherboard that can only support 16GB (on two slots, both DDR3) for the price you are prepared to pay for RAM?

    Would that substantive reason be nothing more than being a cheapskate with an inferiority complex after having dropped your Wintel laptop on the tarmac of an airport in the Far North?

    I’m assuming it is. And that substantive reason is absolutely fine, as far as personal justification goes, Robert.

    But it applies to almost nobody else in the whole World.

    Just buy whatever outdated crap you feel like buying, Pog, and quit with the insubstantial preachiness.

  91. wizard emeritus says:

    Since when do you need 16 cores, Robert Pogson? When you look at the Xeon-D 1520 with its 4 2.2GHz cores, your cost is only $199.00, and you even have your choice of motherboards with all the trimmings that you wish, including ECC RAM support. And you don’t have to wait until the board manufacturer collects enough pre-orders to actually do a board run.

    Oh, and as far as the GPU is concerned, I’d like to see you come up with an ARM SoC that does NOT have one. A built-in GPU is what the ARM market demands. You want to leverage the ARM SoC market to get your next generation beast, you get a GPU whether you want it or not. The fact that you lack the programming skills or software to utilize it is nobody’s problem but yours.

  92. Deaf Spy wrote, “Take a look at Xeon-D again and come back explaining how that is an inefficient design.”

    Xeon D, 16 cores, $1500, $90 per core
    ARM A57, $25 per core

    Xeon D Broadwell cores have about 320 million transistors per core while ARM has about the same, including caches, GPU, etc. I don’t even need a GPU in my server. Why pay for one?

  93. Deaf Spy says:

    If I want to be free of Intel completely, I will choose ARM or MIPS. ARM is my choice.

    Fair enough.

    x86 is an inefficient design…

    How so? Care to go into details? Take a look at Xeon-D again and come back explaining how that is an inefficient design.

  94. Deaf Spy wrote, “why don’t you buy from AMD? Face it, AMD have chips that will trash ARM any day.”

    Clearly, AMD is beholden to Intel for x86-AMD64. If I want to be free of Intel completely, I will choose ARM or MIPS. ARM is my choice. It’s good enough and offers an adventure I can live with in my old age. I went for a walk in the bush with my kids yesterday. Blood sugar is the best it’s been in months…

    Unlike x86-AMD64, ARM offers a great vista going forward. The cores are tiny and much more amenable to a future with artificial intelligence and parallel processing. x86 is an inefficient design and deserves to be killed off by evolution. ARM is the new mouse surviving the dinosaurs. I also like the price, use of silicon and energy.

  95. luvr says:

    Deaf Spy wrote, “Then why don’t you buy from AMD?”

    Hmmm… Isn’t that what buying an A1100 would come down to?

  96. Deaf Spy says:

    I don’t have any use for their monopolistic practices.

    Then why don’t you buy from AMD? Face it, AMD have chips that will trash ARM any day. Totally no reason to spend more and get less. Even Dougie can’t get your point. And this speaks volumes.

  97. Wizard Emeritus wrote, “continue to leverage the wintel market and purchase the appropriate x86-64 based desktop hardware, slap your favorite cost-free OS (Linux) and FOSS software on top of it, and get back to your gardening.”

    Nope. Why should I prop up Wintel? M$ and Intel should work for a living instead of expecting me to send them money. A decent client, Odroid-C2, is available now and a decent server with AMD A1100 will be available shortly so there is no reason to stick with Wintel. I don’t have any use for their monopolistic practices.

  98. oiaohm says:

    https://wiki.archlinux.org/index.php/Pipelight
    kurkosdr, Netflix is stupid. Chrome and Firefox under Windows can only do 720p. Pretend to be a Chromebook under Linux, as you can because desktop Chrome on Linux and Chromebook Chrome have the same digital rights management (yes, Chrome’s DRM on Windows and OS X reports identically, and Netflix decides to reject it), and you can have 1080p Netflix. Or use Pipelight to get Silverlight and you get 1080p Netflix with Firefox on Linux. Basically, Netflix is a major level of annoyance on Debian, not an impossibility.

    Bluray decoding in VLC and other players under Linux is included; what is missing is the key.cfg file, and an updated key.cfg can be acquired from http://www.labdv.com/aacs/. This is an issue with the Bluray central body. Basically, playing back Bluray on Linux is just a level of annoyance these days. The stupid part is that key.cfg update information is in fact embedded on Blurays for older players, which is why the AACS site has a user-upload location for when people get newer discs with newer keys to add to the open-source key.cfg database. It would be so much simpler if the Bluray central body just released the key.cfg and was done with it.
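    For what it’s worth, the key database circulates as KEYDB.cfg these days, and libaacs reads it from a fixed per-user path; a sketch from memory, so check your version’s documentation:

        mkdir -p ~/.config/aacs
        cp KEYDB.cfg ~/.config/aacs/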

    Stereoscopic 3D output? Have you not heard of Bino (http://bino3d.org/) and others? Yes, it is supported under Linux. Stereoscopic 3D output is exposed in the OpenGL interfaces under Linux, including Intel’s and the open-source ones, these days.

    Does your beloved Debian do switchable graphics, an important power-saving feature?
    This works on some laptops. https://wiki.archlinux.org/index.php/hybrid_graphics

    In fact, being able to switch graphics is not the power-saving feature; being able to switch graphics and turn off the GPU that is not being used is the power saving. On some hybrid-graphics laptops Debian can switch GPUs no problem but cannot power down the unused GPU. On some, you can turn the more power-hungry GPU off but have to reboot to turn it back on at this stage (not that this helps Windows, where the same thing happens with some laptops). This is a case of buy compatible hardware or suffer. Some of the Samsung laptops that are failing to update to Windows 10 have the same kind of problem, except they get completely lost and no video card has output once Windows 10 loads. So switchable graphics is not a Linux-only nightmare. Really, we need switchable graphics standardized at the hardware level instead of each vendor doing their own quirk.
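    The kernel’s own mechanism for the power-down half of this is vgaswitcheroo. A minimal sketch, assuming a kernel built with it and a mounted debugfs:

        # list the GPUs the kernel knows about and which one is active
        cat /sys/kernel/debug/vgaswitcheroo/switch
        # power off whichever GPU is not driving the display
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch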

    And nobody forces you to develop on Visual Studio to develop on Windows, silly.

    What is stupid here is that Visual Studio Code for Linux comes in 32-bit and 64-bit, yet Visual Studio Code for Windows is only 32-bit. What is more horrible, due to Linux memory maps, projects that open in the 32-bit Linux Visual Studio Code don’t open in the Windows edition. And no, you cannot go out and buy full Visual Studio for Windows to fix this problem either.

    Maybe the fix for Visual Studio’s memory limit will be running 64-bit Linux Visual Studio Code under the Bash for Windows runtime. It is getting mega stupid.

  99. Wizard Emeritus says:

    “That’s for “10” retail. Kind of prevents you using it in a school with ~100 PCs or so…”

    But you are retired, so what’s the problem?

    “That cuts out the nice option of having students practise installation in a virtual machine. ”

    Nope. Microsoft has 180-day evaluation copies of practically everything they license, which are just perfect for this kind of thing!

    But none of this, of course, gets around the fact that Microsoft’s desktop OS is licensed for desktop use, not for server use. And as the good Doctor has pointed out, 99% of the users of Windows are perfectly OK with this, and no amount of fulminating and name-calling on your part is going to change that.

    My own recommendation at this point is simple. The simplest and cheapest way to perform a successful upgrade to your so called Beast System is to continue to leverage the wintel market and purchase the appropriate x86-64 based desktop hardware, slap your favorite cost-free OS (Linux) and FOSS software on top of it, and get back to your gardening.

    Of course to do that you have to live with wintel subsidized hardware.

    Or you can wait until one of these phantom server development boards actually ships with enough real power to make it at least as usable as your beast.

    It’s all good.

  100. Dr Loser wrote, “Which particular server function do you believe cannot be set up on an Intel-compatible Windows Desktop PC, Robert?”

    Read the EULA. It will tell you. You may need to hire a lawyer conversant with IT though.

    Here, let me help:“You may allow up to 20 other devices to access the software installed on the licensed device for the purpose of using the following software features: file services, print services, Internet information services, and Internet connection sharing and telephony services on the licensed device.”

    That’s for “10” retail. Kind of prevents you using it in a school with ~100 PCs or so. In case you didn’t know, some schools have reached 1:1 PC:student or even a bit more for folks who have a PC on the desk and a lab or three. So sharing a file, say, the principal’s daily blurb, to more than 20 desktops is not allowed, not because the hardware can’t do it but because M$ wants to sell server licences and CALs. That’s the kind of stuff I hate when developing an IT-system. I don’t want to be M$’s slave. That limit used to be 10 “devices”, but they raised it to 20 when the limit was of no benefit to users at all. Debian has no such limit.

    Of course, the solution is simple: use GNU/Linux from the beginning and be free of all that crap.
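    To make the point concrete: serving the principal’s blurb to every desktop in a school takes one deliberately trivial command, with no client limit and no CALs (the directory is illustrative, and any real web server would do the same):

        cd /srv/blurb && python3 -m http.server 8080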

    Further quotes: “This license allows you to install only one instance of the software for use on one device, whether that device is physical or virtual. If you want to use the software on more than one virtual device, you must obtain a separate license for each instance.”

    That cuts out the nice option of having students practise installation in a virtual machine. With thick clients that would require two licences per machine. Just silly. One could wipe the lab’s computers and have them hack away but that’s risky… They are teenagers usually.

    Further:“Remote access. No more than once every 90 days, you may designate a single user who physically uses the licensed device as the licensed user. The licensed user may access the licensed device from another device using remote access technologies. Other users, at different times, may access the licensed device from another device using remote access technologies, but only on devices separately licensed to run the same or higher edition of this software.” What the Heck does that mean? In Debian GNU/Linux, a user or sysadmin generates a key and folks use OpenSSH to their heart’s content. No need to consult a lawyer about it at all if the user can legally access the machine, which if they are my students they can. I even have some SSH-keys for TLW to do things between various clients/servers. No restriction from the GPL and other FLOSS licences at all.
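    For comparison, the whole Debian-side procedure is three commands (user and host names here are illustrative):

        ssh-keygen -t ed25519    # generate a key pair, once
        ssh-copy-id tlw@beast    # install the public key on the server
        ssh tlw@beast            # log in; no lawyer required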

    I could go on but it’s probably a waste of time.

  101. Dr Loser wrote, “It’s important because your link doesn’t work, Robert?”

    Extraneous “/”. Fixed that.

  102. Dr Loser says:

    Which particular server function do you believe cannot be set up on an Intel-compatible Windows Desktop PC, Robert?

    It’s an idle question, and I’m sure I can act as an unpaid consultant to remediate your pathetic ignorance on this, as so many, topics. In fact, I’ll volunteer myself for the task!

    Just pick one, O Miser of the Canadian Plains.

    One single solitary server function that you desire.

  103. Dr Loser says:

    And to deal with that tail-end of a Wall’O’Gibberish … Robert, Robert, please stop emulating Fifi, who is the mistress of this sort of stuff …

    Businesses and schools are buying Raspberry Pis in bulk.

    Schools, god help them, are buying Raspberry Pis for “educational” purposes. My belief is that they are ignorant and misguided, but that doesn’t really matter. Businesses are doing no such thing. You are lying to yourself, Robert. This is shameful.

    FLOSS developers are always alert to new opportunities and there are a few million of them.

    No, actually, FLOSS developers are not. They’re like the rest of us developers … scrabbling for a well-paid job.

    Being a “FLOSS developer” does not make you special in any way at all, Robert.

    There is a market for these things.

    What, Raspberry Pi? Well, obviously, Robert.

    But you’re not going to be part of that market any time soon, are you?

  104. Dr Loser says:

    Well, do the maths. How much does it cost a small factory with some automation to switch to producing such boards?

    What sort of “small factory” with “some automation” are you thinking of, Robert? And can they deal with the marketing channels? And what would you estimate the Return On Investment to be?

    I don’t have to “do the maths.” You do.

    Just because you are a miser who fervently hopes for somebody else to build that “small factory” and have an enormous success selling what is obviously an overpriced useless piece of garbage to the masses, just so that you can glom onto the results …
    … is not going to happen.

    And for why? Do the maths, Old Man.

  105. Dr Loser wrote, “unless they are shared by other people who will willingly pay money or time to fulfil your desires, they remain empty.” and “99% of people do not care”.

    Well, do the maths. How much does it cost a small factory with some automation to switch to producing such boards? They feed in some .DWG files and crank out motherboards and send them to assembly lines where robots drill holes and eager fingers insert various parts. A wave-soldering machine or more fingers and soldering irons finishes the job. Stick them in boxes and ship them out. It might cost ~$100K to do this and they likely make a profit of ~$100 per unit, so after 1000 units they start making real money, several $K per day. It works. Just put in the relevant numbers. There are all kinds of production facilities suffering because shipments of legacy PCs are declining. There is plenty of reserve capacity.

    I hope the problem is simply building a sufficient inventory not to run out of stock in the first week, as netbooks did. ASUS had to redouble production monthly for several months because they underestimated the market. It’s worth noting that these things can run Android/Linux just like a smartphone, so there could be a huge consumer market as well as the nominal “developer” market. What would a user of Android pay to never run out of storage and to have a much faster Internet experience? HAHAHA! Yes, if you live on Wintel, be very afraid of these little gadgets.

    I notice China has zero problems selling millions of set-top boxes with very similar technology. They can sell hundreds of millions with Android/Linux for consumers and millions with GNU/Linux for people like me. Smallest estimates are ~30 million GNU/Linux desktop users. I’d bet many would like a smaller box. Businesses and schools are buying Raspberry Pis in bulk. FLOSS developers are always alert to new opportunities and there are a few million of them. There is a market for these things.

  106. Dr Loser says:

    That Other OS does go out of its way to prevent certain usage like connecting more than N PCs together …

    And 99% of people do not care.

    … or using a PC as a server …

    And 99% of people do not care.

    … or sharing the software with multiple PCs.

    And 99% of people do not care.

    Face it, Robert. No company on the face of this Earth is going to target the Manitoban Miser Market. Why? Because they’d have to be insane. Your desires are elegant, simple, and admirable … but, unless they are shared by other people who will willingly pay money or time to fulfil your desires, they remain empty.

    What’s with that?

    It’s called “the market,” Robert. You have successfully managed to avoid it during your working career, but unfortunately you’re stuck with it right now.

    Oh, and incidentally, it’s trivially simple to add whatever “server superstructure” you want on top of a Wintel desktop. Or you could, you know, pay money for the superior Microsoft product.

    Your choice, basically.

  107. Dr Loser says:

    Well, it doesn’t prevent me from doing that. That’s the important part.

    It’s important because your link doesn’t work, Robert? Or it’s important because the Debian license (to which I believe you were haplessly referring) allows you to take the code and rewrite it to your own satisfaction?

    You’d better get used to doing that, Robert. Because it doesn’t appear that anyone out there is going to help you.

    BWAHAHAHAHAHAHA!

  108. kurkosdr wrote, “Does your beloved Debian do switchable graphics, an important power-saving feature?
    Bluray Movies and 1080p Netflix?
    Stereoscopic 3D output?”

    Well, it doesn’t prevent me from doing that. That’s the important part.

    “We will not object to non-free works that are intended to be used on Debian systems, or attempt to charge a fee to people who create or use such works.”

    That Other OS does go out of its way to prevent certain usage like connecting more than N PCs together or using a PC as a server or sharing the software with multiple PCs. What’s with that?

  109. kurkosdr says:

    Does your beloved Debian do switchable graphics, an important power-saving feature?

    Bluray Movies and 1080p Netflix?

    Stereoscopic 3D output? (which even intel GPUs can do today)

    No?

    So much for using your hardware to its maximum capability.

    And nobody forces you to develop on Visual Studio to develop on Windows, silly.
