The State Of Play Of Moore’s Law In China

“TSMC will be the industry’s first chipmaker to have its 7nm process technology certified, said Liu, adding that TSMC’s 7nm currently has 30-40% yield for 128MB SRAM.
 
As for 10nm, TSMC is scheduled to move the node technology to volume production by 2017, Liu indicated. TSMC will fabricate 10nm chips at the Phase-5 and 6 facilities at Fab 15, the foundry’s 12-inch wafer fab in central Taiwan.”
 
See TSMC to spend US$2.2 billion on R&D in 2016, says co-CEO
I’m considering buying IT made at 28nm this year. For some of it, I doubt I can wait a year or two for the new, improved stuff that will be near the end of the line for Moore’s Law. I’ll probably decide to buy mature technology instead of the newest. I know it will work for me without surprises.

RAM will be a major component of the new server. It’s cheap enough now to afford 16-32GB of RAM. More is better. Files cached in RAM have essentially no seek time.
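
To put a number on that, here is a rough sketch (the path is a hypothetical stand-in, not anything from this post): the second read of the same file is served from the kernel’s page cache in RAM and skips the disk entirely.

    import time

    PATH = "/var/tmp/bigfile"   # hypothetical test file, adjust to taste

    def timed_read(path):
        # Read the whole file in 1 MiB chunks; return elapsed seconds.
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass
        return time.perf_counter() - start

    cold = timed_read(PATH)     # may have to hit the disk
    warm = timed_read(PATH)     # likely served from the page cache
    print(f"cold: {cold:.3f}s  warm: {warm:.3f}s")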

On the global stage, these smaller process geometries will be state of the art and affordable within five years. This means almost everyone on the planet will have a smartphone, and anyone who wants one will have a legacy PC able to run GNU/Linux well. SSDs will be pretty standard. I’m still not buying them because hard drives cached in RAM are more reliable and less expensive.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

129 Responses to The State Of Play Of Moore’s Law In China

  1. oiaohm says:

    Dr Loser, the comparison between LLVM and OpenOffice is about development behaviour. LibreOffice picks up all of the best features IBM adds to OpenOffice, and GCC is picking up all the best features being added to LLVM. If you go back you will find most of the best features of FreeBSD being embedded in the Linux kernel. There are over 4000 examples of this repeating behaviour.

    Once you have a license open enough to embed in a commercial work without releasing source code, you also have a license open enough to be wrapped by another, more restrictive open-source license, cutting the original work off from the features added in the more restrictively licensed work that uses its source code. This happens over and over again.

    This is what people pushing for more permissive licenses completely miss. Yes, it is hard to work with the LGPL at times, but at least you don’t end up in the position where some group using a more restrictive license now has the competitive advantage in performance/features, because they are able to keep their changes under a license you are not set up to handle. Not being able to handle GPL/LGPL licenses basically sets you up as a developer for a fall.

  2. Dr Loser says:

    Just to help you out here, the LLVM License appears to be a thoroughly unobjectionable mashing together of the MIT license and the 3-clause BSD license.

    Only total Freetards would object.

  3. Dr Loser says:

    Really, LLVM is in the same position as when IBM backed OpenOffice, and it is facing the same problem. Companies want the right to fork off and do closed work; the result is that the mainline suffers from this behaviour. The result is the same over and over again, except for very rare exceptions.

    I missed this choice piece of ignorant idiocy the first time round, oiaohm.

    LLVM and Open Office and IBM?

    I presume you have some sort of reason, however weird it might be, to connect these three disparate things.

  4. DrLoser wrote, “a present-day putrid alternative like the Cello”.

    The Cello is meant as an affordable developer’s board. It is good enough for what I want. I would like better, and I might wait a few months, not a few years, to get it, but then I might not. My pension portfolio can earn more than a new Beast would cost in a single day, like today… The trouble is getting the money out for me to spend. The bank promises tomorrow… again. It’s not actually the bank’s fault. Registered Retirement Savings Plans were a great idea when they were young, but all the jurisdictions where I have worked have a different take on them, and funds have to be “segregated” into easier-to-spend and harder-to-spend categories, for reasons that are beyond me. It must have made sense to someone. We accidentally mixed the two, and it’s taken weeks to figure out how to undo that, the original funds having been invested months ago. Anyway, next year I won’t have problems finding dollars to spend on IT, and I will have enough this year to execute my plan, just not instantly. So, while I can wait, I am also somewhat compelled to wait. It’s all good. I have just enough patience.

  5. Dr Loser says:

    (Naturally, the “AMD server crowd” should read “the ARM server crowd.”)

  6. Dr Loser says:

    AMD surprised Intel by delivering AMD64 first and got to charge ~$1K for the newest chips for a while.

    And they haven’t surprised Intel since. Which is why, to return to my earlier point, the present AMD strategy appears to be to “surprise” the nascent AMD server crowd.

    Have you ever considered changing your mode of thinking, Robert? I mean, the third millennium is sixteen years old at the moment.

    Isn’t it time you brought yourself up to the present day? If nothing else, it might prevent stomach ulcers.

  7. Dr Loser says:

    These may use similar cores to the smartphone chips but with larger caches, clocks and diverse I/O like Ethernet and SATA.

    Then again, they may not. And even if they do, they’re not there yet. And server set-ups that are competitive (on any level) will naturally be priced at a pretty high margin, Robert. They only drop their price towards end-of-life.

    Even assuming that these evanescent possibilities make it into reality, Robert, you won’t see the results until much later at a price you feel worthy of your demanding cost/benefit requirements. 2019: Year of the ARMed Beast!

    Of course, you could always settle for a present-day putrid alternative like the Cello.

    Remind me again, or rather explain for the first time … precisely how much RAM are you planning to cram into those two little DDR3 slots?

    A bit of a first for a server board, that, isn’t it? Only two RAM slots, with backward-facing RAM tech.

    It’s all cheap useless rubbish. Chuckle.

  8. Computerworld reports a lot of ARMed servery is coming down the pipe: Cavium, Marvell, Qualcomm, AppliedMicro and Broadcom.

    These may use similar cores to the smartphone chips but with larger caches, clocks and diverse I/O like Ethernet and SATA. AMD is also bringing out new server chips based on ARM Cortex A-72 but modified by AMD for better performance in servers.

  9. oiaohm says:

    http://mrpogson.com/2016/05/31/armed-servery/#comment-343237

    Deaf Spy, all your benchmarks are from before the lower-nm arm64 chips came out at the end of 2015, kicking the crap out of the Xeon-D. You need a Xeon-E to beat 28nm arm64 chips. It is kinda sad that a 14nm chip loses to a 28nm one. The current ARM Moonshot modules are 28nm; first- and second-generation ARM Moonshot modules were 40nm. Yes, you want to be taking first- and second-generation modules out and inserting third-generation ones. Other than reading the board numbers they look basically identical, right down to the same part number with a different v1/v2/v3 after it.

    Robert Pogson, the main reason I don’t like the AMD offering much is that its process node is too large at 32nm, so it is not as power-efficient as arm64 can be. The step from 40/32nm to 28nm roughly halves power usage, and 16/14nm halves it again; this is due to changes in transistor manufacturing technology at each of those levels.

    It is only nowadays that Intel’s i7s can do real-time encoding of Full-HD video. ARMs are totally unable to handle this task without a dedicated GPU.
    Totally untrue, Deaf Spy. It turns out arm64 chips can do this. In fact, old armv7 chips with NEON extensions could.
    See https://github.com/danielrh/losslessh264. 4096×2304 is way past what is called Full HD. A lot of the video playback on Android devices turns out to be pure software. Have you forgotten the Android Stagefright bug? Stagefright, the in-software video encoder/decoder that lots of Android devices with dedicated GPUs had turned on, was and is doing all the encoding and decoding on the ARM processor while the GPU sits there doing nothing; one of the fixes for Stagefright was to turn GPU encoding/decoding on by turning Stagefright off. armv7 with NEON or better, at a decent clock speed, is required for real-time video decoding or encoding, and a decent clock speed normally means 32nm or better production. So the AMD ARM chips are kinda at the upper edge of what you would touch if you are planning lots of encoding/decoding.

  10. Dr Loser wrote, “Since you have no hope of conjuring up either scenario, you are basically just blowing smoke, aren’t you?”

    AMD surprised Intel by delivering AMD64 first and got to charge ~$1K for the newest chips for a while. They were popular because they could address more RAM on huge servers and workstations. ARM may not be as big a splash but they will take hold and Intel may have to exercise their ARM license.

    It’s an end-around play in football. If you can’t go up the middle, go around the obstacle.

  11. Dr Loser says:

    No benchmark comparisons, I see.

    Nor any honest attempt to discuss why AMD should be putting all their server eggs in the ARMed market.

    You’ve got nothing here, Pog. Bupkis.

  12. Dr Loser says:

    Perhaps Intel is playing the old monopolist’s game, paying folks not to install AMD.

    And perhaps not, Pog.

    I’m not about to suggest that you are a pathetic old paranoid nutter, but … just in case …

    Keep taking the ginormous purple pills, Robert.

  13. Dr Loser says:

    Well, AMD has done well going into markets neglected by Intel: 64-bitness, ARMed servers …

    Not really, Robert. Unless you can somehow conjure up financials that suggest that AMD made bucket-loads more money out of 64 bit CPUs. Or that the AMD Opteron ARMed range is presently making bucket-loads more money than the Intel server equivalent offering.

    Since you have no hope of conjuring up either scenario, you are basically just blowing smoke, aren’t you?

    Please continue to do so. Ignorant uninformed stupidity is always the best entertainment on the Web.

  14. Dr Loser wrote, “I notice that you didn’t concern yourself with the question of why AMD is going all ARM, all of a sudden.”

    Well, AMD has done well going into markets neglected by Intel: 64-bitness, ARMed servers, …

    That’s the problem with monopoly. Intel makes so much money on the old game they lack initiative to go into new areas whereas AMD is more nimble. AMD does have a problem bringing things to market though. Seattle/AMD A1100 is 2014 tech. They were supposed to get it to market in 2015. I guess Intel’s partners had something to do with that. Perhaps Intel is playing the old monopolist’s game, paying folks not to install AMD.

    Have you any curiosity about why Intel felt the need to pay people to install Intel and nothing else?

  15. Dr Loser says:

    The A1170 has 14 SATA ports and dual 10GbE. Cello cripples even the A1120 with less connectivity

    I suppose I could ask for a benchmark comparison between the A1120 and the Cello, but that would be debating the difference between a flea and a louse, Robert.

    The Cello (and, I think, the HuskyBoard, although we shall have to wait and see) is a truly pathetic little board at an obscenely high price, cost/benefit wise. How much RAM are you expecting to load onto the thing, btw? (My laptop at work has 32GB, and the Cello can only stretch to 64GB.) And of course it’s DDR3. Not a good time to jump into the market, Robert: DDR3 is end-of-life.

    … but I can afford it, it is offered for sale to me, and the Xeon D offers marginally better performance for a lot more heat and price.

    Be honest, you old miser. You could afford either alternative. And the Xeon D (a chip designed for low-end servers, not for wrist-watches) will beat the crap out of your ARMed alternative.

    It will also do so at ~45W, with a proper and responsive power management system. I imagine you’re asking for ~20W to ~25W on your wrist-watch chiplet.

    This might make a difference to you, Robert, but nobody else would give a rat’s fart for it.

    I notice that you didn’t concern yourself with the question of why AMD is going all ARM, all of a sudden.

    Intellectual curiosity? Apparently it’s not your forte, Pog.

  16. DrLoser wrote, “the Xeon D would piss all over the A1170, a chip which is noticeably absent from your wish-list.”

    The A1170 is not on my wishlist and neither is the Xeon D. So what? The A1170 is a good chip for servers and I bet it costs less than the Xeon D. The Cello, which is on my wishlist, uses the A1120: less cache, fewer cores. See the A1100 series

    The A1170 has 14 SATA ports and dual 10GbE. Cello cripples even the A1120 with less connectivity, but I can afford it, it is offered for sale to me, and the Xeon D offers marginally better performance for a lot more heat and price.

  17. Dr Loser says:

    Of course, the interesting question here is why AMD has gone down the ARM route with the Opteron A1100 series.

    The devotees of this site would claim that this is because AMD realizes that the only way to beat Intel is to use the Magic Pink Unicorn Fairy Dust Powers of ARM to get better price/performance.

    More normal and sane human beings would point out that AMD hasn’t really been able to keep pace with Intel over the last five years or so on high-margin chips. They do, however, have a well-funded R&D department and a stellar marketing arm. If you can’t beat Intel down, what do you do?

    You beat down the next “big thing,” that’s what. Which, supposedly, is the ARMed server market.

    How nice for all of you out there. If this works, you won’t even have ARM server chips to fall back on — because they will all be the monopoly of AMD.

    Chuckle.

    It’s all good.

  18. Dr Loser says:

    AMD also has an A1170 which would do quite well against Xeon D.

    Actually, Robert, the Xeon D would piss all over the A1170, a chip which is noticeably absent from your wish-list.

    Naturally you have benchmark cites to prove me wrong.

  19. kurkosdr says:

    One frame in a full-HD movie takes about 6MB.

    Nope. Not with 4:2:0 chroma-subsampling, which is the norm for 99% of videos out there.

    Here are the calculations (assuming 8-bit color):
    Y plane: 1920*1080*1byte = 2073600 bytes
    Cr plane: 960*540*1byte = 518400 bytes
    Cb plane: (same as above)

    So, 2073600 + (2*518400) = 3110400 bytes, i.e. ~2.97MiB uncompressed.

    Also, what resolution can be decoded by what processor depends on format and bitrate.
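
    A quick sketch to double-check that arithmetic (Python, purely illustrative); it also reproduces the roughly 6MB 4:4:4/RGB figure quoted above:

        # Bytes per uncompressed 1920x1080 frame at 8 bits per sample.
        W, H = 1920, 1080

        y_plane = W * H                      # one luma sample per pixel
        chroma_plane = (W // 2) * (H // 2)   # 4:2:0 halves each axis

        yuv420 = y_plane + 2 * chroma_plane  # Y + Cb + Cr
        rgb444 = W * H * 3                   # 24-bit, no subsampling

        print(yuv420, round(yuv420 / 2**20, 2))   # 3110400 bytes ~ 2.97 MiB
        print(rgb444, round(rgb444 / 2**20, 2))   # 6220800 bytes ~ 5.93 MiB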

  20. Deaf Spy wrote, “One frame in a full-HD movie takes about 6MB.”

    Garbage in, garbage out, I suppose. HD 720: 1280*720 at 24 bits comes to 2764800 bytes per frame. Folks don’t need 60Hz; 30Hz is fine, so ~2.8MB × 30 frames/s × 3600s/h is about 300GB per hour uncompressed, and even modest 3:1 compression brings that down to ~100GB, certainly doable but not on my hardware. However, I do take bathroom/snack breaks and there are commercials… There’s also nothing that says it has to be stored in RAM. My PVR does it on the hard drive, which can stream 100MB/s. The newer, larger drives do better.
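
    The same back-of-envelope, redone as a sketch under the same assumptions (720p, 24-bit colour, 30Hz; the numbers are the point, not the code):

        # Uncompressed 720p30: per-second stream rate and per-hour storage.
        frame_bytes = 1280 * 720 * 3        # 24-bit colour: 2764800 bytes
        fps = 30

        per_second = frame_bytes * fps      # bytes streamed each second
        per_hour = per_second * 3600        # bytes stored per hour

        print(per_second / 1e6, "MB/s")             # 82.944, under 100MB/s
        print(per_hour / 1e9, "GB/hour raw")        # ~298.6
        print(per_hour / 3 / 1e9, "GB/hour at 3:1") # ~99.5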

  21. I just noticed. That’s last year’s article and it’s quoting FB engineers from two years earlier… Where’s the head-to-head with the latest tech? The 4-core chip in the Cello is a slacker but it’s only 25W or so. AMD also has an A1170 which would do quite well against the Xeon D. That’s also last year’s development. ARM has plenty of new stuff in the pipe.

  22. Deaf Spy says:

    Wizard Emeritus, for a Xeon using 45 watts to perform a task, a high-grade arm64 will be using 20 watts or less.

    Except that all benchmarks prove exactly the opposite:
    http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion

    http://vrworld.com/2015/03/10/intel-xeon-d-hitting-arm-microserver-hopes/

  23. Deaf Spy says:

    So, no video existed in the era of single-core 32-bit PCs? [SARCASM]

    It didn’t, Robert. It didn’t exist anywhere beyond 320×200. Even PAL/NTSC resolution required hardware acceleration (a dedicated video tuner, or some glorified chip like ATI’s All-in-Wonder). Without it, even the top CPU with all the multimedia extensions would skip frames. It is only nowadays that Intel’s i7s can do real-time encoding of Full-HD video. ARMs are totally unable to handle this task without a dedicated GPU.

    When TOOS was recommending 800×600 or whatever it was, single-core non-accelerated video was everywhere. Heck, in those days, I was doing 1024×768 just fine.

    Playing video at 800×600? I don’t think so.

    You don’t understand the various layers of hardware acceleration, do you?

    During these dark ages, people would be happy even to have their BitBlt handled by the video driver (hardware accelerated), and the most well-to-do could even have scrolling and text rendering 2D-accelerated. Nowadays compositing managers have taken this into the 3D world.

    With the RAM some PCs have these days, one can decode a whole movie and stream it from RAM to the screen.

    Had to read this twice. I thought it was Fifi first time.

    Now, let’s take the good ol’ calculator. One frame in a full-HD movie takes about 6MB. One second at 60fps takes 360MB. One hour takes about 1.25TB. A standard movie is about 1:40. Without considering stuff like LotR, we are looking at 2TB of RAM. Sound not included.

    Now, Robert, let’s find this marvelous client workstation with 2TB of RAM.

    You are beyond hope. You have no idea how video works on a PC. You have no idea even how a graphics stack on a desktop OS works. Neither 20 years ago, nor today.

    I wonder, what real IT expertise do you have, apart from tinkering with the Linux kernel, LibreOffice and some basic network configuration?

  24. DrLoser wrote, “Conspicuously not GNU.
     
    I wonder why?”

    Ask U of Illinois. It’s their licence. It’s their code. They obviously have no issues with FLOSS. They can build their code with GNU Make.

  25. oiaohm wrote, “Question is how long before high end server end makes it down to low end server end.”

    Many developers are smallish outfits, so they can’t afford a rack of hundreds of densely packed ARMed servers. The guys who push this hardware will have to accommodate those folks. Hence the Lemaker Cello, expected out in June 2016. I’ll buy one after the first rush, as I don’t want to keep an active developer from getting one, and I’ve already spent this month’s budget on grass seed and other things for the yard/garden. Damned cold wet weather is my current problem. ARM will take care of itself. I think by the end of 2016 there won’t be much question that ARM belongs in servers. It’s just smart to get more done with less when you work for a living.

  26. oiaohm says:

    The question now is how long until Ubuntu phones with wireless charging appear, making them fully wireless.
    I should know never to joke around with the idiot Dr Loser.
    On a commercial basis? Never. Next question please.
    The top-model Ubuntu phone, the first that will do wireless-to-screen, already includes wireless charging, you idiot who does no research before commenting.

  27. oiaohm says:

    Dr Loser, LLVM binaries are slower than GCC-produced binaries; wonder why.

    What is going on here is that companies take GCC, extend it and give back, while LLVM is suffering from the problem that people are not giving back. Worse, whatever LLVM does do better, GCC can just absorb thanks to LLVM’s license, and it has.

    The first example of GCC eating LLVM’s lunch was AddressSanitizer: LLVM developers made it, then GCC just absorbed it.

    Really, LLVM is in the same position as when IBM backed OpenOffice, and it is facing the same problem. Companies want the right to fork off and do closed work; the result is that the mainline suffers from this behaviour. The result is the same over and over again, except for very rare exceptions. Highly permissive licenses mostly do not work. Even the OpenSSL issues you have complained about, Dr Loser, partly trace to the highly permissive license, where people fix X and then don’t bother pushing the fix upstream because they had no requirement to.

  28. oiaohm says:

    GNU POSIX? Check. Android doesn’t use that. Nor does Desktop Linux.
    GNU POSIX includes glibc, which Desktop Linux does use and which some applications ported to Android also use.
    GNU database stack? Check. No such thing exists.
    https://en.wikipedia.org/wiki/SQLite
    Dr Loser, you went two too far. The GNU database stack became SQLite, which Android does use, but it is no longer under a GPL license.

    Some other ARM server motherboard that has yet to come to Robert’s attention?
    I have already mentioned a different board, Dr Loser, so please stop asking me questions I have already answered, or do you like being an annoying idiot? I did state it is hard to acquire.

    Dr Loser, data-centers have a simpler time getting high-grade arm64. You can get high-grade arm64 modules for HP Moonshot systems really easily; these are no use to a person wanting a workstation or a single server. It is the same with other bladed arm64-based solutions. High-grade arm64 chips are easy for those.

    I should have been clearer: getting your hands on high-grade arm64 in smallish systems is hard. If you want a box that takes 70+ boards, getting high-grade arm64 is fairly simple. High-grade arm64 vs Xeon: the result is Xeon losing every single time. So Intel being top dog is over. The question is how long before the high end of the server market makes it down to the low end.

  29. Dr Loser says:

    The question now is how long until Ubuntu phones with wireless charging appear, making them fully wireless.

    On a commercial basis? Never. Next question please.

    Have you considered getting a date, Fifi? There must be something more worthwhile in your pathetic little life than this groveling around.

  30. Dr Loser says:

    Ah, the joy of licensing. Here we go again, with LLVM.

    Conspicuously not GNU.

    I wonder why?

  31. oiaohm says:

    http://news.softpedia.com/news/ubuntu-touch-ota-11-update-introduces-wireless-display-support-to-ubuntu-phones-504681.shtml

    The question now is how long until Ubuntu phones with wireless charging appear, making them fully wireless. So the idea of a large screen, keyboard and mouse being a reason for a computer is disappearing. Storage and other things will have to be the focus for the PC.

    This is why I am watching the coming Chrome OS support for Android applications so closely; if the result is Android applications working on the Ubuntu phone, the battle could get interesting for a while. I also hope to see Debian on Android, done by different means, get to the same place. Then the phone is a light desktop machine when wirelessly connected to a screen, keyboard and mouse. The question is what percentage of the market can be fully serviced by a device like this.

  32. Dr Loser says:

    Still and all, oiaohm, I don’t wish to denigrate your ability to rate an ARM board. And since Robert is going ga-ga for the things, which would you recommend?
    1) The Cello? (Available now, at least on preorder.)
    2) The HuskyBoard? (Not yet, but Year of HuskyBoard and all that.)
    3) Some other ARM server motherboard that has yet to come to Robert’s attention?

    I mention this purely as a point of public information. Me, I couldn’t care less.

    Over to your ineffable genius, oiaohm.

  33. Dr Loser says:

    That is a huge difference in power profile, but getting high-grade arm64 is not easy.

    Which is why nobody in charge of a data center bothers, Fifi.

    And they’re not going to bother until there is a compelling reason to do so.

    Perhaps you should send them a few cherry-picked job advertisements from Oregon? Those were amongst the most convincing cites you have managed in the last five years or so.

    Which isn’t really saying much. It’s a very low bar you have there. Moron-level, in fact.

  34. Dr Loser says:

    GNU video stack? Check. Android doesn’t use that. Nor does Desktop Linux.
    GNU audio stack? Check. Android doesn’t use that. Nor does Desktop Linux.
    GNU networking stack? Check. Android doesn’t use that. Nor does Desktop Linux.
    GNU POSIX? Check. Android doesn’t use that. Nor does Desktop Linux.
    GNU database stack? Check. No such thing exists.

    Once again, Robert, you are left with bupkis.

  35. oiaohm says:

    Wizard Emeritus, for a Xeon using 45 watts to perform a task, a high-grade arm64 will be using 20 watts or less. That is a huge difference in power profile, but getting high-grade arm64 is not easy.

    Dr Loser, when looking at something non-Nvidia, you are more often than not looking at the OpenCL version number instead of CUDA.
    https://en.wikipedia.org/wiki/OpenCL
    Yes, the current Mali is one version behind.

    Robert. It can be used to offload parallelizable tasks from the video stack — decoding in this case — but since 2010
    This goes back to the first Amiga blitter; then we went through a PC dark age of dumb hardware before we got this back again, and when it first returned it was crippled.

    To be exact, we are talking about the TMS34010, released in 1986, a general-purpose 32-bit processor with additional blitter-like instructions for manipulating bitmap data. It was used in particular Amiga models’ video cards as basically the first GPU.

    Notice something here: the 1986 Amiga GPU took general-purpose code, like modern OpenCL and CUDA are now accepting.

    CUDA started in 2007, OpenCL in 2009, and wacky experiments using GLSL started in 2004. https://www.researchgate.net/publication/232626644_Introduction_to_GPU_Programming_with_GLSL
    But it gets worse, Dr Loser. Until 2016 (yes, this year), using OpenCL or CUDA under Linux required firing up an X11 server. So we are only just starting to leave the non-accelerated dark ages for server workloads. Something to remember: Windows session 0, where services run, forbids GPU acceleration, and this prohibition applies when you are using Terminal Services as well. So lots of applications run without acceleration.
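
    To make that concrete, here is a minimal sketch (it assumes the pyopencl package and a vendor OpenCL driver, neither of which is mentioned above) that lists OpenCL platforms and devices from a bare console session, with no X11 server running:

        # List OpenCL platforms/devices from a console; no X11 needed.
        import pyopencl as cl

        for platform in cl.get_platforms():
            print("Platform:", platform.name, "|", platform.version)
            for device in platform.get_devices():
                print("  Device:", device.name,
                      "| compute units:", device.max_compute_units)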

    Basically, Dr Loser, we are just coming out of a 30-year dark age of not effectively using hardware acceleration. We are not out of it yet.

    –VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS740 [Radeon 2100]–
    Robert Pogson, that does quite a bit of acceleration. The issue has been being able to use it. One of the big changes with Wayland is having all processing server-side, then picking up what the GPU has rendered and sending it to the client over RDP or the like. Still, we are waiting for GPU drivers to lose for good the idea that we need a real screen connected to render stuff. Yes, I know this sounds stupid, but a lot of lower-end Nvidia cards without a screen (or fake screen) connected will not run OpenCL or CUDA, because they will not initialize without a screen.

    Basically we are just coming out of a dark age of history.

    Why? Because Intel, unlike you, Robert, understand where to use hardware acceleration.
    Dr Loser, that is kinda overstating things; until recently Intel suffered from the same issue: no X11 server, no acceleration. In a lot of ways the ARM makers are ahead in the idea of actively using acceleration for every task.

    We are seeing arm64 chips appear not only with a GPU but also with an advanced neural-network section. Those are interesting beasts. Advanced neural-network sections can use transistors that do not function perfectly; they came out of the question of how low in power we could go. It turns out that if you don’t care about errors you can go a lot lower in power, and in neural-network programming a few errors here or there are not a large problem.
    http://gizmodo.com/5911399/this-imperfect-processor-is-15-times-more-efficient-than-yours
    The imperfect processors currently in use are of the neural-network design. On power they are over 15 times more efficient than current arm64 or x86 performing the same task; their results are just not 100 percent perfect, instead about 98 percent correct. So it depends on the task whether it is suitable to hand to a neural-network chip, and if the task is suitable there is a large drop in the power used to perform it. Phone makers are looking at neural networks to perform voice and face recognition in the phone.

    Basically, CUDA and OpenCL are both designed to give perfectly exact results, which makes them power-heavy. These new neural-network designs are designed to give imperfect results, which makes them power-light. It’s JPEG vs PNG all over again. Hardware acceleration is going to get very interesting, as we are going to need more than one type to be effective.

    Which you won’t be able to take advantage of on your nasty cheap little mobile phone Servletto, because you won’t have a state-of-the-art chip like the Xeon D-series, which has something like 27 SKUs
    The nasty cheap mobile phone may have a better neural-network processor in it and kick the living heck out of the Xeon D at particular tasks like voice-to-text. So which one is better, Dr Loser, will come down to the task the user is performing.

  36. Dr Loser says:

    2016: Year Of The Hurd!

    Or possibly not.

  37. Dr Loser says:

    Any idea what license the X stack uses, Robert?

  38. Dr Loser says:

    The idea of using a secondary processor to write areas of screen is well understood in graphics land.

    Followed by a Wall’O’Blather.

    No, that’s not it at all, Robert. You still don’t understand the concept of hardware acceleration.

    Deaf Spy and I have tried and failed to educate you on this fairly simple thing. Clue: it’s nothing at all to do with video.

    Still, I’m wasting my time with you. It makes no difference: you will never be able to use the concept.

    I’d be better off talking to an eight year old kid, wouldn’t I?

  39. Dr Loser says:

    And to forestall your pathetic senile bleating here, Pog: no, if you slap a GNU license on something, it doesn’t automatically become GNU.

    It’s just another bit of software with a stupid license restriction.

  40. DrLoser wrote, “You still seem not to understand the principle behind hardware acceleration”.

    The idea of using a secondary processor to write areas of screen is well understood in graphics land. Transcoding does nothing to increase the framerate unless decoding is the bottleneck. It’s not. I can decode and play back at my leisure. Ever heard of “buffering”? With the RAM some PCs have these days, one can decode a whole movie and stream it from RAM to the screen. I know many chips can do transcoding in hardware, but that has very little to do with hardware acceleration if it speeds nothing up. The human eye and screens can only handle so many frames per second. What’s the point of exceeding such limits? That’s not efficient. It’s a waste of resources. Look at all the businesses with just-in-time processes. They are optimal, even in IT. Even my Beast can play full-screen audio and video with very little CPU utilization and zero hardware acceleration. “Acceleration” is a poor term for transcoding anyway. One gains very little by having special-purpose hardware for the task. Ever heard of microcoded microprocessors? Your blessed x86 is one. Why not use special-purpose hardware for the CPU? Oh, yes, that’s ARM and MIPS you want then…
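
    A toy sketch of the buffering I mean (decode_frame and show_frame are hypothetical stand-ins for a real decoder and a real display sink): a decoder thread fills a bounded queue while the display loop drains it at a fixed frame rate, so decoding speed and display speed are decoupled without any special-purpose hardware:

        # Decode-ahead buffering: fill a bounded queue of frames, then
        # display at a fixed rate; stand-ins replace a real codec/screen.
        import queue
        import threading
        import time

        FPS = 30
        frames = queue.Queue(maxsize=300)   # roughly 10 s of decoded frames

        def decode_frame(i):
            return bytes(8)                 # hypothetical decoded frame

        def show_frame(frame):
            pass                            # hypothetical display sink

        def decoder(n_frames):
            for i in range(n_frames):
                frames.put(decode_frame(i)) # blocks while the buffer is full
            frames.put(None)                # end-of-stream marker

        threading.Thread(target=decoder, args=(900,), daemon=True).start()

        deadline = time.monotonic()
        while (frame := frames.get()) is not None:
            show_frame(frame)
            deadline += 1.0 / FPS
            time.sleep(max(0.0, deadline - time.monotonic()))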

  41. Dr Loser says:

    They really want us to believe the stack known as Desktop Linux today is the OS Stallman set out to make, instead of just using its codebase among other things.

    Well, let’s see. It uses glibc, which is at this point mostly a Red Hat project. And it uses … well, actually it doesn’t really use any other part of GNU whatsoever in production.

    In development, it obviously uses gcc. But even that is close to being made totally obsolete by LLVM.

  42. Dr Loser says:

    And the funny thing is that originally it was called X/GNU/Linux in usenet announcements

    A cite for that, please, Kurks. Not because I don’t believe you. I do believe you.

    I wonder what GNU would be without the Wonder That Is X behind it?

  43. kurkosdr says:

    “only Stallman and you call Linux “GNU/Linux”

    And the funny thing is that originally it was called X/GNU/Linux in usenet announcements, which makes sense given how much of the stack is X.org code, but Stallmanites decided to jettison the “X” bit from the name, for nerd turf war purposes.

    They really want us to believe the stack known as Desktop Linux today is the OS Stallman set out to make, instead of just using its codebase among other things.

  44. Dr Loser says:

    Nope. GNU exists. Admit it.

    So does the axolotl, Robert, but nobody except you would claim that the axolotl is a necessary part of the Android operating environment.

  45. Dr Loser says:

    The GPU, which I thought we were discussing, doesn’t do *coding.

    If that is so (your claim; I’ll take you on your word) it is an almost totally dysfunctional GPU, Robert. Decoding is practically the primary purpose of GPU hardware acceleration, at least up to 2010 or slightly beyond. We’re now entering a more interesting phase.

    As such it is an example of acceleration but not of the usual graphics stuff.

    “The usual graphics stuff?” You still seem not to understand the principle behind hardware acceleration, Robert. It can be used to offload parallelizable tasks from the video stack — decoding in this case — but since 2010 (Fifi will fill in the exact year; I’m just offering a representative one for purposes of discussion) it has been used for much more.

    Here, take a look at the CUDA Roadmap. CUDA 1.3 already brings in the significant goodie of double-precision floating point operations. Beyond that there’s a welter of salivating goodness queued up.

    Which you won’t be able to take advantage of on your nasty cheap little mobile phone Servletto, because you won’t have a state-of-the-art chip like the Xeon D-series, which has something like 27 SKUs (I could look this up, but again I will leave it to Fifi).

    Are Intel using those 27 SKUs for extra cores, Robert? No, they ain’t. They’re using the majority of them for on-die GPUs.

    Why? Because Intel, unlike you, Robert, understand where to use hardware acceleration.

    Funny that. Ten thousand ignorant hardware sheeple working for Intel, and yet somehow they seem to have figured out more about how a modern server should work than the Colossus of Winnipeg.

    It’s all good. Chuckle.

  46. Deaf Spy wrote, “only Stallman and you call Linux “GNU/Linux”. Every other being on Earth calls it just Linux.”

    Nope. GNU exists. Admit it.

  47. Deaf Spy wrote, “One core of these things can easily keep up with decoding…
     
    Except that it can’t.”

    So, no video existed in the era of single-core 32-bit PCs? [SARCASM]

    It’s all about framerate and bits per frame. When TOOS was recommending 800×600 or whatever it was, single-core non-accelerated video was everywhere. Heck, in those days, I was doing 1024×768 just fine. As well as doing multi-core and accelerated decoding, I also have the option of doing the decoding and display in two passes with a buffer. I just don’t need acceleration to play video these days. I can’t even see flicker at 10-15 fps, and these accelerated thingies can do 200 fps. Why? What is the benefit of having expensive hardware idling? I’m not idling while using my clunky system.

    VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS740 [Radeon 2100]

    Does this even do acceleration?
    Comment on Tom’s Hardware: “It is a low end GPU not suitable for gaming!”

    That replaced an even more feeble unit but it was on a motherboard that died. I’ve never had any trouble playing video on either, and of course, TLW displays on an ancient VIA chipset. We just don’t need acceleration for setting pixels or decoding. It might be of value for encoding which can be slow but I rarely do that and I’m retired.

  48. DeafSpy says:

    GNU/Linux is Linux

    Seems only Stallman and you call Linux “GNU/Linux”. Every other being on Earth calls it just Linux.

  49. DeafSpy says:

    One core of these things can easily keep up with decoding…

    Except that it can’t. Robert, do you know the purpose of specialized processing units [Sarcasm]? Even the one in the Mali chip is such a specialized unit. These units, whether part of the GPU or not, are indeed “hardware acceleration”, as they take the burden off the CPU. Hardware video decoding has been supported in Atoms at least since 2009.

    Further, video *coding is exactly the area where parallelism shines at its best. Limiting it to a single core is as stupid as it gets. GPUs and these specialized units, Robert, are highly, very highly capable of parallel calculations. More so than any multimedia instructions on any CPU.

    Robert, your obsession with the “many cores almighty” only brings you mockery. Educate yourself.

  50. The Wiz wrote, “that same paper consistently refers to the OS itself As Android, not Android/Linux. The references to Linux are only to the kernel that Android uses.”

    So, GNU/Linux is Linux but Android/Linux is Android? Makes no sense to me. Take a consistent position, Wiz. Should we refer to GNU/Linux as GNU? That would be consistent but ignores reality. GNU didn’t create Linux and neither did Google. The proper thing is to give credit to both major components.

  51. Wizard Emeritus says:

    “Samsung certainly knows Android/Linux rides on Linux and tells the world about it. Samsung has thousands of Linux specialists.”

    But that same paper consistently refers to the OS itself As Android, not Android/Linux. The references to Linux are only to the kernel that Android uses.

    The name of Google’s commercial mobile OS is Android. Android/Linux exists only in the minds of FOSStards like Robert Pogson. By referring to the commercial OS known as Android as Android/Linux, FOSSTardia hopes to make it sound like Linux has somehow succeeded among the larger consumer public, when that is definitely not the case.

  52. Deaf Spy wrote, “Android. Only. No one even knows about ARM, and you are the only human being on Earth to call it “Linux”.”

    Nope. See An Overview of Samsung KNOX™

    “Samsung KNOX utilizes SE for Android (Security Enhancements for Android) to enforce Mandatory Access Control policies to isolate applications and data within the platform. SE for Android, however, relies on the assumption of OS kernel integrity. If the Linux kernel is compromised (by a perhaps as yet unknown future vulnerability), SE for Android security mechanisms could potentially be disabled and rendered ineffective.”

    Samsung certainly knows Android/Linux rides on Linux and tells the world about it. Samsung has thousands of Linux specialists.

  53. Deaf Spy wrote, ” All modern SoCs include hardware decoding and encoding, the former being very important.”

    That’s true but I wouldn’t include that in “hardware acceleration” with respect to video output. One core of these things can easily keep up with decoding. Heck, I do it on an old Atom just fine. Even TLW’s thin client can do partial video satisfactorily. It’s single-core, 32-bit, 400MHz.

    The Mali system in ARMed SoCs has a separate unit for processing video. The GPU, which I thought we were discussing, doesn’t do *coding. As such it is an example of acceleration but not of the usual graphics stuff.

  54. The Wiz wrote, “which is still not fully functional”.

    Well, it worked last year and is ready to go this year. I was going to fire it up today but rain happened. It did a great job on the weeds last year. You can easily tell the difference between areas done with it and not. I’ve repaired the broken tine and fired it up on the weekend as a test. It cranks and runs electrically. I’ve never been able to spin it fast enough with the hand-crank. It just comes to a halt at TDC or starts running in reverse… I think the increased inertia of the alternator would help that situation. I will couple the two after planting.

  55. Deaf Spy says:

    probably affects drawing windowing stuff but not video per se.

    Incorrect. All modern SoCs include hardware decoding and encoding, the former being very important.

    it’s Android/Linux on ARM

    Android. Only. No one even knows about ARM, and you are the only human being on Earth to call it “Linux”.

    It is good you have your personal bias towards ARM, but at least don’t spread lies.

    Btw, I will very eagerly await your experience with an ARM server and virtualization.

  56. kurkosdr wrote, “He will keep his existing, working Intel box and just urge others to do it”.

    I only have one VIA and one Intel box online these days. All the rest are AMD. Beast is an AMD64 Phenom II, a bit old, but it gets the job done in 95W. I’m looking to replace Beast with a Cello board and its AMD ARMed chip. TLW’s thin-client box will be replaced with an Odroid-C2, which I will probably use to replace other clients as well. Her thin client still works, but the screen refresh will be a bit faster with a gigabit NIC. No need for x86 anywhere except in a virtual machine for the print server. The non-free driver was 32-bit-only last time I checked (yep, not updated since 2009). We could emulate, or keep her thin client as the print server.

  57. wizard emeritus says:

    Never try to teach a pig to sing…
    it wastes time…
    And annoys the Pig.

    If Robert Pogson can be satisfied with a cheap Chinese tractor that he has had to spend enormous time and effort to get working at all, and which is still not fully functional, then he will be satisfied with his cheap ARM-based IT.

    Assuming he actually buys it.

  58. Deaf Spy wrote, “hardware acceleration is not necessary for a consumer device”.

    Hey! Even the esteemed Dr Loser stated that hardware acceleration is not about higher frame rates, so consumers should not care about it. Acceleration probably affects drawing windowing stuff but not video per se. The important thing for consumers is being able to redraw the screen more or less instantly. Even ancient technology could do that at low resolutions. With today’s higher resolutions, Moore’s Law has taken care of it. Even smartphones have multiple GPUs so they can be working on several screens at once and switch from one to another more or less instantly. Done. Consumers are happy. The rest is just selling points for salesmen. A lot of smartphones are sold without reference to a salesman. Folks just see what the neighbours are using, and it’s Android/Linux on ARM, good enough. It’s good enough for me and TLW too. You guys are just being silly trying to sell me a Rolls when my feet will take care of most of my need for transportation these days.

    Speaking of the weather. The damned forecast changed abruptly and I woke up to a cold rainy day. I still don’t have my sunflowers, pumpkins and peppers planted. Here’s hoping for a long growing season… I did do a lot of damage to weeds and their progeny yesterday, so while not happy I am grimly satisfied that things are going as I wish. So, my garden will be complete a week late. I’ve planted a hell of a lot of plants. I even have a surplus and have given some away. I might even have a few apples and grapes to eat this year. Amen.

  59. Deaf Spy says:

    I was just reading that ARM’s latest core is 0.65mm², less silicon.

    A smartphone chip, even per your own quote. The “billions” who love it love it because it is in their phone/TV, not on their desktop. The smartphone, Robert, is not the desktop, no matter how you twist and turn. Don’t count cores and hertz. It is a totally different world.

    You know, Robert, on this particular topic, Dr. Loser, Wizard and I act as your true friends. We try hard to point out where you go wrong, and save you from making unfortunate investments in IT.

    But hey, whom am I talking to? The man who said hardware acceleration is not necessary for a consumer device. 🙂

  60. kurkosdr, thinking choosing ARM is religious, wrote, “3) They think x86 is power-inefficient by nature, which is complete BS. Intel and AMD just prioritize the laptop and the hybrid-laptop segment and aim for performance.
    4) They think x86 is evil because its ISA is “ugly” while RISC ISAs are “beautiful”. Never mind that the whole point of RISC was that you shouldn’t care how “beautiful” or not the ISA is, only how fast your compiled code runs…”

    I’ve seen a lot of recent ARMed smartphones run plenty fast enough to please their owners. Then there’s ARM’s latest development, the A-73: “Up to 2.8GHz frequency for highest peak performance”. OK, 4GHz is higher performance, but who cares? The CPU is still idling most of the day. You could put a 1000HP engine in a VW but would it run any faster, legally? Nope. Total waste and impractical. You may notice that consumers are reluctant to buy new legacy PCs. That’s because the old ones, you know, <6 cores and ~3GHz, are fast enough, more than fast enough. TLW is using a 400MHz CPU in her thin client. Think that’s not fast enough? It’s faster than a lot of legacy PCs made in the last decade because, well, Beast was made a decade ago and Beast is fast enough. TLW does lack gigabit networking, though, so we will replace her client to get slightly faster page drawing. The new ARMed thingy will run at >1GHz, uses gigabit networking and has a faster GPU. Cost of her board + CPU + 2GB RAM: $40. ARM is good enough for consumers. Billions love it.

  61. kurkosdr says:

    praised Sun = praised Sun’s SPARC

  62. kurkosdr says:

    Why don’t you go to AMD, then? Their Bulldozer, though inferior to Xeon, will also easily trash any ARM chip.

    x86 is an evil instruction set, man. It seems weird to the average person, but the graybeards and the techno-religious Pogs of this world never got over the x86-vs-RISC wars. They just went silent for a while, around the time Sun was still selling Ultra 25 “workstations” with UltraSPARC IIIi CPUs while the x86 world had Core 2 CPUs.

    Here are the reasons those people are so in love with RISC:
    1) They think a move to RISC would undermine the software library of Windows, because most of it is compiled for x86. The fact that a simple recompile, plus some rewriting of whatever little assembly exists in some modern programs (x265), would let the x86 Windows library be ported to another ISA if the demand were there does not cross their minds.
    2) They think the fact that x86 is proprietary leads to less choice. Never mind that the situation of Qualcomm vs Mediatek, with a little Nvidia on the side (today’s situation in ARMworld), is not a lot different from Intel vs AMD with a little VIA on the side.
    3) They think x86 is power-inefficient by nature, which is complete BS. Intel and AMD just prioritize the laptop and the hybrid-laptop segment and aim for performance.
    4) They think x86 is evil because its ISA is “ugly” while RISC ISAs are “beautiful”. Never mind that the whole point of RISC was that you shouldn’t care how “beautiful” or not the ISA is, only how fast your compiled code runs…

    Generally, don’t try to argue with those people. Those are the same people that praised Sun as a good architecture, despite the fact that it mandates a useless register triplication to implement the completely useless minimum of 3 register windows (which were mandated to facilitate faster function calls and returns, but it was later proved that context switches make the feature useless).

    Pog would be better served by an AMD or Intel NUC, but for completely religious reasons he will go ahead and put Desktop Linux on an ARM board (uh-oh) and have to deal with Mali’s crappy Desktop Linux drivers. Or, on second thought, he won’t do any of that. He will keep his existing, working Intel box and just urge others to do it, just like any good religious preacher.

  63. Deaf Spy wrote, “Why don’t you go to AMD, then?”

    Overkill. I was just reading that ARM’s latest core is 0.65mm², less silicon. Saving the planet. Loving competition. It’s all good. Even the Cello uses an AMD A110x chip from 2014.

  64. Deaf Spy says:

    I do insist on being free of Intel, though.

    Why don’t you go to AMD, then? Their Bulldozer, though inferior to Xeon, will also easily trash any ARM chip.

  65. luvr says:

    All of those feeble attempts at insults that the loser is so renowned for cannot hide that he was gloating about some “unsupported OS from Red Hat” but completely missed the part about the “Early Access Program”.

    Well, it’s all good, I suppose. No, wait–it’s all “pertinent”… 🙂

  66. DrLoser wrote, “it’s still a nasty piece of outdated crap priced at $295 with only two RAM slots and an (expensive) max of 64GB RAM. Which is only available as DDR3.”

    I agree it is not current technology, and for that I should get a discount, but it gets points for availability, real soon now. The chip can do DDR4 but they put DDR3 sockets on the thing. RAM is essentially limited by density on the modules. I don’t think 64GB is the real limit. Anyway, we are nearly OK with 4GB, so 32GB should be heavenly. TLW will never run out of RAM. It’s a pity, too, about the SATA count. I do like RAID and an extra drive for bulk storage/backup. The HuskyBoard is superior in that respect. I expect I will buy the thing and the next week someone will have something better… That’s always been the way of IT. I do insist on being free of Intel, though. They were hand-in-hand with M$ in most of Wintel and they did an extra number on AMD that was never undone simply by paying a fine/settlement. I think they deserve to lose me as a customer even if I’m just a single cell of the organism.

  67. Deaf Spy wrote, ” I find it highly amusing you are ready to pay more to get less.”

    Yes! Less malware, crapware and monopoly. I’ll take it.

  68. Dr Loser wrote, ” I’m sure you can provide a detailed accounting for this magic. No, wait, I’m sure you can’t.”

    I think I still have the spreadsheets for that project. Indeed, we had a quotation of ~$100k for 100 Wintel PCs and no servers. With the savings we bought more thin clients, six servers, scanners, cameras, spare storage and RAM, and 24+2 network switches. This was for a large school with two server rooms. The school was more than 200m long, so one room would not do. One server did file/auth and the others were terminal servers.

  69. Dr Loser says:

    As always, I’m happy to help you feel good about yourself.

    No educator worth his salt would feel good about himself when he can’t drive an obvious and completely unarguable point through the incredibly thick skull of even his most useless student, Squirrel Nutkins.

    Now, do you have anything pertinent to say on the present subject?

    You know, China, Moore’s Law, ARM servers, and so on?

    Please don’t disappoint me a fourth time, little Nutkins.

  70. luvr says:

    As always, I am glad to explain these things to lesser mortals of very little brain.

    As always, I’m happy to help you feel good about yourself. 🙂

  71. Dr Loser says:

    And, let’s see, how many RAM slots has it got? Two? No future expansion, and right now you’re looking at an expensive 32GB or an even more expensive 64GB. In the latter case you’re not going to get any change out of $500, even if you buy the cheap and nasty stuff.

    Even with the chip folded in, Robert, it’s still a nasty piece of outdated crap priced at $295 with only two RAM slots and an (expensive) max of 64GB RAM. Which is only available as DDR3.

    I know you are desperate to live in the stone age, Robert. But why would you want to pay $295 for a pathetic mobo/cpu combo and roughly $300 for RAM that you will never be able to upgrade?

    This is one of those things that might work for you, because you are a self-deluding cheapskate.

    But it’s never in a million years going to work for anybody else.

  72. Dr Loser says:

    Sometimes, it’s “better to keep your mouth shut and be thought a fool than open it and remove all doubt”.

    I recommend your sentiment there, Squirrel Nutkins. You should try applying it to yourself.

    The fact remains that I pointed out to Robert that his cite admitted that Red Hat does not, right now, officially support their ARM version. A matter of simple clarification. I assume you are willing to accept clarification.

    And immediately following that, I pointed out that it might never be officially supported.

    It’s not clear why you should have an issue with my comments here. At no point did I say that it wouldn’t be officially supported.

    I merely pointed out that, until it is officially supported, no sane IT manager will touch it with a barge-pole.

    As always, I am glad to explain these things to lesser mortals of very little brain.

  73. wizard emeritus says:

    Check your premises. It’s a good bet that not everybody here knows what an EAP is, Mr L.

    But then again, if pointing out this kind of minutiae is the best you can contribute, I guess you have to take what you can get, eh?

  74. luvr says:

    It is not supported, period.
    Well, it isn’t, is it?

    Of course it isn’t—by definition, for an “Early Access Program”. Had you understood that, then you wouldn’t have felt the urge to stress the fact. Sometimes, it’s “better to keep your mouth shut and be thought a fool than open it and remove all doubt”.

  75. Dr Loser says:

    No, you didn’t. You said it is not supported, period.

    Correct, Sweetikins. It is not supported, period.

    Well, it isn’t, is it?

  76. Dr Loser says:

    The servers at Easterville cost $1-2K and we bought six with the money we saved by not buying Wintel.

    This is the first time you have ever mentioned buying servers at Easterville, Robert. Mostly you talk about either in-stock ancient PCs (I sympathise) or else the Schools Discount Buying thing … I forget the name.

    So, this being the first time you’ve bothered to mention it …
    1) I assume that these “servers” were either AMD or Intel. More info, please.
    2) The six in question appear to have cost between $6000 and $12000 on your figures. Doesn’t seem sufficiently parsimonious to me. Why not 6x$1000?
    3) This sum apparently magicked itself out of the air by “not buying Wintel.” I’m sure it did. I’m sure you can provide a detailed accounting for this magic. No, wait, I’m sure you can’t.
    4) Six servers seems a lot for a single school, Robert. What on earth were these servers doing? Surely not Internet traffic, given your evidence on the (understandably) lousy comms.
    5) Why not just buy two servers, which again is probably one more than the typical school needs, and make them beefy enough to deal with the workload? 2x$3000 is at the bottom of your costing here. Oh, and for that sort of money, you get Microsoft educational software and the rest thrown in for free.

    Frankly, I’m astonished at your ignorant palaeolithic profligacy here, Robert. Any other school district would have impeached you for Gross Misuse of Public Funds.

  77. Deaf Spy says:

    I don’t care how fast they are, ARM will have a lot of idling so why pay more to idle faster?

    Because it will also not idle faster. It is this “not idling” that actually counts.

    Anyway, I find it highly amusing you are ready to pay more to get less.

  78. DrLoser wrote, “When was the last time you spent $295 on any bit of computer tech, Robert?”

    Two instances of Beast cost $1K+. The servers at Easterville cost $1-2K and we bought six with the money we saved by not buying Wintel.

    The ~$300 price for Cello boards is OK considering I get a CPU and a motherboard. Even the wimpiest x86 mobos cost ~$100. You can pay ~$300 and much more for an Intel CPU alone. I don’t care how fast they are, ARM will have a lot of idling so why pay more to idle faster? I could use a couple more SATA connections or another Ethernet jack but really those would be just unnecessary luxuries.

  79. luvr says:

    Dr Loser claimed, “I simply highlighted the fairly obvious fact that the thing is not yet supported.”

    No, you didn’t. You said it is not supported, period. You just cannot face the prospect of having to admit that you lack the reading comprehension skills to understand that we were talking about an Early Access Program.

    But then again, you’re always right, aren’t you? How could you ever be wrong?

    BWAHAHAHAHAHA! What a loser!

  80. Dr Loser says:

    RedHat has a new release every few years, like other distros. 7 was released in 2014. Don’t you think ARM may well be fully supported in 2016?

    Since you ask, Robert? No.

    One of us is going to have to eat crow on 1st January 2017. Guess which of us that will be?

  81. Dr Loser says:

    Never mind, the thing apparently “successfully booted.” (No indication given as to boot time.)
    Ship it!

  82. Dr Loser says:

    It never ceases to amaze me, Robert, that you — a final-career teacher in the Far North, which incidentally I see as one of your more admirable traits — seem to feel that you are on a par with the likes of myself or the Wiz when it comes to large-scale server systems, whether in data centers or elsewhere.

    We have both spent roughly 30 years of our professional lives in these environments. This accumulated experience does not make us right, obviously. But at the very least, we have faced up to the unique challenges of running server systems.

    You, Robert? You’ve got bupkis, and that bupkis is based on precisely no years of relevant experience whatsoever.

    The rest of the Peanut Gallery have even less than bupkis, as far as I can see.

  83. Dr Loser says:

    Lots of folks were really constricted by what they could put into their existing server-rooms.

    Name one, Robert.

    Oh, and … spoiler alert … don’t try naming Facebook as one of them.

  84. Dr Loser says:

    He must have missed the fact that we’re talking about “Red Hat’s ARM Partner Early Access Program” here.

    Every day, in every way, your level of reading comprehension gets worse, Luvr.

    No, I did not. I simply highlighted the fairly obvious fact that the thing is not yet supported.

    And may never be supported.

  85. Dr Loser says:

    What do you think will happen to the price if the HuskyBoard ever gets to market with very similar technology?

    Nothing. That’s what. You do realise that this idealized piece of crap of yours (Cello) is basically exactly the same thing as the HuskyBoard, don’t you? The only difference is that they have pre-announced a price and an availability.

    And if you think $295 is “cheap” for that level of tech (we shall agree to ignore the capabilities of the CPU for this discussion), then …
    … well …
    … er …
    When was the last time you spent $295 on any bit of computer tech, Robert?

    Let alone a motherboard without a chip or RAM.

  86. The Wiz wrote, “If the best that ARM can do when scaled up to so-called server class is to offer the same 45W that tech like Intel’s Xeon-D can offer, then why do I, Mr. Server User, have to go through the headaches and risks of using a new technology, if all I get is what might in the end be minimal power savings? It would seem that an Intel who keeps coming up with more power-efficient x86-64 technology is going to have the edge that “good enough” brings.”

    Hmmm… lower costs? Lots of folks were really constricted by what they could put into their existing server-rooms. ARM holds the promise of getting more cores into an existing space and reducing the cost of future server-farms. It’s all good. I’ll grant you that the early releases of ARMed servers haven’t shown much in savings, but the fact is that the cost of ARM Inside is less than Intel Inside when things are mass-produced, because ARM takes a tiny tax while Intel is greedy. ARM also still has advantages in silicon usage at equal resolutions. ARM will become available at ~10nm sooner or later, so Intel’s advantages in fab tech are diminishing. Intel at the moment can make cores that run at higher clocks, but ARM can make tinier cores that are cheaper. It’s a tradeoff, and ARM certainly has a major place in data-centres and in my home.

    At the moment, the only real ARMed server I can buy is the Lemaker Cello, which costs ~$300. What do you think will happen to the price if the HuskyBoard ever gets to market with very similar technology? These boards were both aimed at prototyping and software developers. They will certainly work for me, and the prices will come down like any other mass-produced technology. I expect there will be newer-tech models in 2016, which will give me choice and lower prices. It’s all good.

  87. luvr says:

    Dr Loser obviously cares whether or not the OS is supported by Red Hat (as does every last corporate data center buyer).

    He must have missed the fact that we’re talking about “Red Hat’s ARM Partner Early Access Program” here. In other words, it is clearly not considered ready for prime time at this stage. Usually, “Early Access” does mean that the intention is there to somehow, some day get it ready. Whether, or when, that day comes, will become clear in due course. Just wait and see.

  88. Dr Loser says:

    What don’t you understand about “early access?”

    What don’t you understand about “bleeding edge,” Robert?

    Red Hat and the like can afford to splurge R&D and marketing on something that may never work properly.

    Can you? Oh, wait, you bought that Chinese tractor, didn’t you? I’m beginning to see a pattern here.

  89. Dr Loser says:

    I simply adore that “successfully booted” boast, btw.

    What next? Successfully running a single instance of Apache?

  90. Dr Loser wrote, “You may not care whether or not the OS is supported by Red Hat, Robert, but every last corporate data center buyer cares.”

    What don’t you understand about “early access”? This thing is coming down the pipe and RedHat and their customers do care about it.

    RedHat:“June 22, 2015
     
    Today, we are making the Red Hat Enterprise Linux Server for ARM Development Preview 7.1 available to all current and future members of the Red Hat ARM Partner Early Access Program as well as their end users as an unsupported development platform, providing a common standards-based operating system for existing 64-bit ARM hardware. Beyond this release, we plan to continue collaborating with our partner ISVs and OEMs, end users, and the broader open source community to enhance and refine the platform to ultimately work with the next generation of ARM-based designs.”

    RedHat has a new release every few years, like other distros. 7 was released in 2014. Don’t you think ARM may well be fully supported in 2016?
    “Red Hat does not generally disclose future release schedules.”

    See past releases

  91. Dr Loser says:

    Why are you even looking at a piece of crap like the Cello, Robert?

    To start with, the motherboard is slated to cost $295, which ain’t cheap. You can get a far better Intel/AMD motherboard for that.

    And, let’s see, how many RAM slots has it got? Two? No future expansion, and right now you’re looking at an expensive 32GB or an even more expensive 64GB. In the latter case you’re not going to get any change out of $500, even if you buy the cheap and nasty stuff.

    And, well, DDR3? Way to go on future-proofing your investment, Pog.

  92. Dr Loser says:

    What this means is that Red Hat’s unsupported ARM operating system is now available for partners through Red Hat’s ARM Partner Early Access Program (PEAP).

    You may not care whether or not the OS is supported by Red Hat, Robert, but every last corporate data center buyer cares.

    “More fool them,” you might say. Unfortunately, you need those corporate buyers, in order to drive the price down. Without those millions of unit sales, a single miser in Manitoba isn’t going to be able to sponge off the market in his accustomed fashion.

    You need to stop reading marketing bumf, btw. Or at least you need to treat it with an appropriate degree of scientific caution.

  93. Wizard Emeritus says:

    “So, 2016-2017 will see lots of products become available for ARMed servers”

    The sad triumph of hope over experience, otherwise known as wishful thinking. There is a very good chance that in the end ARM is going to get bitten by the same “good enough” that you keep throwing in our faces, Robert Pogson. If the best that ARM can do when scaled up to so-called server class is to offer the same 45W that tech like Intel’s Xeon-D can offer, then why do I, Mr. Server User, have to go through the headaches and risks of using a new technology, if all I get is what might in the end be minimal power savings? It would seem that an Intel who keeps coming up with more power-efficient x86-64 technology is going to have the edge that “good enough” brings.

    It’s all good to me.

  94. Deaf Spy wrote, “If ARM themselves don’t have servers on the roadmap, it looks like ARM servers are a mostly experimental feature.”

    ARM: “The Cortex-A72 high-performance processor is designed for a wide-range of applications which require the highest performance, together with the advantages of ARM’s power efficient architecture. Key target markets include: servers”

    Then, there’s Qualcomm and RedHat doing it: “QTI and Red Hat successfully booted Red Hat Enterprise Linux Server for ARM on QTI’s Server Development Platform (SDP). What this means is that Red Hat’s unsupported ARM operating system is now available for partners through Red Hat’s ARM Partner Early Access Program (PEAP). This could be installed on QTI’s SDP to enable faster development of new applications for enterprise servers using ARM SoCs.”

    So, 2016-2017 will see lots of products become available for ARMed servers. The fact that I, an old retired guy, can pre-order one today and Debian has all but a few packages ported to ARM64 is an indication that the log-jams are gone. Debian’s package list, packages.xz is 7.2MB for ARM64 and 7.5MB for AMD64. Take a hint.

    Oh, yes, why would they talk about servers at a press conference about mobility?
    “11:09PM EDT – Today’s talk is focusing on mobile”
    Perhaps you should watch the press conference on servers given by Applied Micro.
    “I am thrilled to share with you that we have made tremendous progress in X-Gene 3 development. X-Gene 3 is targeted at over six times the performance of currently shipping X-Gene products, and very competitive with mainstream high-end Xeon processors for hyperscale workloads”

    Yes, it sounds like ARMed servers have no legs… [SARCASM]

  95. Deaf Spy says:

    Interesting to see what ARM’s message is:
    http://www.anandtech.com/show/10371/computex-2016-arm-press-conference-live-blog

    Is it only me that can’t even see the word “server” anywhere?

    If ARM themselves don’t have servers on the roadmap, it looks like ARM servers are a mostly experimental feature.

  96. Deaf Spy says:

    Wizard Emeritus, I have one of the Gigabyte boards…

    The only thing you do have, Fifi, is syphilis. Prove me wrong, please: board brand, model and serial number; the CPU you put on it; what devices you plugged in; the OS you installed, version and all.

  97. oiaohm says:

    http://h20195.www2.hp.com/v2/getpdf.aspx/4AA5-0070ENW.pdf
    The company to look at for arm64 usage at PayPal is HP, with their Moonshot solutions. The Intel Atom option in Moonshot has been fully replaced by arm64 because it is better. HP is also starting to find that even Xeon performance-per-watt is not good enough.

    These are ARM developers who KNOW how to deal with hardware, and they are struggling to get Gigabyte’s server board to work. Gigabyte’s board is still one of the few ARM boards that is clearly made for an imagined server-class workload, Robert Pogson. And it’s clearly not ready for prime time.
    Wizard Emeritus, I have one of the Gigabyte boards; they are not hard to set up. A few things just catch you out. For instance, none of the network cards has a MAC address set out of the box. The same is true in Moonshot systems using Intel Xeon chips; it is a software-configured class of network hardware. People coming from general PCs and other devices expect things like MAC addresses to be preset by firmware. If you had read and actually understood the thread, Wizard Emeritus, they stubbed their toe on an ordinary quirk, not an ARM-only one.
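    The usual workaround is to assign a locally-administered MAC address yourself. A minimal sketch, assuming the standard iproute2 ip tool is present; the interface name eth0 and the address are examples only, and the 02: prefix marks the address as locally administered:

        # Sketch: give a NIC that ships without a MAC a locally-administered one.
        # Assumes iproute2; "eth0" and the address are placeholder examples.
        import subprocess

        subprocess.run(
            ["ip", "link", "set", "dev", "eth0", "address", "02:00:00:00:00:01"],
            check=True,  # raise if the command fails; requires root
        )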

    http://blog.hypriot.com/post/getting-docker-running-on-a-high-density-armv8-server-from-hisilicon/
    HiSilicon does not publish much on power usage, but the density is quite insane. A full HiSilicon D2 board is 32 cores. These are TSMC FinFET parts, which means they are at about the same node as Intel Xeons (16nm vs 14nm), and they kick the living heck out of Xeons in performance per watt.

    The problem is getting your hands on a HiSilicon D2. It can only be ordered from a site that is entirely in Chinese.

    So there are quite a few arm64 server boards out there, made by different makers; the problem is that most of these makers don’t give a rat’s about us English speakers. If you are worried about power usage, look for the boards with a low nm number: as soon as the node gets close to what Intel is using, Intel always loses on performance per watt.

    The issue with a lot of the arm64 boards is not that Intel chips are better, but that it is hard for English speakers to get their hands on the good-quality arm64 server class. A Xeon costs about 1.7x more than arm64 for the same performance, so the arm64 server boards are on average cheaper, if you can in fact work out how to buy them. Then it is a question of whether your workload will in fact run on arm64.

    I am hoping that over the next 12 months companies like HiSilicon make deals to have suppliers in languages other than Chinese.

  98. The Wiz wrote, “These are ARM developers who KNOW how to deal with hardware, and they are struggling to get Gigabyte’s server board to work.”

    That’s a particular issue with UEFI and CentOS. Debian is widely used on the smaller boards with few hassles. Software developers are often not the most familiar with hardware, e.g. Linus…

    RedHat may well have had a particular concern about keeping networking off for reasons of security. The boards I’m looking at are much more similar to the common desktop board: minimum hardware and cost. The Cello, for instance, uses Cortex-A53 cores but with bigger caches and Ethernet. Otherwise they are rather like a smartphone minus telephony. There are oodles of boards using what is essentially a smartphone or controller chip. The chief differences I require are more RAM, cache, SATA and Ethernet, small tweaks. ARM produces a flexible design, but obviously folks are providing what ships by the billion units rather than by the million. It will come but we have to wait a bit. I have other things to do at the moment: cultivation, planting… Next month I will purchase some IT, most important things first: new mouse, new hard drives, and an Odroid-C2 for TLW.

    BTW, I solved two hardware problems today. I was showing a relative how hard the Chinese tiller was to start by hand and discovered the electric starting now works. I must have overheated the starter last time because I had fuel problems. Further, a thrashing of Beast was happening too often. This time I discovered the /home/ partition had filled up and Firefox went nuts. I’ve never seen that before. I thought it was swapping unnecessarily but it was just caching. Anyway, some housekeeping and newer/bigger/faster drives should help. I’ve also decided to keep Beast’s keyboard for the ARMed revolution. There just isn’t anything out there that I like for less than $200 and it works flawlessly except for worn keycaps. I’ll use a PS/2-to-USB dongle.

  99. Dr Loser wrote, “You appear to be under the impression that, by selecting such a board, you would be wasting your valuable Loons on “features you don’t need.””

    If you look at that board you can see a rather high parts-count; even discrete devices are scattered all over. There is a cost to buying and installing and shipping those parts. The only justification I can see for that would be future multi-socket setups, where they didn’t want to duplicate those functions per socket. Either way, it’s a waste of my resources to buy one even if it were for sale. The Cello or HuskyBoard, if it ever goes on sale, is a much better fit. They could use two more sockets for RAM, or a few more SATA connectors, or a second gigabit NIC, but otherwise they are very close to what Beast provides, in a much nicer package.

  100. Dr Loser says:

    It has features I certainly don’t need like built-in management software for data-centres and 10gB Ethernet.

    A word of advice, Robert. If, at some point in the future, you stumble across the ARMed server board of your dreams at an acceptable price, do not cut your nose off to spite your face. Do not spurn that opportunity, simply because the board has features you don’t need “like built-in management software … and 10gB Ethernet.”

    You appear to be under the impression that, by selecting such a board, you would be wasting your valuable Loons on “features you don’t need.” In actual fact those features are, to all intents and purposes, free.

    Why? Because those features exist precisely because the manufacturers aim to sell their board into data-centers. Their development cost is amortized across the tens of thousands of boards they expect to sell. To customers who expect and demand those features in a server-class motherboard.

    It’s precisely the same reasoning you would use in advocating FLOSS, as it happens. Chuckle.

    Pick a board that doesn’t have these, or similar, features, Robert, and no matter what price it comes at, you won’t have a server board.

    You’ll have a glorified mobile phone, with more RAM than it can possibly ever make use of.

    The words “useless bloat” spring to mind here, in hardware terms.

  101. Dr Loser says:

    These are ARM developers who KNOW how to deal with hardware and they are struggling to get gigabytes server board to work.

    What, are you saying that Robert and oiaohm:
    a) are not developers
    b) do not know how to deal with hardware and
    c) would struggle to get their GB server board to work?

    Perish those thoughts, Wiz. Erase them from your memory.

  102. Wizard Emeritus says:

    “The price is a bit high for my project, >$1K, and it has a high parts-count compared to many others. It surely would make me RAM-rich however.”

    It’s nice to want, Robert Pogson, but the fact is that it is going to be a long time, if ever, before ARM motherboards with the features that you want are available at a price your “budget” will tolerate. The fact is that those “extra” features, 10GbE and network-connected Baseboard Management Controllers, are standard on server motherboards these days – the consumers of the servers that contain these motherboards demand them and are willing to pay for them. They are going to be part of the package whether you want them or not.

    Don’t want to pay for the “extras”? Then you get to wait for the possible time when tech like that in the MP30-AR0 “trickles down” to the low end, or the board itself is discontinued and sold at fire-sale prices.

    Of course, you could come off your high horse about Intel and consider one of the wealth of x86-64 motherboards, any one of which is power-sipping in comparison to the ancient white-box crap that you call Beast. Pair one of these motherboards with a modern energy-efficient case and power supply, and you’ll be good for the next 10 years.

    Or you could make do with one of the myriad cheap ARM developer boards equipped with what one ARM developer referred to as “glorified cell phone chips”.

    Either way, it’s all good for me.

  103. Wizard Emeritus says:

    “… and get more and more dismissive as the thread continues.”

    Oh, it gets better when you google the Gigabyte motherboard and this thread from the ARM developers’ group pops up.

    https://lists.centos.org/pipermail/arm-dev/2016-February/001653.html

    These are ARM developers who KNOW how to deal with hardware, and they are struggling to get Gigabyte’s server board to work. Gigabyte’s board is still one of the few ARM boards that is clearly made for an imagined server-class workload, Robert Pogson. And it’s clearly not ready for prime time.

    And in the end the vaunted ARM chip seems to consume as much power (45W) as a Xeon-D. That is not good for ARM’s adoption for general use.

    Then again, we will have to see, won’t we?

  104. Wizard Emeritus says:

    “X-Gene does have a market. PayPal, for instance, has deployed them on real workloads.”

    Did you actually read beyond the opening paragraph, Robert Pogson? I especially liked this paragraph.

    “It’s unclear how big the ARM deployment is at PayPal’s data centers. Gopi avoided directly answering requests to clarify this on an earnings call with analysts Tuesday, on which he announced the deployment, saying only that PayPal had “deployed and validated X-Gene,” Applied Micro’s 64-bit ARM processors, and that the servers were “running a real workload.””

    If PayPal is USING ARM servers, then why didn’t Gopi give details? Could it be that once again the vaunted ARM value disappears when it’s handed a workload beyond a tablet or smartphone? But I guess you are right, ARM is being used at PayPal… somewhere.

  105. Dr Loser wrote, “The day this … thing … gets used as a “standard server” is the day your brain becomes unglued, Fifi.”

    Silly! Gigabyte wouldn’t develop something like this without a market. This is one of the boards I considered but there are a couple of problems:

    • The price is a bit high for my project, >$1K, and it has a high parts-count compared to many others. It surely would make me RAM-rich however.
    • It has features I certainly don’t need like built-in management software for data-centres and 10gB Ethernet. I don’t plan on having anyone else on the LAN with which Beast3 could communicate at those kinds of speeds. While I would like to have two such beasts on the LAN, that is way above my budget, ~$1K total for clients and servers.

    X-Gene does have a market. PayPal, for instance, has deployed them on real workloads.

  106. Dr Loser says:

    And, we present: Today’s boffo demonstration that Fifi is completely unable to read any of the detail behind a link he posts.

    Wiz: Don’t make me laugh. You can’t even get a decent amount of memory for using ARM as a standard server, let alone as a hypervisor host.

    oiaohm: http://techreport.com/news/28014/gigabyte-latest-microatx-board-has-an-eight-core-armv8-soc
    Lie through your teeth much, Wizard Emeritus? That board is an 8-core arm64 with 128GB RAM, and that is an ARM SoC board.

    You didn’t read the comments, did you, Fifi? They start with the following:

    If they sell any of these, I’d be genuinely curious to know who buys them and why.

    … and get more and more dismissive as the thread continues.

    The day this … thing … gets used as a “standard server” is the day your brain becomes unglued, Fifi.

  107. Wizard Emeritus says:

    “So here is Wizard Emeritus admitting that the stuff he is typing is wrong and not wanting to be corrected. That is how I am going to take it every time you do this.”

    Go right ahead and correct me. I have no intention of responding to a lying fraud like you, from your alleged expertise in IBM’s SAN Volume Controller to your idiotic excuses for why you could not produce an integrated FOSS Symphonic Orchestral Sound Library, even by writing a script to do so.

    Of course you will use my words and “behavior” to inform me that you will not do it because I am not worthy or some such nonsense. All to hide the reality that you lack the skill to pull it off.

    In the end you are capable of nothing but trolling any poster here whom you decide needs “correcting”, including this site’s owner.

    Oh, and thanks for the reference to the ATX ARM server motherboard. You should have read the PDF more thoroughly. It has some quite humorous points.

  108. oiaohm says:

    And, again. Even with 64GBs of RAM, running three or four virtual machines will dramatically decrease the caching. Fact. Because each VM will make an in-memory copy of the core OS, wasting space very efficiently.
    Deaf Spy, that is not exactly true.
    The first question: is the host for your VMs Linux? Linux-hosted solutions include Xen- and KVM-based setups (ESXi/vSphere has an equivalent page-sharing feature of its own). Why this matters is the next question.
    The second question: is KSM (Kernel Samepage Merging) turned on?
    The third question: are the guests in fact different operating systems?
    In future there will be a fourth question (see http://lwn.net/Articles/686808/): are you encrypting RAM, which largely prevents page sharing?

    Why all these questions? The KSM effect. If the OS is in fact the same in all the VMs, each new instance only uses the RAM in which it differs from the other instances. Debian, Fedora, Red Hat and Ubuntu instances share identical binaries in places, and some of the in-memory structures at the core of the kernel are still identical. So, about that “in-memory copy of the core OS”: each VM wants to see an in-memory copy of the core OS, but under a Linux-based solution each one does not have to eat a full serving of memory. It also gets interesting when you start four identical VM instances from the same suspended state on a Linux-based host: memory consumption is far less than you would first expect.
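    To make the KSM effect concrete, here is a minimal sketch (mine, for illustration) that reads the kernel’s standard counters under /sys/kernel/mm/ksm/; the “saved” figure is an approximation based on the pages_sharing counter:

        # Report whether KSM is running and roughly how much RAM it is saving.
        # Assumes a Linux host exposing the standard /sys/kernel/mm/ksm files.
        import os

        KSM = "/sys/kernel/mm/ksm"

        def read_counter(name):
            with open(os.path.join(KSM, name)) as f:
                return int(f.read().strip())

        running = read_counter("run") == 1
        shared = read_counter("pages_shared")    # merged pages in use
        sharing = read_counter("pages_sharing")  # extra references folded into them

        page_size = os.sysconf("SC_PAGE_SIZE")
        saved_mib = sharing * page_size / 2**20  # approximate RAM saved by merging

        print(f"KSM running: {running}")
        print(f"~{saved_mib:.0f} MiB saved ({sharing} duplicate pages merged into {shared} shared pages)")

    Run that on a host carrying several near-identical guests and the saving is usually obvious; on mixed-distro hosts it is far smaller, which is the point above.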

    “Wasting space very efficiently” requires a few things to be true first.

    What you need, Robert, is containers. Provided you can learn how to use them.
    No, you need to watch Intel’s video on identical-core VMs versus containers; what is surprising is how little gain in memory usage there is from containers versus VMs done this way (yes, the CoreOS experiment). The KSM effect on container memory usage is important to take into account when allocating workloads. So if you have a stack of containers based on Ubuntu and a stack based on CentOS, it is a good idea to place the Ubuntu ones on one server and the CentOS ones on the other, and not mix them unless you have to. Why? You will fill up on RAM more slowly. Once memory is encrypted and deduplicating it is not possible, running a VM becomes a fast way to lose memory. Running a VM under Windows or OS X is a fast way to lose memory as well, because they don’t have the deduplication.

    Dr Loser, it is not only Robert who is being way too simple; Deaf Spy is as well. The problem is that Deaf Spy is using logic that does not apply generically when the OS on the bare metal is Linux.

    Oh, and oiaohm, don’t bother trolling me on this subject until you are ready with my FOSS Sound Library, or my download script for one.
    So here is Wizard Emeritus admitting that the stuff he is typing is wrong and not wanting to be corrected. That is how I am going to take it every time you do this.

    As far as your theoretical inexpensive ARM SoC configuration is concerned… Don’t make me laugh. You can’t even get a decent amount of memory for using ARM as a standard server, let alone as a hypervisor host.
    http://techreport.com/news/28014/gigabyte-latest-microatx-board-has-an-eight-core-armv8-soc
    Lie through your teeth much, Wizard Emeritus? That board is an 8-core arm64 with 128GB RAM, and that is an ARM SoC board. Yes, it supports Linux with Xen or KVM, no problem. ARM as a standard server and hypervisor host is an option. Yes, that is a 12-month-old board that you can still buy. So that was a line for me not to comment on, because Wizard Emeritus knew he was lying through his teeth. New boards with more powerful arm64 processors are due out.

    Actually, it’s only 45W. Of course, you have to give up your bigotry and use Xeon-D… The old arm64 board is a flat 45W. Newer arm64 chips will be better, due to the lower nm being used.
    http://www.amd.com/Documents/A-Hierofalcon-Product-Brief.pdf
    Likewise, all the new AMD arm64 parts top out at 32W max, but those are still on quite a large node. There is something else important to consider: if you compare programs built for arm64 versus x86, the awkward fact is that, on average, the arm64 build of the same program is smaller. arm64 does not have the same memory-alignment requirements for performance that x86 does, so you can make arm64 executables smaller.

    Something to remember: Xeon D parts are 14nm chips while these arm64 chips are 32nm. Given the difference in node, the Xeon D should be leading on power usage by a large margin, but in fact it is losing by quite a margin. This is why I see Intel in big trouble when we hit the nm wall and everyone ends up producing at the same node. Intel will have to pull a rabbit out of the x86 hat somewhere, instead of the current rabbit of having very advanced fabrication compared to everyone else. So an arm64 chip two generations behind in nm still beats the Intel chip on power usage. Please note the Gigabyte board’s chip is 40nm at 45 watts, so three generations behind in production tech and you see the same power usage; that should not happen.

  109. Dr Loser says:

    To further explain Deaf Spy’s comments, Robert, it is quite possible for any given box (no matter how ancient) to “process” 20GB per day. That’s roughly 230KB per second, which isn’t really very much. I’ll be fair and allow for spikes, say on a 10:1 ratio. Even so, 2.3MB per second isn’t really very much.
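    A quick back-of-the-envelope check of those figures, as a sketch:

        # 20 GB per day expressed as a sustained transfer rate.
        SECONDS_PER_DAY = 86_400
        sustained = 20e9 / SECONDS_PER_DAY           # bytes per second
        print(f"{sustained / 1e3:.0f} KB/s")         # ~231 KB/s sustained
        print(f"{10 * sustained / 1e6:.1f} MB/s")    # ~2.3 MB/s at a 10:1 spike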

    The thing is, this depends upon work-load.

    Some rancid little Beastlet of a server will quite happily take advantage of even small quantities of RAM, plus caching, and chuck out static pages every second, all day. The bottleneck here is the Ethernet pipe, not the server.

    Even more complicated WWW workloads, involving dynamic web pages, can crank out similar numbers.

    Once you move to workloads that involve more than simply serving up an HTML page, however, you quickly get into trouble. Why? Because, as Deaf Spy observes, your server is doing a whole lot fewer reads and a whole lot more writes.

    This turns out to be very expensive in hardware terms. All the way down to cache-busts.

    Your simplistic model of how servers work only applies in very specific circumstances. And “Moore’s Law” won’t help you with the rest.

  110. Deaf Spy says:

    processes much more than 20gB of data per day.

    does not equal

    20 GB of host writes per day

    Speaking from experience: I have been the proud owner of a 120 GB Intel 530 since 2012. I have my OS, all apps, and the page file there. Four years later, the drive is in perfect health.

    And, again. Even with 64GBs of RAM, running three or four virtual machines will dramatically decrease the caching. Fact. Because each VM will make an in-memory copy of the core OS, wasting space very efficiently. Well, you can limit the RAM of the VMs and rely on the guest OSes swapping out the files they don’t use very often, but that is then as stupid as it can get, because you can cause the host’s cache to cache what the guest OS has already swapped out.

    What you need, Robert, is containers. Provided you can learn how to use them.

  111. Wizard Emeritus says:

    “The ARMed server SoCs are up to the job of processing as much as AMD64 idling all day long.”

    Except that things change when you do virtualization. When I was working, I regularly ran 3-4 server VMs on a workstation with 63GB RAM and kept the CPU pumping at 30-50%, while still having enough horsepower to do my personal work. If you do not care about doing work directly on the physical hardware, you could run 8-10 VMs in the same memory space and probably get up to 50-75%.

    As far as your theoretical inexpensive ARM SoC configuration is concerned… Don’t make me laugh. You can’t even get a decent amount of memory for using ARM as a standard server, let alone as a hypervisor host.

    Oh, and oiaohm, don’t bother trolling me on this subject until you are ready with my FOSS Sound Library, or my download script for one.

  112. Wizard Emeritus says:

    “Why would I need an x86-64 CPU if I’m running ARMed software? The ARMed server SoCs are up to the job of processing as much as AMD64 idling all day long. I would rather use 15 watts of busy RAM rather than 95W of idling CPU.”

    Actually, it’s only 45W. Of course, you have to give up your bigotry and use Xeon-D…

    As far as ARM servers are concerned, you mean you are going to stick a crowbar in your wallet and pay for one?

    I’ll believe it when I see it.

  113. Wizard Emeritus says:

    “Beast may not be very busy but it easily processes much more than 20gB of data per day.”

    I’ve been running a 256GB drive in my Dell portable since 2011. That machine has been running flat out virtualizing 5-6 VMs. It is still good today.

    You also have SSHD hybrid drives, which give you the best of both worlds.

  114. Wizard Emeritus wrote, “even running a KVM environment needs both lots of RAM as well as the horsepower of a beefy x86-64 CPU”.

    Why would I need an x86-64 CPU if I’m running ARMed software? The ARMed server SoCs are up to the job of processing as much as AMD64 idling all day long. I would rather use 15 watts of busy RAM rather than 95W of idling CPU.
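    The wattage gap is easy to price out, too. A rough sketch, using an assumed example rate of $0.10/kWh (not a quoted tariff):

        # Rough annual electricity cost of an always-on part.
        # The $0.10/kWh rate is an assumed example price.
        def annual_cost(watts, rate_per_kwh=0.10):
            return watts / 1000 * 24 * 365 * rate_per_kwh

        print(f"95 W around the clock: ${annual_cost(95):.0f}/year")  # ~$83
        print(f"15 W around the clock: ${annual_cost(15):.0f}/year")  # ~$13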

  115. The Wiz wrote, “256GB can be had for cheap”.

    $560 new? That’s not cheap. That’s more than the new Beast plus 32gB RAM. $85 used? That’s risky. I’m still not convinced of the reliability of these things.

    see http://mobile.enterprisestorageforum.com/storage-hardware/ssd-vs.-hdd-performance-and-reliability-1.html (2014)

    and

    “Endurance rating 80GB Intel SSD DC S3500 Series: 24.6 GB of host writes per day, based on web pricing $102.00 (March 2015)
     
    Endurance rating 80GB Intel SSD 530 Series: 20 GB of host writes per day, based on web pricing $80.75 (March 2015)”

    Intel rates their units at ~20gB per day for a five-year lifetime. SSDs may rock at performance but they stink at longevity. My 512gB hard drives are 10 years old and only one has failed.
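    Put another way, that rating implies the following lifetime total (a sketch of the arithmetic):

        # Total writes implied by a 20 GB/day endurance rating over 5 years.
        gb_per_day = 20
        total_tb = gb_per_day * 365 * 5 / 1000
        print(f"~{total_tb:.1f} TB of host writes over five years")  # ~36.5 TB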

    Linux caches the frequently used files in RAM anyway, so I don’t see the need for SSD with RAM so cheap. I paid >$100 CDN for Beast’s current 4gB. The new Beast will be much more capable with 32gB RAM and a few 2TB hard drives.
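    You can check how much RAM Linux is devoting to the page cache at any moment. A minimal sketch, assuming the standard /proc/meminfo layout:

        # Show how much RAM is currently used as file cache on Linux.
        # /proc/meminfo reports values in kB, e.g. "Cached:  1234 kB".
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                fields[key] = int(rest.split()[0])  # numeric value in kB

        cached_gib = fields["Cached"] / 2**20
        total_gib = fields["MemTotal"] / 2**20
        print(f"{cached_gib:.1f} GiB of {total_gib:.1f} GiB RAM is file cache")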

    Beast may not be very busy but it easily processes much more than 20gB of data per day.

  116. Wizard Emeritus says:

    “I plan to have a ton of RAM. Caching will work well.”

    If you plan on caching, then don’t plan on virtualizing very much. Even running a KVM environment needs both lots of RAM as well as the horsepower of a beefy x86-64 CPU.
    Having an SSD for file caching is a better idea; 256GB can be had for cheap.

    http://www.amazon.com/SAMSUNG-2-5-Inch-Internal-MZ-7PC256D-AM/dp/B005T3GQ0G

  117. Deaf Spy wrote, “Virtualization and file caching don’t go well together.”

    I plan to have a ton of RAM. Caching will work well.

  118. Deaf Spy says:

    Cached files have very little seek time.

    Virtualization and file caching don’t go well together. If you want to cache data heavily, what you actually need is containers.

  119. Dr Loser says:

    As usual, Robert, you are about ten years behind the times when it comes to a concept like Moore’s Law. Let me assist in bringing you up-to-date.

    First of all, the Wall Street Journal, amongst many others, points out that it no longer applies. Interestingly, and as you would expect from the Wall Street Journal, they point out that a continued upward curve would be economically crippling.

    (So much for small cheap infinitely powerful things.)

    And secondly, Gordon Moore, who is still alive and did not, in fact, claim it as a law in the first place, had this to say about it recently:

    “I googled ‘Moore’s Law’ and I googled ‘Murphy’s Law’ and ‘Moore’ beats ‘Murphy’ by at least two to one,” he said in a January interview by Intel.

    I swear, Robert. Every day, every post, you’re sounding more and more like one of those guys who joins the Hale-Bopp cult just because some kind soul donated you a free pair of Nike Decades trainers, and you couldn’t bear to see them go to waste.

  120. kurkosdr wrote, “Oracle lost a court case.”

    Yes, I noticed that, but Oracle could fight on for another decade, like SCOG v World. It isn’t over until one side can no longer find the money to pay lawyers, or until a judge and several levels of appeals rule it so.

  121. Dr Loser says:

    Moore’s Law was essentially about complexity/nodes of integrated circuits.

    It was never a law. And no, it wasn’t.

    The law has been found to apply loosely to much related IT.

    It was never a law. And I assume you have some quantifiable equation for the phrase “loosely.” And I assume you know, as well as the rest of us know, that “much related IT” is a completely undisprovable assertion, and therefore completely worthless in scientific terms.

    Consider the size of PCs.

    Consider the ant, Robert. It’s a far more useful source of inspiration than your present state of senile gibbering.

    “Consider the size of PCs,” indeed.

  122. Dr Loser says:

    This is not about logical correlation but analogy.

    Go on, I’m fascinated, Robert. Do tell. What do you see as the difference between a logical correlation and an analogy?

  123. kurkosdr says:

    BREAKING NEWS:

    http://arstechnica.co.uk/tech-policy/2016/05/google-wins-against-oracle-android-is-fair-use/

    Oracle lost a court case. Which means the rest of the world probably won. Everybody in this blog, gather for a virtual group hug.

  124. Dr Loser wrote, “Are you ever going to be able to master the art of a simple logical correlation, Dougie?”

    Dr Loser had his humour-centre irradiated and needs a transplant. This is not about logical correlation but analogy. Moore’s Law was essentially about complexity/nodes of integrated circuits. The law has been found to apply loosely to much related IT. Consider the size of PCs. I remember when a unit shipped with a weight of about 65 pounds, and now we have smartphones competing against chocolate bars and coming in under weight. I remember TLW buying a cell-phone in the 1990s. We called it a “brick” and it weighed about as much. So we could have corollaries to Moore’s Law for many things: years versus weights, sizes, prices, MHz, core-MHz, etc. Lighten up. M$ introduced 1990ish technology in a 21st-century smartphone. What was that about? It certainly wasn’t innovation or competition. Their powder was all wet and they were several cycles behind Moore’s Law… ISTR they had no copy-and-paste…

  125. Dr Loser says:

    In phones, everyone is equal.

    Except for Google and Android, as compared to [insert failing company here] and Gnu/Linux, Robert. These two categories are not remotely equal.

    Sometimes I admire you for your bull-headed refusal to accept the obvious truth of the marketplace.

    But in this particular case, I don’t. Why? Because I asked you a question, based upon honest analysis (and I purposefully left M$ out of it, because we can all agree that M$ is irrelevant to this question).

    And what do I get?

    Pointless diversions and nothing but babble.

    Address the question, Robert. Why do you think that Android is so much more successful on mobile than any flavor at all of Gnu/Linux?

  126. Dr Loser says:

    Speaking about Moore’s Law, seems M$ phones are dead.

    Are you ever going to be able to master the art of a simple logical correlation, Dougie? Was your almost complete lack of education quite that damaging? Let me offer you an equivalent proposition:

    “Speaking about buttering parsnips, it turns out that the Ancient Egyptians brewed their own beer.”

    I don’t really expect you to understand this equivalency. But I would take it as a personal favor if you could desist from following on from each and every pointless comment you make with an equally pointless cite.

    Do that small thing, Dougie, and I will begin to despise you a little less.

  127. dougman wrote, “about Moore’s Law, seems M$ phones are dead.”

    Chuckle. That’s a new take on it… every 18 months M$’s share shrinks by half… 😉 Sadly, thousands will lose their jobs simply because M$ is run by idiots trying to monopolize everything instead of working for a living. It’s those who do work for a living, their employees, who suffer.
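    That 18-month quip is easy to formalize, for what it’s worth. A toy sketch, with illustrative numbers:

        # A quantity that halves every 18 months: Moore's Law in reverse.
        def halve(value, months, period=18.0):
            return value * 0.5 ** (months / period)

        print(f"{halve(90.0, 36):.1f}%")  # a 90% share after 3 years -> 22.5%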

    In phones, everyone is equal. M$ could easily crank out the world’s best Android/Linux smartphone if they put their $billions into it, but they just can’t see beyond the ends of their noses. They’d rather keep milking their enslaved customers and “partners” than produce a great product at a reasonable price.

  128. Dr Loser says:

    You do know that Moore’s Law (which isn’t a law, but simply an observation) hasn’t applied since about 2008, so far as fab-related improvements go? 28nm or whatever, we’ve finally come up against inviolable barriers of thermodynamics, not to mention quantum tunneling.
    Moore’s Law now applies simply because it’s possible to cram more and more cores on a chip. (You will, of course, be buying top-end i7-grade chips for your server.) And the way it applies is going to sting you in the wallet, Robert, because mostly it’s about parallelization and load balancing.
    Which implies running modern chips at something quite close to capacity, all the time.
    Which implies a frighteningly high electricity bill.
