Moving to ARM

The “news” yesterday that M$ is moving/porting to ARM was big but not for the reason that M$ made the move. They cannot get their bloatware to run on ARM so they are going to engineer an ARM CPU that will run the bloatware. It will be modular as all ARM CPU designs are but they will need extra cores and cache to run the bloat, negating many of the advantages of using ARM in the first place. There will be followers and partners who push for M$’s way of doing things but the end-user/consumer will still have the opportunity in the market to buy “normal” ARM. Also, it will take many months to produce an ARM CPU that likes M$ and it will take many months for M$ to port the bloat, DRM and M$-isms. Some are predicting 2012 as a release date. A lot will happen by then.

The news of M$’s interest in ARM will have a large impact on the market. No one will want to delay going to ARM for 2 years so the march of ARM+GNU/Linux or Android will continue and into x86 territory. Intel will be stuck. They could produce ARM CPUs but that would threaten their hair-drying line of products. The market for those will flatten or perhaps recede. Really, only servers and some number-crunchers actually need 64bit multiple core CPUs.

M$’s strategy seems defensive. It will freeze part of the market and create new opportunities in others but it is the surest sign that the monopoly is dying. M$ can no longer dictate that Wintel is the way to go. The market dictated to M$ otherwise.

M$ will compete on price/performance as best as it can without killing “the experience”. As long as they charge huge licensing fees and need heavy-duty ARM CPUs all they will be able to do is maintain a presence, not dominate. In the years that they cede leadership to ARM+GNU/Linux, they will lose opportunity forever. Intel and OEMs and retail now have a “green light” to really push GNU/Linux on ARM and x86/AMD64.

The only possible way M$ can hold onto monopoly is buying out ARM or making an exclusive deal with them to only produce bloated CPUs. That is not going to happen. ARM is looking at growth for years to come. Now is not the time to sell and in the future ARM will be too big for M$ to afford.

M$’s moves may make me seem like an oracle but I am not. I don’t know how to run a business to enslave the world but I do know how to make IT run smoothly and efficiently. Moving to ARM makes sense for everyone, not just M$. It will also make sense to use ARM CPUs as thin clients and servers as well as mobile devices and personal computers. 2010 was the year of ARM as I predicted. The moves announced yesterday were all put in motion in 2010. It was a good year for IT and 2011 promises to be even better with the whole world knowing monopoly is on its last legs.

UPDATE
The Register has a pretty good analysis of this move.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

20 Responses to Moving to ARM

  1. It is easy to prove that thin clients improve performance for many tasks. Suppose you click on an icon to start an application. With a GNU/Linux terminal server, someone on the system has most likely started that application before, so the application and all its libraries are likely to be cached in RAM. That takes starting OpenOffice.org from 7s on many systems to 2s or less on a GNU/Linux terminal server + thin client. Further, the typical thick client has a single SATA drive while a GNU/Linux terminal server can be economically equipped with RAID/SCSI/SSD. If a thick client gets its data from a file-server and the GNU/Linux terminal server is the file-server, there is no network lag on loading files.

    All kinds of organizations are using thin clients with no loss of functionality:

    http://www.allmediascotland.com/media_releases/22671/queen-margaret-university-wins-top-sustainability-award

    see http://www.redhat.com/f/pdf/rhel-cxo-whitepaper.pdf e.g. page 9, Acme Corp.: with 10,000 users, only 2,000 needed that other OS and the rest could use thin clients.
    “Segment the users within the organization by the extent of features they use on the Microsoft platform, role, and line of business. Just as customers are segmented and targeted with cost-effective and value-generating products and services, organizations need to segment their employees and deliver value to improve their internal productivity.

    • Conduct user segmentation analysis to determine the use of Microsoft products and the Windows platform.
    • Determine the segment of users with a business need for Windows 7 and Office 2010 (typically less than 10% of the organization).”

  2. William Tiberius Shatner says:

    Thin client makes sense even for one user on the basis of space taken near the user, noise and heat.

    So it’s about heat and space, rather than cost now? Have you ever seen a Mac Mini, or an iMac, or those all-in-one PCs, MiniITX PCs or other forms of compact PCs that are all the rage?

    I’m sure you can pull another non-argument out of nowhere in short order, though.

    The big guys like IBM that do a lot of designing of large systems find that about 80% of uses of a PC and the roles of the users of PCs can be satisfied by thin clients

    A) you’re going to have to cite that.
    B) Keep in mind that when the Big Guys talk about terminal servers, they’re talking about the big toys. I have absolutely no doubt that a couple of IBM zSeries (7 figures), HP Superdome or Sun-Fujitsu M9000 (6 figures) can power an entire small enterprise (keyword: enterprise) (barring of course the obvious network issues, and that throughput is capped by the interconnect, likely gigE in the case of thin clients, at which point you’re taking a sizable perf penalty in the event of throughput intensive workflows).
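    WTS’s point about the gigE cap can be put in rough numbers. A minimal sketch, assuming a nominal 1 Gbit/s link and an illustrative 70% usable fraction after protocol overhead (both figures are assumptions for illustration, not measurements from any deployment):

```python
# Back-of-the-envelope: how a single gigE interconnect divides among
# simultaneous thin-client users. Both numbers below are assumptions
# for illustration, not measurements.
LINK_MBIT = 1000      # nominal gigE
EFFICIENCY = 0.7      # assumed usable fraction after protocol overhead

def mbit_per_user(users):
    """Usable Mbit/s per user if all users pull data at once."""
    return LINK_MBIT * EFFICIENCY / users

for n in (10, 50, 200):
    print(n, "users:", mbit_per_user(n), "Mbit/s each")
```

    Fine for text terminals and X sessions at small scale, painful for the throughput-intensive workflows WTS describes.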

    Remember who guys like IBM are targeting their big toys at, remember what their flagship products are, and keep in mind what it is they ultimately want to sell people.

    Again, there’s a time and place for thin client solutions, and I don’t argue that they can’t be made to work, but there are several gotchas involved and the feasibility of such a solution depends largely on workload, deployment size and context. What I do dispute is the ridiculous notion that it’s a wholesale solution to every or even the majority of situations. That’s a crock and you know it.

    Major exceptions are departments where everyone is doing full-screen video

    Even non full screen video, content creation is very resource and throughput intensive and you take a huge performance penalty when being capped by a gigE interconnect – not to mention the realtime requirements of video editing cannot be met by such a solution.

    The same holds true for audio recording and processing.

    The same even holds true on professional 2d design – the hardware and IO requirements are _massive_.

    Gaming is another area.

    Software development is another depending on your toolset, Eclipse for example is barely responsive on a beefy local machine, I’d hate to imagine what running a half dozen instances does.

    And then there are the network considerations for voip, video conferencing, other forms of network-based realtime communications, all sorts of things using up network bandwidth (network printers, VCSes, file transfer, database replication, invoicing, etc, etc) which are essential to a business setting functioning.

    But yes, if we operate under the assumption that people don’t actually do anything with their computers, then yeah, it’s a catch-all solution in all cases.

    PEBKAC is a well-known phenomenon. With thin clients that problem can be better controlled because system administrators have complete control of the system

    Again, locking down clients is trivial and requires minimal effort with Directory Services (like LDAP) and Group Policies.

    e.g. no peanut-butter sandwiches in the CD tray, no jackets hung over the box, no pulling the power-plug while commits are in progress,…

    Of course users can still muck up the thin clients, but they’re cheap to replace, so who cares, right?

    All over the building people took off their jackets to protect the thin clients. People only do that for love.

    That’s kinda creepy to be honest.

    Clearly hundreds of people I have known disagree with many of the parent comments.

    Because, had there been thick clients instead of thin clients, they would have left them to soak? Because clearly these people did so because they recognize the superiority of thin clients? There’s zero correlation between their actions and your purported reasoning for it. You’re getting flustered, Pog.

    People do get better performance for many purposes from thin clients.

    You’re going to have to cite that too.
    This is the first I’ve heard of “better performance” being the rationale for thin clients, and I’ve been in the industry long enough to remember terminal clients and timesharing.

    “adequate”, “similar”, or “good enough” performance would have been acceptable.

  3. Thin client makes sense even for one user on the basis of space taken near the user, noise and heat.

    The big guys like IBM that do a lot of designing of large systems find that about 80% of uses of a PC and the roles of the users of PCs can be satisfied by thin clients. Major exceptions are departments where everyone is doing full-screen video or where data is so sensitive it must be kept off the network in any form.

    PEBKAC is a well-known phenomenon. With thin clients that problem can be better controlled because system administrators have complete control of the system. e.g. no peanut-butter sandwiches in the CD tray, no jackets hung over the box, no pulling the power-plug while commits are in progress,…

    BTW, in the large school where I set up thin clients with GNU/Linux throughout, there was a failure of the fire/sprinkler system. All over the building people took off their jackets to protect the thin clients. People only do that for love. Clearly hundreds of people I have known disagree with many of the parent comments. People do get better performance for many purposes from thin clients. I have seen poorly designed thin client systems but done right they work very well and cost much less.

  4. William Tiberius Shatner says:

    Single point of failure – Consider one user with a thin client and a terminal server. The terminal server is less likely to go down if it is a heavy duty server. The thin client is less likely to go down because there is less to go wrong than a thick client

    With a single user it’s a single point of failure either way. I’ll agree that in such a scenario, the argument of a SPOF is moot. However, so too are the arguments for power and cost savings; setting up thin clients is pointless in such a situation. The server will cost more than the workstation it’s replacing.

    The overall probability of going down is less with the thin client approach. With “ordinary folk” tweaking their machines running that other OS, the likelihood of going down is much greater than with “ordinary folk” running on thin clients.

    My biggest problem with this statement is that no metrics are provided to support the assessment.

    I’d venture that most “ordinary folk” running that “other os” aren’t in the habit of tweaking it, and since we’re talking about a situation in which thin clients are a solution, we’re not talking about ordinary folk to begin with, in which case locking down a network of thick clients and pushing group policies is trivial.

    I just don’t really see the benefit thin clients provide as compared to the gotchas they present. My argument isn’t against remoting applications, I do that all the time (render farms, remoting ZFS for revision control, etc.), and I’m also aware that that’s a specific “zone” where thin clients can be feasible: go lower and it’s pointless (servers cost more than the workstations they replace), go higher and it’s pointless (too much strain on the network), and there are workflows for which it is absurd (content creation).

    I don’t dispute that people who want it to work badly enough, given the appropriate workload, can make it kinda-sorta work, but I’m not sold on thin clients being a catch-all wholesale solution. The more intensive the workload and the bigger the deployment, the less feasible it is – thin client solutions don’t scale up well (note, this isn’t to say that they can’t be made to scale, but doing so comes at the expense of more complexity and of their original selling point – is beefing up the appservers going to cost you more than thick clients? I can think of a number of cases where it does).

    You have fewer points of failure with thin clients.

    To be more to the point, you have a single point of failure with thin clients, and that’s a very bad thing. But again, the metrics supporting the assessment are flawed.

    Not to mention that your numbers are largely meaningless since you’re not providing any information of what kind of workload they’re servicing.

    The point is I don’t pretend my use cases are general use cases (I run a studio; we push our quads with Quadros and 8 GB of memory to their upper limits on a daily basis – imagine the hardware required to terminal that up).

    If a thin client setup works for you, then that’s wonderful. But don’t act like it’s a common use case, or like it’s a catch all solution in any use case.

  5. Most of us work for SMB and are not in large deployments.

    Single point of failure – Consider one user with a thin client and a terminal server. The terminal server is less likely to go down if it is a heavy duty server. The thin client is less likely to go down because there is less to go wrong than with a thick client. The overall probability of going down is less with the thin client approach. With “ordinary folk” tweaking their machines running that other OS, the likelihood of going down is much greater than with “ordinary folk” running on thin clients. You have fewer points of failure with thin clients. I had 96 thin clients and 13 thick clients running at a school I set up. We had one hard drive failure in three years. We would have had many more running thick clients. We had multiple terminal servers, shifted the load to the servers that were still up and kept going. Where I last worked we had 40 machines when I arrived and half the thick clients were not working because that other OS had failed. Compare oranges.
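    The “we would have had many more” claim can be sketched with illustrative numbers. The ~3%/year drive failure rate below is an assumption for the sake of the arithmetic, not data from the school:

```python
# Illustrative only: expected drive failures over three years for the
# school's 96 clients, assuming a ~3%/year HDD failure rate (an assumed
# figure, not data from the school above).
clients = 96
annual_fail_rate = 0.03   # assumed annualized HDD failure rate
years = 3

expected_thick = clients * annual_fail_rate * years  # every thick client has a drive
expected_thin = 0                                    # diskless thin clients have no drive to fail
print(round(expected_thick, 1), "vs", expected_thin)
```

    Roughly eight or nine expected drive failures versus none, under that assumed rate, which is at least consistent with the one-failure-in-three-years anecdote.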

  6. William Tiberius Shatner says:

    ARM is 32bit. For home use that is sufficient address space and throughput for browsing/word-processing/playback.

    I was referring to multiple cores. Re-refer to your original statement.

    minor problems in many cases. The single point of failure is irrelevant in small operations because the server is much more reliable equipment than a normal PC:

    Note that the qualifier “sizable” was used.
    Second, that the server is more reliable is a moot point that does nothing to address the issue of a single point of failure, and the comparison to a PC is a non-sequitur at best, as the issue of a single point of failure does not apply to “thick” clients running local applications.

    Single point of failure is to be avoided, and it certainly is not a “minor problem”, the possibility of cascade failure is terrifying from a business perspective.

    and in large operations a cluster will normally be used and the cluster can be made with automatic fail-over.

    This is where another problem lies, the point of moving to thin clients was supposed to be to cut costs vis a vis standard desktop machines, now we’re talking about replacing the desktops with thin clients AND a cluster of beefier servers.

    This potentially nullifies both the advantage of lower power consumption and the advantage of cost, but also introduces new problems, while worsening the ones it was supposed to fix! These factors need to be taken into account!

    You can also place different applications on different servers so at most one application would have an outage.

    On a small deployment, sure, but again the “sizable” qualifier was used. Now you need redundancy for each app server, increasing costs and complexity, while further marginalizing what was supposed to be gained by the endeavour to begin with; on a large deployment you’re talking about clustering clusters. Further, you’re still ignoring the gravity of having a single point of failure – sure, at most one application is down, but it’s down for _everyone_, meaning nobody is able to work on the task that application was for.

    My experience has been that with X, say, the average bandwidth rises but is far below the peak bandwidth of thick clients sucking files from a file-server which many folks use for convenient backup/economy.

    How large is your deployment, and what is the nature of the workflow? Also, you’re surely not suggesting the fileserver and app server are one and the same, and therefore that people are working directly on the fileserver, negating any advantage regarding backups conferred by the central fileserver?

    In my studio, local copies of files are kept locally until they’re checked into the VCS/fileserver, this doesn’t make the workflow depend on network connectivity, for one, and does not exist within a vacuum. Other factors play into eating up bandwidth – network printers, VOIP, video conferencing, invoicing, etc.

    The “beefy” servers I build cost about $30 per simultaneous user, far less than “the tax”, and over five years, comes to $6 per year.

    This says nothing to me without deployment size, and what kind of servers are in use. I could easily say one of my file servers costs $12 (before factoring amortisation) per user (and omit certain details such as it being commodity hardware servicing 4 users at home, the studio is a different matter – we’re talking multimedia authoring there).

    Where thin clients really pay is in maintenance.

    It’s funny, I usually make that argument about directory services and group policies.

    Though judging from the single GigE interface, we’re clearly not talking large deployments here, and likely trivial workflows.

  7. ARM is 32bit. For home use that is sufficient address space and throughput for browsing/word-processing/playback.

    WTS wrote:
    “The trouble with thin clients isn’t as obvious as it might seem:

    For starters, they introduce a single point of failure, your central application server goes down, and all your thin clients go down with it.

    Second, on sizable networks, you need much beefier machines to act the role of the central appservers than you would with thick clients, many central app servers to distribute load, and yet more for failover. Has this been weighed in against the savings on client PCs?

    Third, network bandwidth. Your daily client side operations now depend on your internal network not being saturated. Network spikes now affect productivity on client applications. On a sizable network this becomes a problem. Not to mention your internal network is already under considerably more load owing to all the thin clients.”

    These are minor problems in many cases. The single point of failure is irrelevant in small operations because the server is much more reliable equipment than a normal PC: redundant PSU, aggressive fans, monitoring, backup etc., and in large operations a cluster will normally be used and the cluster can be made with automatic fail-over. You can also place different applications on different servers so at most one application would have an outage. My experience has been that with X, say, the average bandwidth rises but is far below the peak bandwidth of thick clients sucking files from a file-server which many folks use for convenient backup/economy. The “beefy” servers I build cost about $30 per simultaneous user, far less than “the tax”, and over five years, comes to $6 per year. Where thin clients really pay is in maintenance. There are folks in this world that budget nothing to migrate to thin clients; I needed to save time and converted existing equipment to do the job. An ordinary file-server did well enough as a terminal server because it had a gigabit/s NIC, just enough RAM and 4 SCSI drives. Payback was infinite and instant.
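    The $30-over-five-years arithmetic above, spelled out (the figures are the ones in the comment; the script just amortizes them):

```python
# Amortizing the server cost figure from the comment above:
# $30 of server capacity per simultaneous user, over five years.
cost_per_user = 30     # dollars per simultaneous user (figure from the post)
service_years = 5

per_year = cost_per_user / service_years
print(per_year)        # 6.0 dollars per user per year
```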

  8. William Tiberius Shatner says:

    The trouble with thin clients isn’t as obvious as it might seem:

    For starters, they introduce a single point of failure, your central application server goes down, and all your thin clients go down with it.

    Second, on sizable networks, you need much beefier machines to act the role of the central appservers than you would with thick clients, many central app servers to distribute load, and yet more for failover. Has this been weighed in against the savings on client PCs?

    Third, network bandwidth. Your daily client side operations now depend on your internal network not being saturated. Network spikes now affect productivity on client applications. On a sizable network this becomes a problem. Not to mention your internal network is already under considerably more load owing to all the thin clients.

    An extension to the previous problems, cascade failure. Something goes awry on one of your central app servers, and all the thin clients it services are affected, this is not the case with “thick clients” (quote because I hate retroactive terminology). One client goes wonky, the others are not affected.

    I’d like to wait a few years regarding the example you’ve provided, to see how the solution amortises over time, how the above gotchas play into costs, etc. pardon if I’m more interested in the long term than the short, and pardon if I factor business things into the equation. These variables need to be taken into account, and few people pushing thin client solutions seem to ever even consider them.

    Consider there’s a reason we moved away from terminal clients and timesharing 30 years ago.

    On the plus side, one of the problems with ARM itself has been addressed with a full-fledged Windows being ported over to it – the question of migration is moot: change infrastructure, but stick with Windows and you can at least perform a somewhat lateral move. (See the Munchen disaster, notably how long it took to get employees over from Office on Windows to OOo on Windows.)

    8 cores 500 mW. Why shouldn’t this thing be in mainstream PCs?

    Didn’t you say “Really, only servers and some number-crunchers actually need 64bit multiple core CPUs”?

    That’s a trick question, you did. You might not keep track of your arguments, but others do. Pick an angle, and stick with it, thank you.

    The Tegra is neat, though my reading on it suggests it excels at media playback, with no mention of content creation, media encoding or gaming, which are different beasts altogether. Tegra would make a sweet media centre; however, it remains to be seen if ARM can be made to scale up enough to be useful on workstations and desktops, while retaining its power efficiency and delivering equivalent performance. ARM was after all designed for embedded systems. It has to be able to do AT LEAST the same for less to succeed in these markets.

    I always get a chuckle when I see the “old is new” syndrome at work, systems on a chip in this case. It’s been a staple of the enterprise sector for decades, and was standard in the day of RISC-based home computers, until x86 was made beefy enough to kill them all off.

  9. see today’s post, June 2011. nVidia’s Tegra 2 is expected to be widely used in IT. A YouTube video demo shows it singing and dancing. 8 cores, 500 mW. Why shouldn’t this thing be in mainstream PCs? It’s reached the point of diminishing returns on power because the display probably uses most of the power in the device, but they certainly make things smaller and more portable.

    Like it or not, thin clients work for a lot of people and ARM is great for thin clients.

    For instance, IGEL saw 129% growth in UK in 2010 for their thin clients and 457% growth for PC-to-thin-client converters (eating up all those XP machines that are not ready to be scrapped). Think of it this way: if a thin client lasts three times as long as a thick client (less heat and fewer moving parts), that 129% growth should be considered 387% growth for its effect on PCs. This is not a local-to-UK phenomenon. Thin clients make sense in a lot of situations.
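    The 129% → 387% figure is just the growth rate scaled by the assumed 3× lifespan; nothing more than this:

```python
# The lifespan-adjusted growth claim from the comment above: 129% unit
# growth, with the assumption that a thin client lasts 3x as long as a
# thick client (an assumption of the argument, not a measured lifespan).
unit_growth_pct = 129
lifespan_ratio = 3     # assumed lifespan multiple

effective_pct = unit_growth_pct * lifespan_ratio
print(effective_pct)   # 387
```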
    “Redwood City, 9 November 2010: – NComputing, a global leader in desktop virtualization, today announced a major milestone and endorsement of its strategy to transform legacy PC economics and infrastructure. The company has been ranked #1 in enterprise client device shipments in Asia Pacific in a newly published report by IDC.”

    Read http://www.ncomputing.com/node/3154 and weep.

  10. William Tiberius Shatner says:

    negating many of the advantages of using ARM in the first place.

    The main advantage of ARM is low watts per FLOP usage (at the expense of raw throughput, of course). This is not negated by bigger and beefier ARM CPUs.

    They could produce ARM CPUs but that would threaten their hair-drying line of products. The market for those will flatten or perhaps recede.

    Why ARM? And why would pitching a non-x86 arch into a different market hurt their bottom line? It wouldn’t be the first time they do it (see Itanium, made to compete with Sparc and Power in areas they don’t push x86 in). Also, you’d think after buying up Wind River they’d be all over this whole embedded thing.

    Really, only servers and some number-crunchers actually need 64bit multiple core CPUs.

    Or anyone working with multimedia, gamers, or really anyone doing anything beyond web browsing, these days. People run more than one application at a time, and newer applications are more and more resource-hungry, as memory is cheap and plentiful. 3 gigs isn’t as much as it used to be.

    M$’s strategy seems defensive

    Not really, they’re just taking their time to do it right. No sense jumping the gun and releasing something half-assed, half-working and on underpowered hardware that probably won’t provide the UX customers expect.

    No one will want to delay going to ARM for 2 years so the march of ARM+GNU/Linux or Android will continue and into x86 territory.

    This only holds true if all things are equal between Windows and Linux. On the UA end, this isn’t true. If it were, Linux would have taken the x86 market already.

    It will freeze part of the market and create new opportunities in others but it is the surest sign that the monopoly is dying. M$ can no longer dictate that Wintel is the way to go. The market dictated to M$ otherwise.

    Please. Locking into a single architecture was never the MO. Microsoft got to where they are because of NT being designed from the beginning to be easily portable across architectures. Keep in mind that it used to run on Alpha, MIPS and PowerPC as well as x86. x86 won out because it was cheap and good enough, and so they settled on that. If ARM is the new x86, it’s a safe bet that they’re on it. MS isn’t stupid, you know.

    M$ will compete on price/performance as best as it can without killing “the experience”.

    Please. Welcome to business 101. You only need to compete on price in a situation where all other things are equal. The familiarity, ubiquitousness and software library of Windows means they don’t have to compete on price. Again, if such were the case, Linux would have killed both Microsoft and Apple already.

    As long as they charge huge licensing fees and need heavy-duty ARM CPUs all they will be able to do is maintain a presence, not dominate.

    Wishful thinking at best. Low power only makes sense in the embedded and mobile spaces. Get it through your head, ARM’s selling point is lower power consumption per FLOP, not low power output.

    Intel and OEMs and retail now have a “green light” to really push GNU/Linux on ARM and x86/AMD64.

    Intel has a green light to do whatever they please already.

    The only possible way M$ can hold onto monopoly is buying out ARM or making an exclusive deal with them to only produce bloated CPUs.

    What exactly is a bloated CPU? Again, it’s about low power use per FLOP, not low power output. And Microsoft getting into the hardware market? Really?

    M$’s moves may make me seem like an oracle but I am not.

    One man’s oracle is another’s crackpot, I suppose.

    I don’t know how to run a business

    That much is quite clear.

    Moving to ARM makes sense for everyone, not just M$.

    It really depends on the application. There’s a limit to how high the ARM architecture can scale, by design. It makes no sense for an application that requires high throughput and vertical scalability to switch to ARM.

    It will also make sense to use ARM CPUs as thin clients

    Why won’t you people let thin clients die already? It’s a solution waiting for a problem that doesn’t exist.

    and servers as well as mobile devices and personal computers.

    More baseless wishful thinking. It’s an embedded special-purpose architecture. ARM desktops don’t make sense unless you have “evil, bloated” ARM, and it makes no sense on the server for the midrange and top tiers where throughput and vertical scalability reign supreme. ARM on the server does have the potential to eat x86’s lunch on the low-end tier, mind you.

  11. oe, I think the Year of GNU/Linux was 2009. I don’t claim that because GNU/Linux took over all of IT or anything like that. I claim that because 2008 was the last year anyone could claim GNU/Linux was for geeks only. Many OEMs sold units installed with GNU/Linux and many millions of people adopted GNU/Linux that year. Anyone, geek or not, could buy a GNU/Linux thingy in 2009 almost anywhere in the world. GNU/Linux is still not prominent on retail shelves in USA except on smart-phones but the rest of the world has been using GNU/Linux pretty widely. ARM is doing an end-around play against that other OS so M$ is now getting on the bandwagon. The advances in popularity that GNU/Linux made in the face of the failure of Vista are trivial compared to the advances made in 2010 in smart-thingies, so I call 2010 the Year of ARM. Probably more people got Linux running on their device in 2010 than in most of Linux’s history. That will likely happen again in 2011. Whatever you want to call it, the current explosion in popularity of Linux took off about 2009. The dam of retail space cracked in 2009 and a trickle came through. 2010 saw quite a stream and the whole thing will fall in 2011. Perhaps instead of a year we should call 2009-2011 a triple or something.

    Anyone who doubts the above has some responsibility to state a reason why this will not happen. I don’t take “M$ will win” etc. on faith. Give me a reason why, by the end of 2011, an ordinary consumer will not be able to buy some form of GNU/Linux personal computer at retail. Give me a reason why businesses, which are switching to web/cloud applications, could not choose GNU/Linux on something as the lowest-cost option in 2011. Intel and some OEMs still cling to x86/amd64 but ARM is expanding into a vacuum of low-power computing and taking GNU/Linux with it. Android is just the beginning. Normal GNU/Linux works better than Android on these things. People like better for less.

    In the early years of the PC when a few OEMs were producing a few million PCs, M$ was able to get everyone that mattered to make an exclusive deal to exclude competition. That will not happen this time around. M$ does not have enough money to buy out everyone. We should see 400 million new personal computers this year and a similar number of smart-thingies. The world can make more money selling a smart-thingy without “the tax”. They will. M$ has nothing to say about it except “We will have vapourware for sale in 2012.” Chuckle. That will be too late.

  12. oe says:

    There hasn’t been a year of the Linux desktop and there never will be.

    That being said, it’s a quiet sea change going on… I have noticed more and more people swapping LiveCDs around, using them on that older machine they were going to throw out anyway, and its name recognition is definitely on the up-tick. The only reason you don’t see it in mainstream media or stores is that no one is pushing it as a Madison Avenue product; it relies solely on pure word of mouth, and the critical mass is growing. It’s a lot rarer to stumble into folks who have never heard of it, much less tried it, in recent years.

  13. Digital Atheist says:

    Let’s just cut to the chase. You are mad because Microsoft (or M$ as you so cutely put it) has moved into territory you thought Linux (or Linsux as I prefer) had tied down and was forever verboten for Microsoft (M$). Because MS is working on making Windows 8 run on the ARM architecture, you are now dismayed because you know that–given past history–people will willingly choose the Windows product over the Linux (Linsux) option.

    You can toss in any comments you like about Android being on numerous devices and say that Linux is on everything, but we both know that when a flaw is found in Android, it all of a sudden is no longer Linux, it’s Android (this is called the “Android is Linux, except when it isn’t” phenomenon).

    Long story short, by showing off Windows running on various ARM devices, Microsoft (M$) just poured cold water on the burning fuse of Linux (Linsux) ARMageddon.

    Tech evangelizers have been promoting the “Year of Desktop Linux” for years now (actually nearly 2 decades). Finally, many of your brethren gave up on that and started the hue and cry about Microsoft (M$) being slaughtered by the coming ARMageddon (before that it was going to be netbooks, until a decade-old version of Windows swept Linux (Linsux) out of the netbook market). History shows that you Linux (Linsux) evangelizers have been wrong about the “Year of Desktop Linux” every year, your constant 5-year predictions about how Linux is going to wipe Microsoft out have been 100% bust, and now we are supposed to trust that you know what you are talking about with ARMed to Win?

  14. Did Vista work? Not right away. Did “7” work on ARM? Nope. How about smart-thingies of any kind? Hundreds of thousands of apps already ported to ARM vs vapourware. M$ is not going to get its partners to invest heavily in porting to ARM until M$ can demonstrate that there is a role for that other OS on ARM. In the meantime, Android/Linux and other GNU/Linux is getting into the hands of hundreds of millions of users. Is M$ going to pay hundreds of millions of people to switch if M$ ever gets its act together? Nope. What would the shareholders say about scattering $billions to the wind?

    M$ could port Dalvik to that other OS on ARM but that would hardly be that other OS any longer, would it?

  15. Digital Atheist says:

    You are jokin’ on this… right? Or is it just the fact that you can’t accept that Microsoft just poured cold water on Linux’s burning fuse of ARMageddon? Either way, believe it or not, Windows seems to be heading to ARM. Bleating about how they can’t possibly make it work isn’t going to stop the fact that it will work, and most likely offer a lot more software choices than some wonky Linux version. 🙂

  16. John Cockroft says:

    Oldman:
    x86 is a *horrible* architecture and only became the industry standard because Intel beat Motorola in producing a cheap 8 bit bus version of the 8086 vs a cheap 8 bit bus version of the 68000. The 68000 has a clean and simple RISC-like bank of 32 bit registers whereas the 8086 (and all x86 chips afterwards) has a kludged segmented memory architecture (admittedly people generally ignore this these days with 32 bit and now 64 bit registers). Nobody would design a CPU like the x86 nowadays!

    This dreadful instruction set has to be faithfully copied onto each successive x86 generation, even though there is a school of thought in favour of getting rid of 16 bit mode and possibly even 32 bit mode given the move to 64 bit operating systems. If that were done then the chip could be stripped down and would run somewhat cooler, but it would still be handicapped by the instruction set.

    Contrast this with the simple and clean 32 bit RISC (or at least semi-RISC) instruction set of the ARM chips. No wonder even the very latest ARM cores take up less than 10% of the transistors of an equivalent x86 core. This means you are looking at (perhaps) about 40 million transistors for the most powerful quad core ARM chip to date vs about 750 million transistors for the current quad core Core i7 chip. Even with the very best that Intel can do (in terms of power-saving technology), it cannot compete with this. Having said all that, a 2GHz ARM core is not as powerful as a 2GHz Core i7 core – perhaps a 2GHz Atom chip would be closer in performance.

    The answer (of course) is to double the number of ARM cores (say) to 8, which would still use substantially less power than a 4 core Core i7 chip. If you were to make a laptop based on this technology (say, coupled with NVidia’s Tegra chipset) then you would have a machine which had hours of battery life but still ran faster than many x86 based desktops!

    The problem (at the moment) is that the 32 bit address space is becoming a restriction. Core i7 is 64 bit (as are AMD’s Hammer architecture and Bulldozer chips – under development) whereas ARM is still only 32 bit. I’m sure that the ARM chip could be updated to 64 bit, and if they do so they should NOT try to run 32 bit code as well. That is the x86 way – the better approach is to recompile applications for the new chips, but that does not work well with proprietary operating systems like Windows and OS X (in which people pay for binary code and then expect to run the same code on future operating systems).
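    That restriction is plain arithmetic. A minimal sketch of the address-space sizes involved (just powers of two; the 40 bit figure is the usual PAE-style physical width, assumed here purely for illustration):

    ```python
    # Address-space arithmetic behind the 32 bit restriction.
    # Nothing chip-specific here; these are plain powers of two.

    def addressable_bytes(bits):
        """Bytes reachable with an address of the given width."""
        return 2 ** bits

    GIB = 2 ** 30  # one gibibyte

    print(addressable_bytes(32) // GIB)  # 4    -> the classic 4 GiB ceiling of 32 bit
    print(addressable_bytes(40) // GIB)  # 1024 -> 1 TiB with 40 bit (PAE-style) physical addresses
    print(addressable_bytes(64) // GIB)  # 2**34 GiB, i.e. 16 EiB with a full 64 bit space
    ```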

    Time to move to Linux/Android I think.

  17. oldman says:

    “ARM uses about 1/4 as much power as x86 at the same number of cores, clock-speed and resolution. There is no way Intel can reduce that hardware bloat except by using ARM technology or inventing their own.”

    I believe they did

    http://www.digitaltrends.com/computing/intel-debuts-core-2011-line-and-a-movie-service/

    My understanding is that the low-power Ivy Bridge-class processors to come are supposed to draw less power than the Atom and be within a few watts of ARM at idle, yet offer approximately 15x the performance of ARM under load.

    So much for Intel using ARM technology.

  18. Bloated software did not cause Intel to make x86 microcoded. Microcoding was the easy way to add “features” to the chip. Programmers like having lots of instructions so that one line of assembler coding does more. Intel copied and expanded on the complexity of the chips for decades and we have what we have. ARM took a different path and did not microcode, keeping its pipeline short and simple so that most instructions took effectively one clock-tick. x86 needed a bunch of clock-ticks to do anything back in the 8086 days and Intel carried that forward. Each level of pipeline and every extra bit flipped on each clock cycle wastes energy, so ARM will be much more efficient. ARM can handle any level of complexity by using multiple cores, for instance, graphics for one, I/O for another and the rest for processing. ARM is tight and modular. Not much silicon or power is wasted.

    ARM uses about 1/4 as much power as x86 at the same number of cores, clock-speed and resolution. There is no way Intel can reduce that hardware bloat except by using ARM technology or inventing their own.

    I have used XFCE4, GNOME etc. on GNU/Linux so I know the effects of bloat. These smart-thingies also are handicapped by interpreting byte-code. They need the most efficient processors they can get.

    Clock-speed is not a good answer to the power consumption of CPUs. The heat wasted varies in part with the square of the clock-speed, so you are better off having more cores. With present levels of bloat, 2-4 cores seems about right. They will use less power with ARM.
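    That trade-off can be sketched with the textbook dynamic-power model, P ≈ C·V²·f per core. The capacitance and voltage figures below are illustrative assumptions, not measured chip data:

    ```python
    # Textbook dynamic-power model: P = C * V^2 * f per core.
    # All constants are illustrative assumptions, not measured chip data.

    def dynamic_power(cap_farads, volts, freq_hz):
        """Dynamic switching power of one core, in watts."""
        return cap_farads * volts ** 2 * freq_hz

    C = 1e-9         # assumed effective switched capacitance per core
    V_PER_GHZ = 0.5  # assume supply voltage scales roughly linearly with clock

    def package_power(cores, freq_ghz):
        """Total dynamic power of `cores` identical cores at `freq_ghz`."""
        volts = V_PER_GHZ * freq_ghz
        return cores * dynamic_power(C, volts, freq_ghz * 1e9)

    one_fast = package_power(cores=1, freq_ghz=2.0)  # one 2GHz core: 2.0 W
    two_slow = package_power(cores=2, freq_ghz=1.0)  # two 1GHz cores: 0.5 W

    # With V scaling with f, per-core power grows as f^3, so splitting the
    # same total clock budget across two slower cores uses a quarter the power.
    print(one_fast, two_slow)
    ```

    Real chips complicate this with leakage current and voltage floors, but the at-least-square-law dependence on clock is why more, slower cores win on power.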

  19. oldman says:

    Pog:

    It seems to me that whether you like it or not, if the ARM processor is going to succeed outside of its current niche market, it is going to have to grow in complexity in order to compete. This process was already underway even before Microsoft’s announcement. In the mobile device space, the demands of supporting Android and iOS have pushed clock speeds of ARM designs from the 500-800MHz range to over 2GHz, pushed single core designs into multi-core designs, and also resulted in virtualization hooks and PAE-like memory paging being designed into ARM to address the addressing limitations of its 32 bit design.

    I would suggest, Pog, that the spiral of increasing software functionality driving increasing hardware performance (i.e. Moore’s law) is showing no signs of abating and will catch up with ARM as well. And I would submit that if you want ARM based devices to break out of their current niche into the general purpose computer market, you are going to have to learn to tolerate the “bloat”.

  20. Ray says:

    Or they could stuff Windows CE in ARM.
