Dave’s Top

top is a standard utility on GNU/Linux that shows how busy a system is and which processes and resources are in use. I was reading an article on Dave Richards – City of Largo Work Blog and found a gem. It shows a terminal server being hammered by an inconsiderate task.
“The shot below was taken from our current GNOME server and you can see that wfica (the citrix client) chews CPU as the canvas is repainted. The server is certainly not taxed and we could get by running it in this manner;”

While it is wonderful that Dave Richards can tune his servers, the big point for me is that a GNU/Linux terminal server can be hammered and still be quite usable. By backing off the load just a bit, one gets far superior performance for a lot of tasks. For the record, his “top” shows 9K+ tasks running on that terminal server. Users of that other OS get bogged down regularly on a normal PC with 50-100 tasks running. Dave has 883 users in his top (really only 250 live people, but multiple sessions exist). That server has 64gB of RAM and costs a pretty penny, but it is far more efficient with only 256MB of RAM per person and gives better service because commonly used files are cached (top shows 3.9gB).
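
For anyone who wants the same kind of snapshot of their own system, top will run non-interactively in batch mode (standard top behaviour, though the exact header layout varies a little between versions):

    # one non-interactive snapshot of load average, task count, memory and the busiest processes
    top -b -n 1 | head -20
    # or restrict the listing to a single user's processes ("someuser" is a placeholder)
    top -b -n 1 -u someuser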

Looking at the system as a whole, this means Largo is spending much less money on hardware and software simply because of the flexibility of GNU/Linux. One can use that other OS for similar purposes but, if it cannot make more than a few people happy at once on a PC, what’s the point? M$ spent years discouraging people from using thin clients and GNU/Linux calling them “dumb terminals” and “cancer”, but now everyone’s doing it because it’s a better way to do IT.

In case anyone is wondering where the applications are, they are on other terminal servers. That one is just for the sessions, so it only runs GNOME and a few utilities including the Citrix ICA client for connecting to that other OS. Largo has a terminal server for OpenOffice.org and another for Firefox, in case you are wondering.

His hardware is pretty wild:
“Dave Richards said

OpenOffice
Migration to 64bit Linux is complete and things are working pretty well. Even with 100 users, OOo opens in about 2-4 seconds, nice

We actually consume far less tax dollars than other cities. With basically no desktop costs, all we have to buy are servers; and we are only doing so every 5 years now.

server is a HP ProLiant DL785 G6 8439SE 2.8GHz Six Core 4P 64GB ICE Rack Server (AM438A). We will be running it for 5 years, and got it for a lower price than list price on the web. I would expect to have around 200 concurrent Firefoxs open in the coming months.”

Web prices start around $31K + shipping. Compare that with the price of 250 PCs… It’s less than half the price, and noise (at the desk), power consumption, shipping, installation and maintenance are tiny in comparison.

GNU/Linux. It’s the right way to do IT. I recommend Debian GNU/Linux if you want to get the most out of your spending on IT.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

72 Responses to Dave’s Top

  1. kozmcrae wrote, “Trying to keep Microsoft’s Windows secure is a lost cause.”

    Amen. At my last job, I installed the most anal-retentive anti-malware system on Earth. It greatly reduced malware but complaints about everything else sky-rocketed. Systems were barely usable and upgrading was a pain. I had to re-image just to update any app because the software wanted to know that application X with checksum Y was OK. Imagine 100 PCs needing that with every upgrade of Firefox or that other OS. It was a nightmarish problem. I switched to GNU/Linux and everyone was laughing and smiling about how responsive the 8-year-old PCs were, even compared to the new systems with XP or “7”.

  2. aardvark says:

    Mr Pogson:

    “Enough already Kozmcrae and oldman and the others…

    Can’t we just discuss the technology? No one is interested in your ad hominem attacks. Go beat your heads on a wall privately if you want pain. Don’t do it here.”

    That’s kind of what I was (failing) to say, actually.

    All ad hominem attacks should be off-limits.

  3. kozmcrae says:

    It’s not the malware we know about that’s the problem, it’s the malware we don’t yet know about that’s doing the real damage.

    I saw an interview with a government paid Chinese hacker. He said for every virus/Trojan/worm that gets discovered they have many more either in use or waiting to be unleashed.

    Trying to keep Microsoft’s Windows secure is a lost cause. If that hasn’t become clear by now, then I guess it never will. Businesses will continue to throw money at a problem that will only cost more as time passes.

    There are people who boast that their desktop has never been owned by malware. I could make that same boast *if I neglected to state which desktop I actually used.*

    While using the GNU/Linux desktop I’ve never suffered from malware. I cannot say the same for before 2005. That was when I was using Windows XP and had at least two viruses.

  4. M$ gives a heads-up the Thursday before Patch Tuesday. They list the general outlines of vulnerabilities to be patched. The malware artists can then plan an attack days in advance. When the update is released, they can pick the patch apart within hours and release malware the same day that will be effective over most of the globe. See Zero-day Attack
    That article is mostly about application vulnerabilities. The same phenomenon applies to the OS between the time patches are released and the patch is implemented. I have worked in places where it took days for M$’s patches to be universally installed.

  5. Notice how the popularity of that other OS drops suddenly at #39? We can speculate what that means. Note also that the average=maximum at around #35. The two regions of the list could well mean the difference between well-managed systems and those that were fired up and left alone. The list may only indicate that botnets don’t want to kill their victims but keep them alive to do work… Imagine how many “critical” updates those M$-units missed.

  6. kozmcrae says:

    “Enough already Kozmcrae and oldman and the others…

    Can’t we just discuss the technology? No one is interested in your ad hominem attacks. Go beat your heads on a wall privately if you want pain. Don’t do it here.”

    If Robert Pogson can live with it then I can too.

  7. Ted says:

    @Oiaohm

    “Yes lot of aircraft glass cockpits computers are running Linux.”

    Onus of proof and all that, Oiaohm…

    Links, please? To “lot[s]”?

    To the Linux glass cockpits that aren’t only allowed as BACKUP in certified aircraft that are not fly-by-wire for preference, too.

    http://www.linuxfordevices.com/c/a/News/Linux-powers-small-plane-glass-cockpit/

  8. Ted says:

    @Oiaohm

    “I am a person who does systems with 99.999 and 99.9999 uptimes.”

    Are these 99.999% and 99.9999% “systems” of yours clusters? A 99.9999% system is 31 seconds of downtime a year – less than the reboot of a physical box (most servers’ BIOS memory checks take longer than that), so I would doubt they’re single boxes.

    “One day I will attempt 7 and 8 nines of uptime.”

    This is just chest-beating, Oiaohm. You’re only trying to show off.

    99.99999% is three seconds a year.
    99.999999% is a third of a second.

    And to no good purpose; if you get 99.999999%, all you’ve done is fail to achieve 100%.

    A single box running 99.999% would be one reboot a year – it’s just over five minutes downtime. If using VMs, maybe two restarts a year. Perfectly acceptable, even admirable, in a lot of fields. Even 99.99% would be fine if the downtime is appropriately scheduled.
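
    For anyone checking the arithmetic, the downtime budget is just (100 − availability)% of a year; a quick, illustrative shell calculation:

    # seconds (and minutes) of downtime allowed per 365.25-day year at each level
    for a in 99.999 99.9999 99.99999 99.999999; do
        awk -v a=$a 'BEGIN { s = (100 - a) / 100 * 365.25 * 24 * 3600;
            printf "%-11s %9.2f seconds/year (%.2f minutes)\n", a "%", s, s / 60 }'
    done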

    As an aside; it’s funny how all the top systems for longest uptime run Windows 2003 Server with Linux sneaking in at number 39.

    http://uptime.netcraft.com/up/today/top.avg.html

  9. oldman says:

    “Can’t we just discuss the technology?”

    Gladly Robert Pogson. I don’t mind a discussion, and I accept the fact that there will be “lively dispute” between us, but Mr. K has made it his business to engage in personal attacks on me and others that have nothing to do with technology, and in the end I responded in kind because I will not put up with such crap.

    If you have any further issues, I suggest that you take it up with him.

  10. Phenom says:

    No, they are not. M$ often gives a “heads-up” several days ahead of Patch Tuesday, telling the world what’s coming.

    Care to show some proof? It is always charming how you can negate everyone else’s experience without any actual proof for your position.

    when that other OS does not even do package management

    Why should it? What exact package-management-specific features do you have in mind, which developers would desperately need?

  11. Enough already Kozmcrae and oldman and the others…

    Can’t we just discuss the technology? No one is interested in your ad hominem attacks. Go beat your heads on a wall privately if you want pain. Don’t do it here.

  12. oldman says:

    “Until then you are a lying rat bastard.”

    I don’t apologize to bigoted ignorant a$$holes, especially those who are not in a position to do anything other than post childish insults.

    I must say, however, that your assertion to having seen me in action has me intrigued. Which one of the collection of a$$holes that I have had to deal with in my career are you?

  13. kozmcrae says:

    “Really? Have you actually seen my desktop?”

    I’ve seen better. I’ve seen you in action @ldman. You are ethically challenged. I would expect you to lie.

    Show me you have some scruples (Hint: It’s very easy.) and I will gladly retract my remark. Until then you are a lying rat bastard.

  14. Ivan wrote, “Very few of the mirrors can afford that bandwidth, chief.”

    It takes very little bandwidth to synch a bit more often. Only the changed files need to be transferred, and the lists, of course.
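
    For illustration only (the mirror host and local path here are placeholders, and the real mirrors use Debian’s ftpsync scripts, which wrap rsync), an incremental sync moves only what changed since the last run:

    # pull only new and changed files from an upstream mirror into a local copy
    rsync -a --delete rsync://some.upstream.mirror/debian/ /srv/mirror/debian/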

    Ivan also wrote, “Debian Stable routinely has more bugs than Testing*, nice try though.”

    There is a difference in the number and quality of bugs. If you believe that software that has been in use for years by millions of users in every situation has more bugs than stuff a few weeks old, I have a bridge I can sell you. It’s cheap at only $20 million. Think of the revenue you could get from the tolls…

    Debian’s bug-tracking system reports bugs in many categories. The release-critical page tracks, for packages in both stable and testing, the bugs that stand in the way of a release: “the blue line graphs the number of bugs that are a concern for the current stable release”. Testing has far more packages than stable, so even though the counts may be similar, the probability of hitting a particular bug on a system is much lower with stable.

    # on a system running wheezy (testing)
    apt-cache search e | wc
    36129 295102 2231900
    # on a system running squeeze (stable)
    ssh system_with_squeeze "apt-cache search e | wc"
    28875 239778 1768563

    For stable:
    “Total number of release-critical bugs: 669
    Number that have a patch: 158
    Number that have a fix prepared and waiting to upload: 7”

    For testing:
    “Total number of release-critical bugs: 874
    Number that have a patch: 110
    Number that have a fix prepared and waiting to upload: 32”

    And the nature of the bugs? The testing bugs have a lot of show-stoppers like “fails to build from source”. The stable bugs have a lot of wish-list items like “wouldn’t it be great if…”.

    See LibreOffice in testing:
    ” libreoffice (debian/main).
    Maintainer: Debian LibreOffice Maintainers
    619263 [ MR ] [TU] libreoffice: terminate called after throwing an instance of ‘com::sun::star::uno::RuntimeException'”

    LibreOffice in stable:
    “openoffice.org-writer (debian/main).
    Maintainer: Debian LibreOffice Maintainers
    620137 [ M ] [STU] openoffice.org-writer: Saving in docx format destroys the entire file content”

    That bug was handled by suggesting use of libreoffice-backports and noting that
    “Writer shows a clear warning when I try to save in a different format: A window pops up informing me that (translated from German) “This document might contain formatting or content which cannot be saved in format X” So I don’t think that there will be a lot of unsuspecting users. “

    So, testing is much more likely to have actually broken stuff, but my wife and I use it with no problems for regular desktop and LAMP stuff. We have one system still on squeeze/stable because I am too lazy to apt-get dist-upgrade it. Really, are you concerned about less than one known bug per package when that other OS does not even do package management and ships product with hundreds of bugs? Debian gets its release-critical bug count down to near zero before releasing stable…

  15. Ivan says:

    If you believe that debian mirrors are synced four times a day, then you haven’t used debian as much as you’d have us believe. Very few of the mirrors can afford that bandwidth, chief.

    “Too many people get the bugs out of the testing flavour to cause that kind of breakage in the stable flavour.”

    Nonsense. Debian Stable routinely has more bugs than Testing*, nice try though.

    * http://bugs.debian.org/release-critical/

  16. oldman says:

    “I guess oldman doesn’t run Adobe stuff or Google stuff or M$ would have prevented their updates at one point, and he mustn’t have had the rootkit that caused XP to fail after an update, or paid for an upgrade in advance… or used an AMD CPU and got the infinite reboot loop.”

    Wow, that’s quite a litany! Too bad none of this has affected me. I don’t run crap-terons, Adobe has auto-updated for quite some time, and most importantly I don’t run unprotected, un-patched and/or back-level versions of ANY OS! Couple this with strategic use of virtual machines (I’ve been using VMWare Workstation since 1999!) and I CAN say that I am free of any of this.

  17. oldman clearly has the Luck of the Irish. Perhaps we should chip in and get him to buy us a ticket for the next big lottery… 😉

    I guess oldman doesn’t run Adobe stuff or Google stuff or M$ would have prevented their updates at one point, and he mustn’t have had the rootkit that caused XP to fail after an update, or paid for an upgrade in advance… or used an AMD CPU and got the infinite reboot loop.

    No, in the real world there are many ways M$’s updates have messed with people and one has to be extremely lucky or restrictive in IT not to have encountered the problems. In my own case, I had 100 PCs and 7 servers and every Patch Tuesday was a nightmare of skipped lunches and long hours. I was always having to wait until other teachers went to lunch to do updates, and the list of notices I had to read in the process grew longer and longer until I too enabled automatic updates and hoped for the best. I was told to change nothing on the system but I changed that just to keep my sanity. I still had to hold the hands of bunches of systems but I could skip the reading part…

  18. oldman says:

    “Okay @ldman, you have shown your true colors once too often. You are a lying rat bastard. You’ve “borked” your desktop more than once doing your updates. You’ve also waited just like Robert said many people do.”

    Really? Have you actually seen my desktop?

    No, Mr. K. I have spoken the truth and nothing you say will change the fact that it IS the truth. The fact that you do not like it is not my problem, it’s YOURS.

    As I said, you are a bigoted ignorant a$$hole, and you just proved it in spades.

  19. Breakage does occur but it is extremely rare in a stable release of Debian GNU/Linux. Too many people get the bugs out of the testing flavour to cause that kind of breakage in the stable flavour. The stable release gets no new bug-ridden features, just security updates.

    see Getting and installing Debian GNU/Linux

  20. One can probably set the automatic updates for a few days after the release in order to intervene in case of disasters such as we have seen every year or so from M$. In places where I worked it was “check in the wee hours”. That mostly worked but I had more than a 5% rate of failures at one place. Perhaps that was not “borked” but it borked me. I had better things to do with my life than to check up on M$.

  21. Kozmcrae says:

    @ldman said:

    “I have been allowing automatic updates for many years now without borking any of my desktops.”

    Okay @ldman, you have shown your true colors once too often. You are a lying rat bastard. You’ve “borked” your desktop more than once doing your updates. You’ve also waited just like Robert said many people do.

    You have no ethics @ldman. You should have owned your BS remarks. Too late.

  22. oldman says:

    “They could not do that if the critical updates were not ready. Therefore they are waiting days after an update is ready before releasing it.”

    So when Robert Pogson speculates on the inner workings of microsoft, we have to take it as gospel. When we relate our actual experiences, they are dismissed as trolling.

    I have been allowing automatic updates for many years now without borking any of my desktops. No malware boogie man has borked my systems.

    In comparison I have seen all too many Linux desktops broken in some shape or form after CRapt-get update; CRapt-get upgrade has been run.

  23. Phenom wrote, “Critical (security) updates are rolled out as soon as they are ready.”

    No, they are not. M$ often gives a “heads-up” several days ahead of Patch Tuesday, telling the world what’s coming. They could not do that if the critical updates were not ready. Therefore they are waiting days after an update is ready before releasing it. When they do release it, many systems take a day to install it because they want to see what it breaks before spreading it around. Malware artists love M$.

  24. Phenom says:

    Ops, clicked “Post Comment” a bit too early.

    By default, Windows downloads and installs critical updates as soon as they become available. Only the rest get scheduled. This has been the case since XP SP2. You can clearly see that malware artists can’t sync with anything.

    Do not put your trust in Ohio. I know it is tempting for you to take his blabbering as a paragon of technical excellence, but, alas, it is just blabbering. He fails to know so many basic things that he can’t know anything sophisticated.

  25. Phenom says:

    “That other OS does things on a schedule so the malware artists can synch to it.”

    Hm, well, no. Critical (security) updates are rolled out as soon as they are ready.

  26. Clarence Moon says:

    Sorry Clarence Moon the idea that you can mark out a time when windows updates will be installed and never disrupt your work is a fairly tail.

    Ah, Mr. Oiaohm! It is a sorry life you lead! You know nothing of the law and nothing of the Australian Marines and nothing of business and now you demonstrate you know nothing about scheduling automatic updates for Windows for a convenient time. To improve your knowledge of this topic, please see this brief tutorial.

    It is a shame that there is no such easy primer on how to spell English words and how to punctuate a simple sentence. The only suggestion I have is if you were to use MS Office to form and proof your drivel, you would be well on your way to being able to better present yourself.

  27. kozmcrae says:

    Yonah sounding a bit like @ldman said:

    “Oh, really? You, sir, with your insults, childish mockery, and constant slander towards others here paints a positive picture of Linux advocates? Your behavior is that of a desirable person? Desirable to whom?”

    Yes, really. I have taken the low road to ensure I have no limitations in describing the Cult of Microsoft. I will not deny that. Try to get any of the Cult of Microsoft to own any of their transgressions. Forget it. It’s a hopeless task. They lie and slander and never own up to any of it.

    Me? I’ll call them on anything out of bounds, like Hanson and his “freetard” remark. That was unforgivable. And recently @ldman and aardvark dared Robert to set up a web page loaded with malware. Neither of them apologized for asking Robert to basically break the law. They have proven to me to have no ethics. That’s not slander. That’s calling a spade a spade.

    You can call it anything you want. If you throw yourself in with their lot you’re no better than they are, in my opinion.

  28. Ivan wrote, “as most debian mirrors sync once a day, you are in the same boat, “ and other innuendos.

    Security updates are from a single source. That other OS does things on a schedule so the malware artists can synch to it. Updates in Debian go out when they are ready and the ftpmaster pushes them.

    “Debian mirrors can be primary and secondary. The definitions are as follows:

    A primary mirror site has good bandwidth, is available 24 hours a day, and has an easy to remember name of the form ftp . country . debian . org. They are all automatically updated whenever there are updates to the Debian archive.

    A secondary mirror site may have restrictions on what they mirror (due to space restrictions). Just because a site is secondary doesn’t necessarily mean it’ll be any slower or less up to date than a primary site.”

    see http://www.debian.org/mirror/list

  29. Ivan says:

    This apple:

    “10s and the system is checked.”

    is not like this orange:

    “Compare that with 24h or so with autoupdates in that other OS.”

    You’re basically comparing a daily cron job to how long that cron job takes to complete once triggered.

    Also, as most debian mirrors sync once a day, you are in the same boat, chief. Assuming, of course, the mirror you are using is legitimate and isn’t using someone’s weak PGP key to distribute keyloggers and rootkits…

  30. date;apt-get update;apt-get upgrade;date
    Wed Apr 4 12:06:16 CDT 2012
    Get:1 http://dl.google.com stable Release.gpg [198 B]
    Get:2 http://security.debian.org squeeze/updates Release.gpg [836 B]
    Hit http://debian.yorku.ca squeeze Release.gpg
    ...
    Get:10 http://debian.yorku.ca squeeze-updates/contrib i386 Packages [14 B]
    Get:11 http://debian.yorku.ca squeeze-updates/non-free i386 Packages [14 B]
    Get:12 http://debian.yorku.ca squeeze-updates/main i386 Packages [15.1 kB]
    Fetched 510 kB in 5s (87.3 kB/s)
    Reading package lists... Done
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Wed Apr 4 12:06:26 CDT 2012

    10s and the system is checked. Compare that with 24h or so with autoupdates in that other OS. That is, I can do the updates whenever I want. Because APT handles my OS, my services and my applications, I am way ahead of checking ISVs’ websites for updates.
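
    If one does want unattended updates, a couple of lines of APT configuration do the scheduling (a minimal sketch; Debian ships this mechanism via the unattended-upgrades package and the exact file name varies by release):

    # /etc/apt/apt.conf.d/20auto-upgrades (illustrative)
    APT::Periodic::Update-Package-Lists "1";   # refresh the package lists daily
    APT::Periodic::Unattended-Upgrade "1";     # apply pending updates daily via unattended-upgrades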

  31. Phenom says:

    Apart from the usual confusion about Windows Update, and the lightheaded self-degradation of following Ohio, you fall into another dissonance, Mr. Pogson:

    The fact that M$ releases updates when convenient in its timezone and messes with the rest of the world’s workday
    vs.
    Who wants to wait until xAM for an update

    You can’t have it both ways, Pogson, and that is obvious. You either schedule an update to your preference, or you must get it asap to prevent attacks. You need to choose one of these.

    And APT package management is not helpful if you need to update quickly to the latest patches.

  32. oiaohm wrote, “Yes, Windows update is a dumb bit of work. The start time is when it starts attempting to get the updates. Microsoft was very incompetent in the design of Windows update, not providing a point past which, if crossed, it aborts to prevent usage disruption.”

    That certainly was my experience. The automatic updates are either intrusive or unreliable. Those are not characteristics I want in IT. The fact that M$ releases updates when convenient in its timezone and messes with the rest of the world’s workday is icing on a rotten cake. Who wants to wait until xAM for an update when the world’s malware artists have many hours of a headstart in messing with your IT system? Who wants to re-re-reboot in the middle of the workday? No one. Use FLOSS. It works for you and not M$, M$’s partners and malware. The advantages of Debian’s APT package management system alone are sufficient reason to migrate to Debian GNU/Linux.

  33. oiaohm says:

    Clarence Moon
    “Part of the way Windows works is to set automatic updates for some time and day when the machine is not likely to be in use and when I am fast asleep.”
    So you admit to lying through your teeth. It is possible, using distributions supporting ksplice, to do proper 24/7 running. Same with particular commercial Unix systems. With Windows, even paying for the highest support you can get from Microsoft, 24/7 running is not a true option.

    So please refrain in future from saying a Windows machine is running 24/7; it’s not. That is attributing an attribute Windows does not support. A claim of 99.99% uptime a month, maybe.

    Sorry Clarence Moon, I am not just a Microsoft VAR. I am a person who does systems with 99.999 and 99.9999 uptimes. So I don’t take kindly to fake crap about Windows running 24/7 when that is not the reality. One day I will attempt 7 and 8 nines of uptime.

    “Part of the way Windows works is to set automatic updates for some time and day when the machine is not likely to be in use and when I am fast asleep. I use 2AM Sunday myself.”
    This is still a time when services are disrupted.

    You are deceiving yourself, Clarence Moon, because Microsoft tells you that this works. In reality it doesn’t; it will come and bite you sooner or later.

    Also, your following statement shows how little you know, Clarence Moon.
    “If you do not pick a date but turn on autoupdate anyway, it will update sometime after midnight on Tuesdays, if I remember correctly.”

    In fact it picks a random day, then a random time between midnight and 5 am. This is what you call particular fun for someone working night-shift. The reason, in theory, is to stop the network from being overwhelmed downloading Windows updates if all the default settings were the same; think of 200-plus machines downloading Windows updates at the same time. That is not going to turn out too well. It is also particular fun if it picked 5 am, work starts at 7 am, and it had a little trouble downloading the update, so now it starts installing the update after 7 am.

    Yes, Windows update is a dumb bit of work. The start time is when it starts attempting to get the updates. Microsoft was very incompetent in the design of Windows update, not providing a point past which, if crossed, it aborts to prevent usage disruption.

    So even if you update at 2 am, if things go the wrong way it can still cause your computer to reboot at 11:55 am the next day if your network has the right form of traffic disruption. The record I have seen is 6 days overdue. Yes, Windows update, forced by intentional disruption of the update downloads, ran in the background for 6 days before triggering the machine to reboot. That is not the worst case. 7 days gets funny.

    When it attempts to start Windows update again as the SYSTEM user, it locks the complete Windows update process up so the machine will not update any more. I do mean any more; it’s a reinstall to fix. If you attempt to trigger this manually it will not happen. Yes, they somehow missed the safety check on the automatic trigger.

    This is the problem with incompetent people saying something to a truly trained person: they have no clue how bad the issues in Windows really are.

    apt-get under Debian is not the best; it lacks automatic termination after a set time. But you cannot run two of them over each other, stuffing them up.

    Sorry Clarence Moon the idea that you can mark out a time when windows updates will be installed and never disrupt your work is a fairly tail. Reality is a sod so nothing will go to plan all the time get use to it.

    You got this backwards, Clarence Moon. “Incredibly unaware of Windows operation”? You are the person who is incredibly unaware of Windows operations, to the point that you say things that are completely moronic to people who are aware.

  34. Clarence Moon says:

    “Your behavior …”

    Wouldn’t that be the way that Microsoft might go about disparaging Linux and FOSS in general by making its advocates appear shrill and boorish? Koz may just be some MS guy having fun.

  35. Yonah says:

    Kozmcrae: “They try to paint anyone who uses GNU/Linux or is an advocate of it as an undesirable person.”

    Oh, really? You, sir, with your insults, childish mockery, and constant slander towards others here paints a positive picture of Linux advocates? Your behavior is that of a desirable person? Desirable to whom?

  36. Kozmcrae says:

    “The time saved in a school is inconsequential anyway.”

    Clarence and his Ego strike again!

  37. Clarence Moon says:

    “So when does it kernel update. Its not on 24/7 at all.”

    For someone who occasionally claims to be a Microsoft VAR, you are incredibly unaware of Windows operations, Mr. Oiaohm. Is that another of your fantasies? Part of the way Windows works is to set automatic updates for some time and day when the machine is not likely to be in use and when I am fast asleep. I use 2AM Sunday myself. If you do not pick a date but turn on autoupdate anyway, it will update sometime after midnight on Tuesdays, if I remember correctly.

    It restores most running programs to their pre-update restart conditions.

    In any case, that is not a very frequent condition and hardly a saving element for switching to a terminal mode architecture.

    “Schools have rules…”

    Well, most businesses and personal users do not, Mr. Pogson, and that is hardly a good reason to take a chance on such a shaky proposition. The time saved in a school is inconsequential anyway.

  38. Kozmcrae says:

    Mustard is yummy said:

    “So of course MS tried to stomp Linux since 1998 or so, especially since its community was hell-bent to destroy commercial software from the very start and appeared lunatic as hell.”

    There is one more attribute about their attacks on GNU/Linux. They try to paint anyone who uses GNU/Linux or is an advocate of it as an undesirable person. They try to make them out to be an outcast of society. To remove them from humanity.

    Mustard is yummy is the kind of person who advocates for Microsoft. The kind of person who attempts to make Microsoft look better by dehumanizing people who use competing products. This is an old and very nasty tactic used to keep despotic regimes in power. To use such a tactic sends the message that his basic argument is weak. Dehumanizing the opposition is an attempt at bolstering a position based on flimsy ground.

  39. Mustard is yummy wrote, “Linux wasn’t even on Microsoft’s radar in 1995.”

    Well, UNIX operating systems were…
    “One note about Unix since most web pages are designed on Unix boxes,and probably all good looking pages are, having a Unix client available is critical for gaining acceptance of any one ‘interpretation’ of web protocols” see http://www.justice.gov/atr/cases/exhibits/23.pdf

    In 1995, a lot of computer geeks working on the web were using GNU/Linux boxes and to M$ they were just another UNIX OS.

  40. Linux is not the only FLOSS kernel. There’s FreeBSD and others.

    Google does not care much what licence is used for an app:
    “5.4 You grant to the user a non-exclusive, worldwide, and perpetual license to perform, display, and use the Product on the Device. If you choose, you may include a separate end user license agreement (EULA) in your Product that will govern the user’s rights to the Product in lieu of the previous sentence.”

    So, you can have a EULA that tells the end-user where to go to get the code including a GPL licence for it.

  41. Mustard is yummy says:

    Also, according to Pogson and the like, it is even righteous and the “right thing” if Linux stomps all competition (including even open-source competition like the BSDs; many in the Linux community aren’t exactly fond of them, and rms wages holy war against all licenses except the GPL), yet if others try the same with Linux it’s somehow morally wrong. Attacking Linux is kicking a puppy or something.

    Well, Linux isn’t holy, and of course it will get attacked by competition given the chance.

    That’s why you shouldn’t dedicate your life to something like an operating system.

  42. Mustard is yummy says:

    “Anyway, you guys are taking it way too personally. If DuckDuckGo would gain any traction, they would do everything to stop it too”

    They=Google.

  43. Mustard is yummy says:

    “Mustard is yummy gets caught in a lie on the Android license.”

    Ho hum:
    http://source.android.com/source/licenses.html

    “The preferred license for the Android Open Source Project is the Apache Software License, 2.0 (“Apache 2.0″), and the majority of the Android software is licensed with Apache 2.0″

    Why Apache Software License?
    We are sometimes asked why Apache Software License 2.0 is the preferred license for Android. For userspace (that is, non-kernel) software, we do in fact prefer ASL2.0 (and similar licenses like BSD, MIT, etc.) over other licenses such as LGPL.

    Android is about freedom and choice. The purpose of Android is promote openness in the mobile world, but we don’t believe it’s possible to predict or dictate all the uses to which people will want to put our software. So, while we encourage everyone to make devices that are open and modifiable, we don’t believe it is our place to force them to do so.”

    Sure, the kernel is still GPL, they can’t change that.

    “Microsoft has been trying its best to stomp out Linux since 1995.”

    Linux wasn’t even on Microsoft’s radar in 1995.

    1998, yes, but not 1995.

    Anyway, you guys are taking it way too personally. If DuckDuckGo would gain any traction, they would do everything to stop it too, just like they are doing with G+ to try to stomp Facebook. So of course MS tried to stomp Linux since 1998 or so, especially since its community was hell-bent to destroy commercial software from the very start and appeared lunatic as hell.

    As you may have noticed, Microsoft’s “facts” campaigns etc. stopped once they noticed how inept Linux was at conquering the desktop and that the rave talking was just that, talk.

  44. Ivan says:

    Android has been great to prove to a lot of commercials that Microsoft lead was basically making stuff up with the cancer remark.

    Then you should explain why Google has a “no gpl in user-space” policy for Android.

  45. oiaohm says:

    oldman, part of the Cult of Microsoft comes out of the use of Wolfram Research automation to hunt down and spam anyone speaking counter to what Microsoft wanted, which they did for many years.

    So: a pack of computer-based automated twits annoying people.

    Basically, don’t send a computer to do a human’s job; you only get backlash.

  46. oldman says:

    “That’s why I call them the Cult of Microsoft. The Cult of Microsoft is a name, their actions define the name. So there it is CoM. If you don’t like the name, change what it means.”

    The name is bushwah, as are you Mr. K.

  47. Kozmcrae says:

    Mustard is yummy gets caught in a lie on the Android license. I would expect him/her/it to lie about anything else.

    Mustard is yummy said:

    “Also, please, let’s not forget the countless of hate posts in Linux newsgroup loooong before Ballmer said the evil words.”

    Microsoft has been trying its best to stomp out Linux since 1995. Any “hate” towards Microsoft from the Linux “community” is reactionary. The Cult of Microsoft comes to this blog, hate in hand. There’s no priming them. I have no love for any of them. They have but one message, FUD for GNU/Linux and no matter how hard they try to hide it, or lie about it, it’s plain to see.

    There is one more attribute about their attacks on GNU/Linux. They try to paint anyone who uses GNU/Linux or is an advocate of it as an undesirable person. They try to make them out to be an outcast of society. To remove them from humanity.

    That’s why I call them the Cult of Microsoft. The Cult of Microsoft is a name, their actions define the name. So there it is CoM. If you don’t like the name, change what it means.

  48. Phenom says:

    Pogson, another comment caught by your infamous spam filter. Please revive it.

  49. Phenom says:

    Pogson, let me give you a very simple, real-life example of how GPL 3 does not work for a commercial ISV.

    Software company X is developing a site for a local bus transport company. The site features a route planner and an e-shop for tickets. Purchased tickets are printed as PDF files. Due to the specific layout of the bus company’s routes (a star-shaped sparse graph), the algorithm to produce the best meaningful routes between two points is not trivial. No, Hoffman’s classic does not do the job in that case; it produces routes no sensible human would accept, thanks for asking. A solution is found, and the developers now look for a library to create PDFs. They stumble upon a fine, FOSS, GPL3-protected library. GPL3 would require that the developer expose all the code of its software, including the precious unique algorithm for k-shortest paths, something they are rather reluctant to do because they want to sell it to other bus companies in the region.

    Result – the company purchased a commercial PDF-generating library for $300, and forgot about FOSS.

  50. oiaohm says:

    Android has been great to prove to a lot of commercials that Microsoft lead was basically making stuff up with the cancer remark.

    Because if GPLv2 and GPLv3 spread in a non-predictable way, Android would not exist.

  51. Mustard is yummy wrote, “commercial entities aren’t too keen on GPL, that’s why Android, the only truly successful Linux based OS in the end user commercial market, is using the Apache license.”

    Not true. Google, Samsung, HTC, SONY, etc., purveyors of Android/Linux, all ship GPL code in the Linux part of Android/Linux. Google chose another licence for the Android part but they still ship Linux under GPLv2 because Linux is not Google’s property and they cannot change the licence. They’ve all released source code out of respect for the GPL, too.

    GNU/Linux largely uses the GPL (other licences too) and is a very successful OS in the commercial market. Many millions of users, most OEMs, and I say so.

    Samsung on Linux: “Samsung has been putting much effort on delivering optimized software for ARM Linux-based developers as a member of Linaro since its launching in 2010,” said Youngki Chung, vice president of Software Solution Development Team, System LSI Division at Samsung Electronics. “We are pleased with Linaro’s achievements of consolidated software and environments and believe that our customers and the open source community will experience the benefits of acceleration in designing their products through the innovative Exynos platform.”

    IBM on Linux: “IBM is consistently among the top commercial contributors of Linux code, with more than 600 IBM developers involved in over 100 open source projects and thousands of dedicated development and support personnel supporting all of IBM’s products and customers on Linux.”

    So, Mustard is yummy is wrong again. Commercial entities love GNU/Linux and the GPL. It gives them flexibility, efficient re-use of code, etc., all things that affect the bottom line positively. Linux and the GPL go together well. So does commercial use of them.

  52. Mustard is yummy says:

    “I think he nailed it. He could have left out the word “heroin” and it would have been spot on. But as it is it’s nothing like the “cancer” remark made by Steve Ballmer. That remark was meant to scare people away from GNU/Linux. Young’s “heroin” remark is just an unfortunate analogy. Although it is a fitting analogy.”

    Well, Ballmer’s remark was an analogy too. “GPL infects everything it touches”. Commercial entities aren’t too keen on GPL, that’s why Android, the only truly successful Linux based OS in the end user commercial market, is using the Apache license.

    Both cancer and heroin are pretty heavy analogies, yet only Ballmer gets all the blame.

    Also, please, let’s not forget the countless of hate posts in Linux newsgroup loooong before Ballmer said the evil words.

  53. oiaohm says:

    Robert Pogson, if you have access to aircraft-grade thin clients, it’s 1 second to power on and boot. It’s kind of a case of: someone walks away from the desk, you turn the client off. The screen will take longer to come alive than the box.

    Yes, they are 1 second from application of power to login screen displayed. Basically, leaving thin clients on is directly linked to how crappy they are.

    I have used aircraft-grade thin clients in a location with solar power only.

    http://www.embedded-bits.co.uk/2011/1-second-linux-boot-to-qt/

    Yes, there are particular companies that specialise in rapid-start Linux thin clients. Surprisingly, for the fast start time they don’t eat much power.

    If you are working anywhere particularly short of power, Robert Pogson, drop by aircraft repair centres. They have some impressive hardware for rapid start and low power consumption. If the aircraft engines fail, computer-based equipment for navigation has to run on battery for an insanely long time to be certified to be fitted in a glass cockpit. The required time for running from battery is 72 hours. You should have fallen out of the sky from lack of fuel a long time before the glass cockpit system stops.

    For those that don’t know, a glass cockpit is a cockpit where no real dials or gauges exist. Instead, all the information about what is going on in the aircraft is displayed on LCD screens (6-15). Of course, worse is the fact that a glass cockpit might also be a full fly-by-wire cockpit, so failure equals death if systems take something like 20 seconds to restart. The allowed start time, from cold or hot, in a glass cockpit is 1 second, since it is believed that in 1 second the aircraft will not get far enough out of control that it cannot be recovered. With fly-by-wire you kind of want the controls back in under 1 second, and the screens to know where you are going.

    Yes lot of aircraft glass cockpits computers are running Linux. So lot of people trust there life to Linux daily just don’t know it.

    With work, the servers in some of the glass systems can restart in under a second as well. Current desktop computers are snails.

  54. Schools have rules about turning off equipment, not just hibernating it. There are issues of fire and power consumption as well. I have worked in places where power was $1+ per kWh and it makes no sense to leave anything running. There are places where servers should be shut down, too. I would normally leave a thin client running and it takes only seconds to log in.

  55. oe says:

    Suspend (reliably) on a thin client, complete with open applications and Xorg environment, and resume FROM ANY OTHER THIN CLIENT on the Linux VLAN was a killer feature to me. Resume work from the day before in 30 seconds, and that’s mainly me settling into the chair… that other OS, don’t make me laugh; by policy we have to reboot our thick clients at the end of the day for all the patches and because the help desk states “they go unstable”. Linux thin client uptime I’ve experienced, weeks on end.
    BTW Linux can scale incredibly. At the old work site we’d submit jobs to 64-node clusters to work CFD problems; meanwhile at home I ran a torrentbox, NAS fileserver, apt-cacher, and mailserver on an underclocked P60-class thin client (underclocked to bring power consumption down to 15-20 watts)… good luck doing that with commercial OSes. Meanwhile, did I state the apps and the Linux desktop are awesome…

  56. oiaohm says:

    oldman, booting in 20 seconds vs. booting in 1 second for aircraft-grade Linux terminals on solid state: 20-second boots are nothing to talk about.

    Besides, most of the performance can be had with Linux’s means of caching commonly used files to solid-state drives or RAM drives. Yes, items like bcache. RAM drives are still faster than flash-based SSD drives. The downside, of course, is that if the RAM drive’s battery fails they lose their data through reboots.

    Still, booting in 20 seconds does not help when applications still lag on start-up compared to a server.

    Clarence Moon
    “Doesn’t Linux have sleep and hibernate modes? What is with all this starting from cold boot?”

    If you are asking this question you are a security idiot. Linux and Windows both have those modes. Neither allows the kernel to be replaced and updated. A cold boot is required to apply kernel upgrades.

    Clarence Moon
    “My desktop workstation at home and at the office is on 24/7 and so is my wife’s at home.”

    So when does it kernel update. Its not on 24/7 at all.

    The big reason why I hate desktop machines is that when you have a large number of them they can drift out of sync with each other. Diskless remote boot with Linux using local caching I find really nice, since updating the server updates everything the next time a machine is booted.
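
    The server side of such a diskless setup can be sketched in a few dnsmasq lines (a rough illustration only; the addresses, paths and boot file here are placeholders, and real deployments such as LTSP layer an NFS or NBD root filesystem on top):

    # /etc/dnsmasq.conf (illustrative) - hand out addresses and PXE boot files to diskless clients
    dhcp-range=192.168.0.50,192.168.0.150,12h
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/srv/tftp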

    “Mustard is yummy”: really, all CEOs have said stupid things.

  57. Clarence Moon says:

    Doesn’t Linux have sleep and hibernate modes? What is with all this starting from cold boot? My desktop workstation at home and at the office is on 24/7 and so is my wife’s at home. She likes the picture show on the screen saver and doesn’t even blank the screen. So all she has to do is wiggle the mouse and she is right where she left off the last time she used it.

    I hibernate my laptop and it is running 10 seconds after I poke the power button and is showing the last screen I was using when I put it to bed. With the workstation, it is just wiggle the mouse and I am back where I was, same as with my wife’s computer.

    If you don’t have these modes with Linux, it would seem to me that Linux is a kind of stupid thing to have on your computer and if you do, you should just use them and quit worrying about all this other foolishness.

  58. kozmcrae says:

    From Mustard is yummy’s link:

    “It’s noteworthy that the leaders in the Linux world are never held accountable for their words.”

    It’s also noteworthy that the words you wish to hold the “Linux world” accountable for are not given.

    Who is doing the accounting? And look, there’s that word “never”. Where was Steve Ballmer “held accountable” for his “cancer” remark?

    I noticed too, one comment left by Dr Loser. Now where have I seen that nym before?

    You must be a PoopOn Mustard. You’ll have to try harder if you want to bring GNU/Linux down to the level of proprietary software.

    The quote in question that Red Hat CEO Robert Young made:

    “Microsoft has this huge revenue stream based on their heroin addiction to selling royalty-based software. Their customers are forced to send money to Microsoft for every machine they install”.

    I think he nailed it. He could have left out the word “heroin” and it would have been spot on. But as it is it’s nothing like the “cancer” remark made by Steve Ballmer. That remark was meant to scare people away from GNU/Linux. Young’s “heroin” remark is just an unfortunate analogy. Although it is a fitting analogy.

  59. M$ organized a global programme to suppress thin clients:
    “Continue worldwide efforts to prevent the NC from gaining any critical mass. This work is all about keeping Sun. Oracle and IBM from dominating the airwaves with NC Java FUD. We will concentrate on transitioning this focus from the product group to support organizations including CATM. US field offices and worldwide subsidiaries.”

    The NC they were worried about were thin clients running a minimal OS and Java applets from servers. The thing M$ hated about thin clients was that there was no hard drive for a copy of their OS and the associated licensing fee. Citrix was their buddy. Terminal services were not pushed by M$ until the recent VDI movement. Citrix pushed the idea all along and M$ was years behind.

  60. Ted says:

    “Dave runs one box,”

    Read the article.

    This box is ONE front-end server to other terminal servers. The thing about front-ends is there’s usually more than one; high availability, fail-over and all that.

  61. Ivan says:

    You may not need it, but who is willing to accept your anecdotal evidence that a Windows Desktop PC

    “My users ran XP which tried to fill the cache when they logged in making the desktop unusable for 2 minutes or so.”

    coming from a cold boot is comparable to a server

    “In GNU/Linux, they could get a usable desktop in 5s and open OpenOffice.org in 2s using 8 year old client PCs from a 6 year old GNU/Linux terminal server. The server as it was had just 1gB of RAM or we probably could have made it faster.”

    which is never shut off and therefore has already cached every program the user opens? This apple is not like that orange.

  62. Mustard is yummy says:

    “M$ spent years discouraging people from using thin clients and GNU/Linux calling them “dumb terminals” and “cancer”, but now everyone’s doing it because it’s a better way to do IT.”

    “M$” might have discouraged Linux terminals (that’s their job), but they didn’t discourage terminals as such. Ever heard of Microsoft’s Terminal Server?

    And about that cancer remark:

    http://penguinday.wordpress.com/2010/08/11/historical-ancient-heroin/

    http://penguinday.wordpress.com/2010/08/10/archeology/

    Ballmer’s remark was a pretty mild response compared to all the crap Linuxers came up with years before.

  63. Ivan wrote, “Why not provide something other than anecdotal evidence to show that caching is ineffective, as you put it?”

    I don’t need evidence, just reason plus COTS information. The typical hard drive takes a few milliseconds to seek and a few milliseconds to rotate to access each file. Filling the cache with the files a user needs right now takes many seconds on each PC. It could be 100+ files that the user is accessing to log in or to open an application. A lot of users have 100 processes running these days. Getting the files from RAM saves many seconds per user after the first one accessed those files three days ago. The result for the user is dramatic. My users ran XP which tried to fill the cache when they logged in making the desktop unusable for 2 minutes or so. In GNU/Linux, they could get a usable desktop in 5s and open OpenOffice.org in 2s using 8 year old client PCs from a 6 year old GNU/Linux terminal server. The server as it was had just 1gB of RAM or we probably could have made it faster. It had four SCSI drives in RAID 1 to help users open their files, which were unlikely to be cached. That server cost about $5K when it was new but was a piece of junk by normal standards, with a single 32-bit core. It had room for a second CPU but I had no thermal compound available to install it. I had two servers and doubled up RAM and storage to make one decent one. Putting new hardware in for a GNU/Linux terminal server just increases the advantages.
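
    Anyone who wants to see the effect on a GNU/Linux box can time a cold read against a warm one (a simple illustration; run as root to be able to drop the page cache, and the file path is just whatever large file is handy):

    # cold read: flush the page cache, then read a large file from disk
    sync && echo 3 > /proc/sys/vm/drop_caches
    time cat /path/to/some_big_file > /dev/null
    # warm read: the same file now comes straight out of RAM
    time cat /path/to/some_big_file > /dev/null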

  64. Price/performance of SSD is still not that good. People want small cheap computers. Even with the rise in HD prices, SSD is more expensive. Eventually Moore’s Law and SSD will overtake hard drives but that’s a year or two down the road. I agree, if you want decent performance of storage in a PC, an array of hard drives and an SSD cache makes a lot of sense, but an array plus a lot of RAM is available now at good prices. The COTS PC still ships with one hard drive, which is a bottleneck. A GNU/Linux terminal server with most of the files needed to log in and open applications in RAM is far superior to any PC with a single hard drive.

    500 gB hard drive current price = $90 and it will likely subside to $50 when the factories are rebuilt.

    500 gB SSD > $1K
    caches can be smaller…

    One can also put SSD on GNU/Linux terminal servers but I have never seen one. For the same money one can buy a hell of a good RAID array.

  65. dougman says:

    $31K is expensive, but when you’re only spending $124/person it’s cheap considering the alternatives.

    I know of one place where they spent ~$35K for 50 users; that works out to $700/person, then you have to add in all the CALs on top of that, and they run four terminal servers. Dave runs one box, Linux is incredible and awesome.

    I quoted the above place around 1/4 of the cost using Linux; they thought it sounded too good to be true and decided to spend money needlessly.

  66. oldman says:

    “device seeks hither and yon all over HD, fetching files, and”

    User runs on solid state disk on modern desktop, boot is over in 20 seconds.

    user is not crippled by the limitations of a modernized dumb terminal environment.

  67. Ivan says:

    “Put most of the RAM on those clients on a server with all those files cached and everyone has a usable desktop in seconds.”

    Except that the computers in your hypothetical lab are using everything from PC133 to DDR2 RAM, so this idea of a Frankenputer is pretty much dead in the water.

    “device seeks hither and yon all over HD, fetching files, and eventually user has the files he needed in RAM.”

    So you’re trying to confuse your opinion of desktop behavior with evidence that caching is ineffective? Why not provide something other than anecdotal evidence to show that caching is ineffective, as you put it?

  68. Thick client:

    1. user turns on device,
    2. device seeks hither and yon all over HD, fetching files, and
    3. eventually user has the files he needed in RAM.

    Thin client:

    1. user turns on device,
    2. device boots from server with cached files or local SSD
    3. user logs in or opens an application in 1/5 the time because no seeking is required, and gets things done while the user of a thick client is still drinking coffee.

  69. Phenom says:

    “Caching is very ineffective on thick clients.”

    Please enlighten me, Pogson. How did you arrive at this paragon of computer-architectural wisdom?

  70. Which is more efficient, one user per few gB of RAM or hundreds of users in 64gB? Do the maths.

    Consider a school’s lab full of thick clients running that other OS. Students boot up and spend a couple of minutes seeking files on each hard drive. That seeking gets done 24 times for no benefit at all. Put most of the RAM on those clients on a server with all those files cached and everyone has a usable desktop in seconds. Caching is very ineffective on thick clients. On a server, one can have a bunch of hard drives and seek multiple files simultaneously, speeding up any seeking for users’ files, something really useful. I’ve seen it many times: converting thick clients to thin and adding resources to a server is a better way to do IT for almost any task.
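
    The maths is short enough to do in one line, using the figures from the post:

    # 64GB of server RAM shared among ~250 live users
    awk 'BEGIN { printf "%.0f MB per user\n", 64 * 1024 / 250 }'   # ~262 MB per user
    # versus a thick client with a few GB all to itself (the comparison above)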

  71. Ivan says:

    “Users of that other OS get bogged down regularly on a normal PC with 50-100 tasks running.”

    “That server has 64gB of RAM”

    Do you ever find yourself questioning whether your comparisons are valid or consider providing anything more than anecdotal evidence to support your claims?

    You should, especially in light of the earth-shattering claims, made by Phoronix, that Windows 7 consistently outperforms Linux.

  72. kozmcrae says:

    Microsoft can’t stop other cities from knowing of Largo’s experience. And Largo is not alone; there are others. No matter how tied to Microsoft a city’s bureaucracy might be, money talks. And spending money just to get beaten up by malware and then abused by Microsoft’s licensing practices adds insult to injury.

    The excuses for using Microsoft’s overpriced, security-challenged software are getting flimsier to the point of absurdity. It’s common knowledge that you need more people to service Microsoft installations than Linux ones.
