Triumph Of Thin Clients

Not only have thin clients triumphed here (except for the audio problem…) but they’ve triumphed in the offices of UK housing authorities. They are probably using that other OS but the concept makes a lot of practical sense with any OS. TFA, quoted below, mentions that only 28% of users of PCs there require video, the Achilles heel of thin clients:

“Strength of feeling and clarity on the benefits of a thin and zero client technology was very apparent with 86% of IT Managers believing ease of use and management was the primary benefit of a thin client infrastructure, followed by energy efficiency (82%) and flexibility (78%). Respondents also cited better cost structure (73%), longer life span (71%) and more secure company data (69%) as major benefits.”

The Little Woman can only do about 500×300 well on her thin client and some screensavers clog the network…

For everything else, there’s just no reason not to use thin clients. A few applications won’t run on thin clients, but that’s usually a licensing issue rather than thin clients not being a better way to do the job. One can have a server or a cluster of servers run much bigger and badder jobs than the typical PC.

See Battle for the Desktop revealed in latest Housing Association Research.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

74 Responses to Triumph Of Thin Clients

  1. oiaohm says:

    DrLoser, you retracted Robert’s name, yes. But have you formally apologized to Robert for using his name without permission? I will call you a criminal until you do. It is fraudulent representation to use someone’s name to support your point of view without permission.

    DrLoser, you have committed a crime. I am willing to accept that you have seen the error of your ways and will never do it again, but that requires you to formally apologize to Robert for using his name. Don’t expect the cite, because I will not reward criminal action.

    I do not use the word criminal in the heat of the moment. Idiot/moron… those can be heat-of-the-moment. I use criminal when you have done something very wrong.

    It is, however, a painless opportunity for you — just for once — to admit that you were wrong.
    Sorry, DrLoser, this is not a painless opportunity; in fact, asking me not to call you a criminal shows you have not learnt your lesson.

  2. DrLoser says:

    Not only are you an idiot, you are a criminal.

    I am going to ask you, politely, to retract the second half of that accusation, oiaohm.

    I’m entirely prepared to accept that it was made in the heat of the moment. I’m entirely prepared to accept that you didn’t really mean it.

    It is, however, a painless opportunity for you — just for once — to admit that you were wrong.

    (And otherwise, I reserve the right to make as many ad hominem attacks on you as I wish. I can’t see why Robert would object. If he’s prepared to let a libellous accusation such as this slip by without comment, then he’d look like a hypocrite.)

    (And Robert may be many things, but being a hypocrite is not one of them.)

  3. oiaohm says:

    Isn’t that sweet, oiaohm? I wonder why they didn’t do that, say, five years ago.

    Oh, that’s right. Because they’re Marketroids. They wouldn’t recognise a security vulnerability if it slapped them in the face, would they?

    No, Marketroids recognize a security vulnerability and attempt to find the fastest way to sweep it under the carpet, hoping no one finds out about it, including paying people money not to publish the fact that the bug exists, for years. Microsoft Marketroids got caught recently, having paid for 5 years so that the ADshock bug never got publicly published.

    DrLoser, remember that 5 years ago the OpenSSL Foundation was in charge.

    Now, as to that massive staff increase of, ooh, two people.
    No, wrong. Before Heartbleed, the developers who could write code were 1 full-time and 3 part-time; the other 7 could only audit. Currently it’s 2 full-time and 10 part-time, who can now all code and submit fixes. This is double what there was before. The other 2 have to complete their contracts before they can join. The Linux Foundation is only bringing experienced personnel in.

    Marketroids is the term here. Let me lay this out for you. For the past 10 years (heck, that is short: from 1998 to now) there has been OpenSSL issue after OpenSSL issue. The PR departments, i.e. the Marketroids, have been security-washing: we are secure, we run Linux, we use Open Source… Even though OpenSSL was falling apart because the people around the project were not being allowed by their companies to submit code, this did not make it to the all-important CEO or legal department. The Linux Foundation had attempted to start 3 projects to increase Linux security, but if no one wants to believe there is an issue, no money comes.

    Marketroids see security flaws as something that will drive away customers, so they don’t want to be truthful about them. In fact Marketroids will attempt to focus on flaws in their competitors to gain ground. We as consumers really should not care whether X or Y has so many flaws.
    What should we care about?
    1) whether X or Y’s construction is properly resourced;
    2) whether X or Y is attempting to be mathematically secure;
    3) whether X or Y is willing to allow a proper third party to publish the current audited state of their product.

    Another project related to security that has been only modestly successful is
    http://www.linuxfoundation.org/programs/legal/compliance/tools
    so that you know what Open Source your operation is depending on, and therefore which ones you should allow your developers to submit fixes to, for your own good.

    Linux Foundation members raised the security issues of OpenSSL year after year at many conferences. It got zero media coverage. Since the TMR guys never watched any of those videos, they could not tell people to watch time X in video Y to see the problem. So your incompetence in not knowing the party you were attacking resulted in you being ineffective, DrLoser.

    http://www.linuxfoundation.org/news-media/announcements/2010/08/palamida-joins-linux-foundation
    What was the Linux Foundation attempting to do 5 years ago? Get companies to understand the problem and allocate internal staff to it. Companies need to know what open source projects they depend on. There is no point allocating staff to projects your company does not use.

    DrLoser, it’s not as if the Linux Foundation was doing nothing 5 years ago. The plan the Linux Foundation thought would be successful in dealing with the problem was just nowhere near as successful as required. The new plan, where the Linux Foundation employs more coders/developers directly, seems to be required so that the coders/developers can be given strict instructions that it’s security first.

    2 big problems came out of the 2010 Linux Foundation plan.
    1) Companies sent auditors to code bases without coders, expecting other parties to provide the coders to fix their problems.
    2) Internal forking from hell; this includes Google. Yes, when the bugs their auditors found were not fixed upstream, they fixed them internally without ever submitting the code upstream.

    Basically the 2010 plan turned left, straight into hell. If the 2010 plan had worked perfectly we would most likely not have had a Heartbleed. OpenSSL would not have been under-resourced if the 2010 plan had worked.

    There were other attempts by the Linux Foundation back in 2005 and 2000. So every 5 years the Linux Foundation has been attempting to do something about the security problems.

    In the year 2000 the Linux Foundation was promoting automated build farms and working on giving projects access to them. OpenSSL does not have an automated build farm.
    http://en.wikipedia.org/wiki/Compile_farm
    Debian, Ubuntu and openSUSE provide them for free, and all of them allow running your application’s test suite to detect bug introduction.

    There is an old saying: you can lead a horse to water but you cannot make it drink.

    The Linux Foundation has set up companies to provide so much for OpenSSL and other open source projects it’s not funny.

    DrLoser
    I hereby retract Robert’s name from the statement, you cretinous little berk.*
    Too late, DrLoser; you have lost the right to have me find that cite for you. Olderman did the same stupid thing. Using someone else’s name is something I don’t tolerate.

  4. DrLoser says:

    [Mine] Neither Robert nor I believe you, oiaohm.

    [oiaohm] If you use someone else’s name to ask for a cite, never expect me to give it to you, DrLoser.

    I hereby retract Robert’s name from the statement, you cretinous little berk.*

    I don’t believe you. Provide a cite.

    You are a fraud, using someone else’s name to get your way.

    This isn’t some inane silent movie from the 1920s, oiaohm. I merely amplified my disbelief by positing a perfectly reasonable additional source of disbelief.

    I fully regret that you can’t even respond to the unadorned original disbelief. But I would suggest that the obvious reason for this is that you are a cretinous little berk.*

    Under no condition should I reward this.

    You’re in the business of giving gold stars out when I’m a good little boy? Pah. And this is where I am going to ask Robert’s indulgence for what would otherwise (without the following) appear to be a purely ad hominem attack. Rather bizarrely, although it makes sense if you read oiaohm’s final comment below, I am actually going to put my asterisked footnote right here:

    * Yes, Robert, I do sincerely believe that oiaohm is a cretinous little berk. In fact, that’s about the best I can say about him. Here, with no further comment, is his analysis of me:

    Not only are you an idiot, you are a criminal.

  5. DrLoser says:

    By the way, the Linux Foundation now has the money to pay to add 2 full-time staff to OpenSSL.

    Isn’t that sweet, oiaohm? I wonder why they didn’t do that, say, five years ago.

    Oh, that’s right. Because they’re Marketroids. They wouldn’t recognise a security vulnerability if it slapped them in the face, would they?

    Given which, you would obviously feel right at home with them, wouldn’t you?

    Now, as to that massive staff increase of, ooh, two people.

    Any ideas as to their credentials? Or backgrounds? Or, indeed, identities?

    I am soooooooo impressed by this timely and well-measured response.

  6. oiaohm says:

    http://www.theaustralian.com.au/technology/extra-staff-for-openssl-group-after-heartbleed-drama/story-e6frgakx-1227036441778
    By the way, the Linux Foundation now has the money to pay to add 2 full-time staff to OpenSSL.

    The Linux Foundation has supported the part-timers into jobs where they can keep on working on OpenSSL.

    It’s also interesting that the OpenSSL Software Foundation was able to find money for 1 more developer, when in every prior year they had not been able to find the money. The project’s income has not increased.

    OpenSSL is changed forevermore. The short-term pain has brought OpenSSL the resources it requires.

  7. oiaohm says:

    DrLoser, sorry, never heard of xrdp?
    https://community.hpcloud.com/article/using-windows-rdp-access-your-ubuntu-instance
    RDP is just as much a Linux-used protocol as a Windows one.

    Your information on NX is incorrect.
    1) NX? Tick. Yes, it is.
    2) RDP? Tick. Yes it is.

    Consequently, there is no difference whatsoever between either one and a simple X-Client to X-Server connection.

    The NX client does not require an X11 server these days.

    The X11 server running on the server, started by the NX server or xrdp, doesn’t have root privileges or network mode enabled. This in fact nukes a huge stack of security issues.

    DrLoser
    It would be nice, for example, if Dr Steven Henson or Ben Laurie … or that other guy who used to maintain OpenSSL in his spare time … actually got some support, indeed funding, from the Linux Foundation.
    Why does Ben Laurie have a job at Google? None other than the Linux Foundation searching through key open source project staff and lining them up with companies to pay them forevermore.

    Steve Marquess, who set up the OpenSSL foundation to allow coders in OpenSSL to sell consultancy work, was not only Department of Defense.

    http://www.buzzfeed.com/chrisstokelwalker/the-internet-is-being-protected-by-two-guys-named-st
    DrLoser, read that interview. This is what the Linux Foundation was attempting to work with. The OpenSSL Foundation’s operations are broken past repair. Notice that in the past 10 years not once have any of the OpenSSL developers met each other.

    There have been report after report on OpenSSL. To pass USA government requirements you have to pass a Coverity scan. OpenSSL did not pass the Coverity scan in 2006 and still does not today.

    Note the tons-of-advisers problem. Yes, there would be suitable staff for OpenSSL if companies would allow their developers to submit code.

    Neither Robert nor I believe you, oiaohm.
    If you use someone else’s name to ask for a cite, never expect me to give it to you, DrLoser. You are a fraud, using someone else’s name to get your way. Under no condition should I reward this. Not only are you an idiot, you are a criminal.

  8. DrLoser says:

    Both the Shellshock and Heartbleed names were dreamed up by the Linux Foundation PR department.

    Neither Robert nor I believe you, oiaohm.

    A cite, please.

  9. DrLoser says:

    Sigh. This means Zemlin is the “Steve Ballmer” of */Linux, just another salesman running the show.

    That is an extremely honest and admirable statement, Robert. I wish more Linux advocates could be that honest.

    (And incidentally, most of oiaohm’s claims were utterly bogus. Not really worth pointing out. Unlike you, Robert, oiaohm is a stranger to honesty and rational conclusions.)

    It would be nice, for example, if Dr Steven Henson or Ben Laurie … or that other guy who used to maintain OpenSSL in his spare time … actually got some support, indeed funding, from the Linux Foundation.

    But, sadly, that’s not quite how the Linux Foundation works. It’s all about marketing: nothing to do with technical chops.

  10. DrLoser says:

    DrLoser, how do I connect a thin client to a Linux server? Simple: NX and RDP. If a thin client does not support one it normally supports the other. Not one of the 13 X11 bugs affects either of those.

    As it happens, I had to look up NX. Not, I would suggest, the World’s Most Popular Thin Client Protocol.

    Interestingly, however, NX appears to be a Windows client for X on the server, whereas RDP (alongside its typical use to connect a Windows Client to a Windows Server) can be a Linux client to a Windows Server. I would assume, via Citrix, although there are other possibilities.

    So, what you’re saying here, oiaohm, is that as usual you have cobbled up a mess of partial solutions. Which is absolutely fine and wonderful. Your entire IT existence appears to be nothing more than a cobbled-up mess of partial solutions. Works for you!

    Unfortunately, it doesn’t work for anybody else.

    Having got that out of the way, let’s examine NX and RDP. Are they client-server protocols over an Ethernet network?

    1) NX? Tick. Yes, it is.
    2) RDP? Tick. Yes it is.

    Consequently, there is no difference whatsoever between either one and a simple X-Client to X-Server connection.

    Consequently, I am afraid, all thirteen X-Server CVEs apply equally well in your miserably hacked-together scenario.

  11. oiaohm wrote, “Jim Zemlin’s directive is to assist the Linux world by any means necessary. This includes creating intentional PR messes for projects when it is deemed necessary.”

    Sigh. This means Zemlin is the “Steve Ballmer” of */Linux, just another salesman running the show.

  12. oiaohm says:

    DrLoser, how do I connect a thin client to a Linux server? Simple: NX and RDP. If a thin client does not support one it normally supports the other. Not one of the 13 X11 bugs affects either of those.

    DrLoser, the 13 X11 bugs: you stated them yourself. They were released in 1 day by a member of the X11 board. A year’s worth of bugs released in 1 day. This was nothing more than a simple PR trap. Getting drivers for Wayland and getting major CAD programs ported off X11 requires making X11 seem as untrustworthy as it is. The X11 board went to the extreme of digging out the authors of the very old flaws to make public comment.

    Jim Zemlin is exactly the right person. I guess you are going to keep on proving you are an idiot until I bring in the Linux Conference Australia video where the plan is talked about years in advance of doing it. The Heartbleed name, as well as a name being something-shock, is mentioned in that video as good names for PR grabbing. Even more interesting, the shock had to be for an application that would be installed on every system. Bash just happened to turn up first.

    Most security flaws never get named, no matter how bad they are. Clients using an OS really don’t care whether a security flaw has a name or not. Bugs are normally tracked by CVE numbers and descriptions, not names, when it comes to security flaws.

    The Linux Foundation is not the one who has historically repaired bugs in other programs. They had no funding to do so for anything other than the Linux kernel.

    Users of Linux have asked for someone to take up the task of dealing with security issues. The Linux Foundation hears this, then has to form a plan to make it true.

    The X11 foundation needs to get developers to update their programs and drivers. Media is the FOSS world’s big stick. Media gets bent to suit FOSS ends.

    Let’s edit your script to show how it really works instead of made-up crap.
    Linux Victim: “My server has just been compromised! What do I do?”
    Linux Foundation: investigate and take out the required CVE numbers.
    At no point does the Linux Foundation ask the victim to name it.

    Linux foundation seeing repeated errors from a project decides to name a bug to bring media attention to the project. The results of bring media attention to a Project is increased developer involvement from companies using the Project.

    Yes, that should work. Cold, hard facts like that are proven ways to gain customer trust.
    Customer trust in open source comes from bug numbers remaining low. Letting projects be neglected equals bug numbers increasing. Short-term pain, long-term gain. Every named open source bug has created the same effect: the increased developer activity caused in a project by a named bug reduced the bug count in the project in a permanent way.

    DrLoser, I will grant you that the plan to create the Core Infrastructure Initiative got to its last option. The first option was politely asking for resources and hoping everyone could see the logic of it.

    Jim Zemlin’s directive is to assist the Linux world by any means necessary. This includes creating intentional PR messes for projects when it is deemed necessary.

    You TMR guys don’t want to believe you are doing exactly what the Linux PR guys want you to do.

  13. DrLoser says:

    You would say this was short term pain for long term gain.

    Since I don’t believe that either half of this hand-wavy proposition contains a scintilla of truth …

    … No, oiaohm. No. That is not what I would say.

  14. DrLoser says:

    It’s only stubborn people, who decide to keep on using X11 over a network instead of any of the newer protocols, who are at risk of any of the 13 bugs affecting them.

    Oh, those naughty stubborn people.

    How do you connect your thin client to your Linux server, oiaohm?

    Dixie cups and twine, perhaps? Acoustic coupling was all the rage back in the days of the PDP-8.

  15. DrLoser says:

    3) Both the Shellshock and Heartbleed names were dreamed up by the Linux Foundation PR department.

    That’s a “cold hard fact,” oiaohm? Does it have a “cold hard cite,” or is it just something Jim Zemlin phoned up and told you?

    “Cold hard facts” #1 and #2 are … distinctly uninspiring.

    Linux Victim: “My server has just been compromised! What do I do?”
    Linux Foundation: “Do you have a name for that compromise?”
    Linux Victim: “A name? I need a name? I’ll just call it Ebenezer Dread, then. Do I have to baptise the bloody thing?
    Linux Foundation: (Reasonably) “Well, we can hardly be expected to do anything much if it doesn’t have a name, now, can we? How would we log it into … what’s that reporting doo-dah we use?” (Breaks off and talks to somebody with actual technical knowledge) “Oh yes, Bugzilla. Process is important in these matters, you know.”
    Linux Victim: “And how long is it going to take before this thing has a name?”
    Linux Foundation: “There’s a lot of competing security holes, you know. PHP scripts, CGI, OpenSSL … this could take, say, ten years, give or take a month or so.”

    Yes, that should work. Cold, hard facts like that are proven ways to gain customer trust.

    Unfortunately, oiaohm, after your first three “cold hard facts,” I was laughing so hard that I completely forgot everything else you said.

    But I’m sure it was equally hilarious.

  16. oiaohm wrote, “It’s only stubborn people, who decide to keep on using X11 over a network instead of any of the newer protocols, who are at risk of any of the 13 bugs affecting them.”

    Yes, and we stubborn people have many other layers of protection if we need them like paranoid firewalls between the lab and the rest of the building and proxy-filtering etc. SSH is pretty simple to use and provides a whole lot of security in one shot. I only use naked X11 on really old thin clients who can’t do the SSH stuff and chew gum at the same time fast enough. One benefit I see coming out of this move to Wayland is that security will be better. I don’t mind security as long as I can do the networking. Getting rid of X11 without adding a networking driver to Wayland would have been a crime.
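
    The tunnelling described above can be sketched as a client-side SSH configuration fragment (the host name and address are hypothetical):

```text
# ~/.ssh/config : enable X11 forwarding for a hypothetical terminal server
Host lab-server
    HostName 192.168.0.10
    ForwardX11 yes       # same effect as passing -X on the command line
    Compression yes      # helps on slow thin-client links
```

    With that in place, `ssh lab-server xterm` runs the remote xterm through the encrypted tunnel, and the X server itself never has to listen on the network.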

  17. oiaohm wrote, “Objective of the massive PR stunts of Shellshock and Heartbleed has been 100 percent effective in the funding of the Core Infrastructure Initiative at the Linux Foundation.”

    It’s good to see the Linux Foundation fighting evil any way it can. Thanks. Made my day.

  18. oiaohm says:

    DrLoser, I am about to tell you some really cold, hard facts about Heartbleed and ShellShock.

    1) Heartbleed-class bugs had happened in OpenSSL for over 10 years. None was given a unique name until Heartbleed.
    2) Shellshock might seem out there, but PHP and CGI bugs of the same class had happened over the past 10 years with no unique name given.
    3) Both the Shellshock and Heartbleed names were dreamed up by the Linux Foundation PR department.
    4) The Core Infrastructure Initiative was planned before the Shellshock or Heartbleed bugs happened. It was documented in a Linux Australia Conference video, including what it would be called and possible ways to create it, including enhancing press coverage of particular exploits to make companies take funding it seriously, to prevent future press messes.
    5) The objective of the massive PR stunts of Shellshock and Heartbleed has been 100 percent effective in the funding of the Core Infrastructure Initiative at the Linux Foundation.

    DrLoser, thank you for assisting in the establishment of the Core Infrastructure Initiative.

    Still, all publicity is good publicity, I suppose.
    Bad publicity to achieve an objective is only bad publicity if the objective is not achieved.

    You would say this was short term pain for long term gain.

  19. oiaohm says:

    DrLoser, of course you did not read what Alan Coopersmith had been using. The bug reports tell the story: cppcheck. Everything cppcheck has found, x.org has fixed.

    https://bugs.freedesktop.org/show_bug.cgi?id=50281

    Lots of other bugs in x.org have been fixed besides the 13 CVEs.

    Now, normally, I would patiently explain to you, Robert, what the “-nolisten TCP” config item might mean.
    Is it not simple enough? The networking side of the X11 server is disabled: the X11 server never opens the TCP socket (port 6000 for display :0). Also it’s not a config item, it’s a command-line option.
    http://ubuntuforums.org/showthread.php?t=1642286
    Yes, in 2010 and before, x.org server networking was disabled by default. X11 does not require networking; you can even disable loopback.
    http://dri.freedesktop.org/wiki/SharedMemoryTransport/
    Remember, opening a socket on Linux or Unix can mean a local file. Opening a socket does not automatically equal remote security risk. The big problem with the X11 protocol is that the TCP part of X11 was in fact an add-on. At first X11 used only a local file socket.

    In the default install state of Linux distributions, not one of those 13 bugs in X11 provided a remote attacker an exploit option. Why? Because the default install has X11 networking off. It’s only stubborn people, who decide to keep on using X11 over a network instead of any of the newer protocols, who are at risk of any of the 13 bugs affecting them.
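
    That point about local file sockets can be shown with a short Python sketch (the socket path here is made up; the real X server uses something like /tmp/.X11-unix/X0): a Unix-domain socket is just a filesystem entry, and traffic over it never touches the network.

```python
import os
import socket
import tempfile

# A Unix-domain socket lives at a filesystem path, not an IP/port pair.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)            # creates a socket *file* at `path`
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)         # "opening a socket" with no networking involved
conn, _ = server.accept()

client.sendall(b"hello")
print(conn.recv(5))          # prints b'hello'

conn.close(); client.close(); server.close()
os.unlink(path)
```

    The TCP transport of X11 (the part that “-nolisten TCP” turns off) is a separate transport bolted on top of the same protocol.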

    DrLoser
    Now tell me how many of them (or their libraries) can be trusted to sanitize input over the Net (even over a LAN), which is a minimum requirement.
    An interesting point: X11 applications do not talk the X11 protocol directly; instead they use Xcb or Xlib. In 2013, Xlib and Xcb were checked to make sure they sanitize input from the network.

    The majority of suid binaries on Linux don’t use networking at all. Checking for permission 2000 (sgid) or’d together with 4000 (suid) is wrong. sgid without suid is normally harmless; if a binary exists that is both 2000 and 4000, you have a problem for sure.

    Why is setting 2000 harmless on a Linux system? If your user does not have permission to write as group X, sgid on a running application under Linux does not change this. So if the user running the program cannot write to that gid, the application with sgid on it cannot write to that gid either; in fact the program fails to run. If Linux were a properly conforming POSIX system this would be another matter. Having 2000 set on Linux is a security enhancement.

    When writing cross-platform applications, permission 2000 can have a different meaning, so it should be avoided where possible.

    There are scripts out there to audit Linux that are vastly better at this than you, DrLoser.

    Just to be interesting, you also need to check file capabilities on Linux, because the file system under Linux can order suid applications to drop permissions.
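
    The suid/sgid distinction above can be sketched with find’s -perm tests; this builds a throwaway directory rather than scanning a real system (the directory and file names are made up):

```shell
# Hypothetical demo tree; a real audit would scan / instead.
demo=$(mktemp -d)
touch "$demo/suid_prog" "$demo/sgid_prog"
chmod 4755 "$demo/suid_prog"   # suid bit only (04000)
chmod 2755 "$demo/sgid_prog"   # sgid bit only (02000)

# -perm -4000 matches files with the suid bit set.
suid_hits=$(find "$demo" -perm -4000 -type f)
# -perm -6000 matches files with BOTH suid and sgid set, the risky combination.
both_hits=$(find "$demo" -perm -6000 -type f)

echo "suid: $suid_hits"
echo "suid+sgid: ${both_hits:-none}"
```

    On a real system you would also run something like `getcap -r /` to catch binaries that carry file capabilities instead of a suid bit.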

    XF86Config is XFree86’s; xorg.conf is x.org’s. The X11 troll mentions XConf, which has never been an X11 configuration file.

    I’d assert that FLOSS has a better chance if it’s ignored, or at least never used by a significant proportion of the population.
    1) Never going to happen, since over 50 percent of the population already uses FOSS-based stuff. 2) Windows is worse from a security point of view, yet this does not get the media coverage it should.

    Because, and you may have noticed this, oiaohm, FLOSS has been getting a pretty damningly bad press recently.
    True, but the press are a pack of idiots when it comes to security. They are a pack of idiots because being an idiot sells more papers. Journals about security are dry and boring.

    Heartbleed and ShellShock are smaller than the number of faults that have been turning up in Windows in the same time frame. Both got press mostly because they got above 10 possible CVEs each.

    DrLoser, if you hate the X11 server you should also hate Internet Explorer and SharePoint. Remember, the default installs of both Internet Explorer and SharePoint either provide or access remote access that could be untrustworthy. X11 server default installs are local access only. By the way, Internet Explorer and SharePoint both obey environment variables to alter behavior, so it’s not just a FOSS-world bad habit.

    DrLoser, see the problem yet? The press makes a song and dance about X11, which is used by less than 2 percent of the user base, and due to default configuration less than 1 percent of that 2 percent will be exploitable. SharePoint and Internet Explorer, which have major problems and are remotely exploitable in their default configuration, get zero press coverage.

    The press is the last thing you should depend on to attack Linux over security issues. The press on security issues is the world’s biggest troll: completely biased and mostly incorrect.

  20. DrLoser says:

    I doubt you will “‘fess up,” however. I’ll just take silence as an implicit agreement that I have a point. Which isn’t very interesting, from the point of view of discussing this stuff.

    So, is there any other thing we haven’t mentioned about the revolting security deficiencies of X in the modern world? See http://theinvisiblethings.blogspot.co.uk/2011/04/linux-security-circus-on-gui-isolation.html

    Why, surprisingly, yes there is.

    How does session isolation grab ya?

  21. DrLoser says:

    And, likewise.

    I presume that “retreating to your final position,” you would admit, Robert, that the level of security imposed on a communication between an X-Client and an X-Server is of absolutely no consequence whatsoever, when you consider the rotten shabby code on one side (CVEs as quoted) and the various attack vectors on the other?

    You’d look pretty silly if you didn’t. Go on, ‘fess up.

  22. DrLoser says:

    DrLoser, retreating to his final position, wrote, “I cannot confidently assert the same thing of any other FLOSS software.”

    It was a de minimis observation, Robert, nothing more.

    Should you care to nominate any other piece of FLOSS whatsoever, I’ll give it a go. What the hey, even running cppcheck over the stuff will probably turn up interesting finds. Of course, “finding” and “fixing” are two entirely separate things.

    But you don’t even have to do that with X. The whole thing is hopelessly riddled with grot.

  23. DrLoser, retreating to his final position, wrote, “I cannot confidently assert the same thing of any other FLOSS software.”

    Hey! Welcome aboard! When they replace X with Wayland, you’ll be a fan of GNU/Linux just as we are. Version 1.7 of Wayland is in the pipe. GNOME and KDE should soon be ported completely to Wayland. XFCE is working on GTK+3, so I may start playing with Wayland in the New Year. By the time the dust settles next year, perhaps all this stuff will come together to work as well as X, ALSA and the rest used to work over the last decade. If the new software is as maintainable as developers claim, 2015 should see a NEW GNU/Linux thriving on planet Earth. Until then, I’ll stick with what I have.

  24. DrLoser says:

    The default setting on most GNU/Linux systems is -nolisten TCP which takes care of most cases.

    Now, normally, I would patiently explain to you, Robert, what the “-nolisten TCP” config item might mean.

    But I’m losing patience with apologists for X. I’ll await your elegant summary of how this particular bit of config sweeps away all the problems. And I’ll ignore the fact that, if you browse the Web, there’s a lot of people out there who are demanding to know how to turn it off.

    But, no matter. I’m absolutely fascinated to know why it matters which one of the X-Server or X-Client actually issues the socket command.

    Particularly since the comms layer, with or without secure transfer of some sort, has absolutely no bearing on the CVEs in question.

    And absolutely no bearing on the attack vector on the network-facing “X-Client” side.

  25. DrLoser says:

    The Xserver has no defences and should not be used on a LAN where security matters. That’s why folks usually tunnel it through SSH, which certainly meets your minimal requirements.

    As a matter of fact, it doesn’t, Robert.

    You can put any amount of secure comms between the X-Client (server) and the X-Server (client) you like. All that does is (in the best, and hopefully the standard, case) ensure that whatever is sent by the X-Client is securely received by the X-Server, and vice-versa.

    Using the medium of SSH clearly has no relevance to security holes within the X-Server (as documented by the 13 CVEs, inter alia) and it clearly has no relevance to security holes on the network-facing side.

    Which is to say, the X-Client. The things with suids.

    Unless, of course, you know better.

  26. DrLoser, badmouthing X again, wrote, “the X-Server, has practically no defence against attacks vectored on the Server, whoops again, the X-Client.”

    The Xserver has no defences and should not be used on a LAN where security matters. That’s why folks usually tunnel it through SSH, which certainly meets your minimal requirements. X is comparable to broken windows, you know, the shatter attack… which was prevalent in the old days. M$ didn’t fix that until Vista. The default setting on most GNU/Linux systems is -nolisten TCP which takes care of most cases. On my ancient thin clients, SSH causes too much overhead so I don’t use it. On a modern thin client, SSH-tunneling is very practical and effective. With no local applications on the thin client and no access to X from the outside without proper authentication, X is quite secure that way.
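    For readers checking their own systems: on Debian-family distributions of this era, the flag is typically set in the X startup wrapper. This is an illustrative fragment only; the path and exact wording vary by distribution:

    ```sh
    # /etc/X11/xinit/xserverrc (Debian-style example; location varies by distro)
    # Start the X server with TCP listening disabled.
    exec /usr/bin/X -nolisten tcp "$@"
    ```

    With TCP listening off, a remote client can still reach the display through SSH’s X11 forwarding (ssh -X host), which handles authentication and encryption of the X traffic.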

  27. DrLoser says:

    FOSS has to be better than closed source to avoid bad press.

    I’d assert that FLOSS has a better chance if it’s ignored, or at least never used by a significant proportion of the population.

    Because, and you may have noticed this, oiaohm, FLOSS has been getting a pretty damningly bad press recently.

    Heartbleed? ShellShock?

    Still, all publicity is good publicity, I suppose.

  28. DrLoser says:

    Pop quiz:

    Does anyone here have a favourite line in Xconf? (I believe it’s now called xorg.conf. Nothing like consistency.)

    Outside perhaps ten or so lines of this drivel, can anybody confidently assert that they know what it does?

    I can’t imagine how pitiful dweebs like myself get by on Windows with a simple GUI that you never really need to touch.

    Oh, wait. I can.

  29. DrLoser says:

    A FOSS project having more than 10 CVEs in a single day is abnormal. A closed-source program having 10+ CVEs in a single day is very common.

    Nope, not so.

    Your evidence so far (and I’ll take it on trust. I spent too much time last time, following up links that you were too l33t to supply) refers purely to an entire platform. And let’s not even get into the “vulnerability level” (which in almost all of the X CVEs was 6+).

    This is just one single stack. Furthermore, it’s a stack that has been there for 30+ years. Furthermore, not a single one of these CVEs addressed possible methods of attack injections … because they can’t.

    That’s the beauty of X, you see. All of these CVEs address disgusting and inexplicably woeful bits of “design” on the X-Server side.

    Which is, of course, to say (in common parlance) the Client (thin or otherwise) side.

    And the Client, whoops, I have to stay in Bizarro-Talk and call it the X-Server, has practically no defence against attacks vectored on the Server, whoops again, the X-Client.

    The X-Server takes it on trust that the X-Client knows what it’s doing, security-wise.

    Which it doesn’t. Here’s an exercise for you.
    1) Find the list of possible X suids and sgids:
    find / -type f \( -perm -4000 -o -perm -2000 \)
    2) Check out the dependencies for each, e.g. by piping the list through:
    xargs readelf -d

    Now tell me how many of those suids correctly drop privileges, which is a minimum requirement.

    Now tell me how many of them (or their libraries) can be trusted to sanitize input over the Net (even over a LAN), which is a minimum requirement.
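    As a concrete reference point for the first requirement, here is a minimal, self-contained sketch (not taken from any X source; the helper name is mine) of what correctly dropping privileges looks like in a suid binary: supplementary groups first, then the gid, then the uid, then a check that root cannot be regained:

    ```cpp
    #include <sys/types.h>
    #include <unistd.h>
    #include <grp.h>
    #include <cstdio>

    // Canonical order for a suid binary dropping privileges:
    // supplementary groups, then gid, then uid, then verify.
    static bool drop_privileges() {
        gid_t rgid = getgid();
        uid_t ruid = getuid();
        // Supplementary groups must go first; only root may change them,
        // so skip the call when we are not actually privileged.
        if (geteuid() == 0 && setgroups(1, &rgid) != 0) return false;
        if (setgid(rgid) != 0) return false;  // gid before uid, or the
        if (setuid(ruid) != 0) return false;  // gid change would fail
        // Paranoia: confirm root privileges cannot be regained.
        if (setuid(0) == 0 && ruid != 0) return false;
        return true;
    }

    int main() {
        if (!drop_privileges()) {
            std::fprintf(stderr, "drop failed\n");
            return 1;
        }
        std::puts("privileges dropped");
        return 0;
    }
    ```

    The ordering matters: calling setuid() before setgid() leaves the process unable to change its gid, which is exactly the kind of subtle suid bug the exercise above is meant to surface.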

    Frankly, any stack that allows you to completely alter behaviour simply by setting the DISPLAY environment variable (or any other) — a minor exploit that, sadly, ShellShock made quite trivial — stinks to high heaven.

    I first came across that (and other) standard X behaviour in 1996, I think.

    Callow youth that I was, I thought to myself, “That’s insane! No doubt somebody will fix it eventually.”

    Eventually, when fixes to X are involved, is a seriously long time, isn’t it?

  30. DrLoser says:

    On this we can agree. C = crap or is it C == crap? I can never remember.

    In this case, it’s C++, Robert. Admittedly, C++ written as though it were C.

    And the reason we can agree is precisely because of your Pascal background. High-level arrays just aren’t supposed to allow this, are they? Pascal pretty much set the standard here.

    Which, incidentally, has been followed by C++ and every language after it. C++, for example, offers std::vector, which neatly encapsulates what these buffoons are trying to do. One might argue that it over-exercises the heap as opposed to the stack, but that’s not really much of an argument: you could easily extend std::vector to use a memory pool of its own, thus incurring a single malloc and nothing more.
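    To make that concrete, here is a minimal sketch, using invented data rather than anything from Qt or X, of the std::vector approach applied to the same split-a-NUL-separated-buffer pattern:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Collect NUL-separated strings from an untrusted buffer into a
    // std::vector instead of a fixed char*[5]: extra items grow the
    // container instead of overrunning a stack array.
    std::vector<std::string> split_names(const char *data, size_t nitems) {
        std::vector<std::string> names;
        const char *p = data, *end = data + nitems;
        while (p < end) {
            size_t len = 0;
            while (p + len < end && p[len] != '\0')  // bounded scan: never read past end
                ++len;
            names.emplace_back(p, len);
            p += len + 1;  // skip the terminator (may step past end, ending the loop)
        }
        return names;
    }

    int main() {
        // Seven entries: enough to overflow the original names[5].
        const char buf[] = "a\0bb\0ccc\0d\0e\0f\0g";
        std::vector<std::string> names = split_names(buf, sizeof(buf) - 1);
        assert(names.size() == 7);
        assert(names[2] == "ccc");
        std::printf("%zu names\n", names.size());
        return 0;
    }
    ```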

    Now, back when this particular piece of garbage was written (and it was garbage even then. I mean, a) who the hell cares so much about stack memory that they only allocate five slots and b) who the hell doesn’t stop when they get to the number of slots?), there was some sort of justification.

    But code auditing doesn’t just mean spotting immediate flaws. It also involves recognising, and fixing, technical debt.

    Technical debt such as this.

    In twenty years, X has not done this. (Evidence? XFree86 and X.Org are both subject to pretty much all of those CVEs. Is there a difference between them? Technically, none that spring to mind.) It was pretty much worthless when it was created, and nobody has spent any serious time on it since.

    You want me to find more horrific bugs/security gaps of the same sort? I can confidently assert that I can do so.

    I cannot confidently assert the same thing of any other FLOSS software.

  31. oiaohm says:

    The big thing that DrLoser and most MS-supporting trolls don’t get is something horrible.

    A FOSS project having more than 10 CVEs in a single day is abnormal. A closed-source program having 10+ CVEs in a single day is very common.
    Abnormal for a closed-source application is 20+ CVE numbers in a single day; between 10 and 20 CVEs in a single day is routine for closed-source applications.

    Of course the FOSS world will put out big news about X.Org having 13 CVE numbers, because that is a man-bites-dog event: it is abnormal. When Windows, MS Office, Internet Explorer, Exchange or Sharepoint release more than 13 CVE numbers in a single day, there is no news coverage, because that is a dog-bites-man event: common, and to be expected.

    DrLoser, really, why make a song and dance about a CVE count that is common? To sell Microsoft products I have to ignore the fact that Microsoft products run far too high in CVE numbers. You have to adopt a policy of don’t ask, don’t tell.

    FOSS has to be better than closed source to avoid bad press.

  32. oiaohm says:

    DrLoser, Internet Explorer’s record for a single day is a mess. 13 is really nothing compared to the nightmares that have come out of Microsoft. In fact IE holds the record for the most CVEs in a single day for a single application.
    http://www.cvedetails.com/vulnerability-list/vendor_id-26/product_id-9900/year-2014/Microsoft-Internet-Explorer.html
    2014-11-11: 18 CVEs in 1 day.
    2014-07-08: 21 CVEs in 1 day.
    2014-06-11: 54 CVEs in 1 day.
    Part of Internet Explorer’s job is dealing with untrusted sources.

    Those are just the spikes for this year.

    Please note the difference here: three major spikes in Internet Explorer’s CVE numbers, and zero media/troll coverage.

    DrLoser, did you not get the clue? Internet Explorer exceeded 50 CVEs in a single day.
    X11, by contrast, may only be interfacing with known applications.

    Also note how often MITRE code 119 appears. CWE-119 is the same class of bug you reference in the X11 code, and it is all over Microsoft-made applications.

    2014-11-11 was fairly busy for Windows 7 too, with 7 CVEs. 13 is not that far out there.

    Nevertheless, oiaohm, I beg to question your implication that Microsoft Windows has ever, even anything near, accumulated 13 consecutive CVEs in a single day.

    That is a complete lie told by someone who does not bother doing research.
    2011-07-13: Windows 7 had 18 CVEs in a single day.
    2013-09-11: Windows 7 again had 18 CVEs in a single day.

    18 has been quite a common single-day value for Microsoft.

    Please note this is not a one-off event.

    Basically, DrLoser, Microsoft products have spat out 50+ CVEs in a single day, and at worst 100+ CVEs in a single working week (yes, 5 days) on a single product. That is what a Microsoft product having a bad run looks like. X.Org’s 13 issues are, truthfully, louder than we would like, but nowhere near as loud as Microsoft gets. Remember: 100+ CVEs in 5 days on a product that is expected to make remote connections to untrusted sources is what Internet Explorer has managed this year, and yet there is essentially zero news coverage. DrLoser, if you are willing to ignore something as bad as Internet Explorer, exactly why should I be panicked by just 13 bugs in the X.Org server? In reality, X.Org’s CVE count under audit is turning out much better than expected. We were expecting numbers in the hundreds from an audit; seeing the total come in under 50 looks quite OK.

    https://blogs.oracle.com/alanc/
    Alan Coopersmith’s employer is Oracle. His place on the X.Org board is as Oracle’s representative, and it was Oracle who commissioned him to audit. There is an X11 conference video from 2012 in which a different Oracle representative formally states that Oracle is going to assemble an audit team. Basically, it is a matter of public record that he is funded to do it.

    This is the many-eyes theory of open source at work: each group using open source can commission its own audit teams.

    http://www.cvedetails.com/vulnerability-list/vendor_id-8216/product_id-30511/year-2014/X-X-Window-System.html

    DrLoser, by the way, you love being under-researched: X11 proper peaked at only 11 CVEs in a single day.

    The X.Org server has 13 CVEs for the complete year. OK, they were all published on one day, but that is less than 2 CVEs a month, and not one of them was a response to an attacker. So that is 13 zero-days closed before exploitation.

    A more valuable guide to how good or bad an application is comes from looking at the yearly figures.

    DrLoser also fails to understand why you should bulk-release bugs: the attacker becomes a kid in a candy store, unable to decide which flaw to go after. Bulk release in fact increases the chances that the patches will be in place before an attacker acts.

    Let’s say, for argument’s sake, I was using XFree86 instead of the X.Org server. Of the 13 bugs that affected X.Org, only 5 affected XFree86. Even so, that is only 5 for the year.

  33. DrLoser wrote, “I beg to question your implication that Microsoft Windows has ever, even anything near, accumulated 13 consecutive CVEs in a single day.”

    That’s easy. M$’s OS regularly has 50K bugs at RTM and thousands end up as CVEs.

  34. DrLoser wrote, “X is full of this crap”.

    On this we can agree. C = crap or is it C == crap? I can never remember.

    Imagine how different the world would be if stuff like this were written in Pascal, which was designed to force better decisions. Then we and the compiler would all agree on what this stuff meant.

  35. DrLoser says:

    I assume, oiaohm, that you have prepared a scintillating defence for your ludicrous proposition quoted below?

    Just because X11 has 13 CVEs, you miss that it caused 3 CVEs in Windows and 2 CVEs in OS X.

    A single cite would do.

    And in the mean time, I will comfort myself with the following evidence of your pathetic tendency to fantasise on this and any other subject:

    Please note that code from the X11 server might be copied into newer Wayland solutions, or even into Windows or OS X applications. X11 is MIT-licensed, so you can embed its source code in closed-source applications.

    Well, it might.

    And then again, it might not.

    Possibly the managers involved would ask a security expert from the University of New England to make a judgement call on that one?

    They might.

    Or, then again, they might not.

  36. DrLoser says:

    Personally, I think that somewhere between 55 and 70 CVEs should be the trigger point.

    I hate to be too unreasonable. Even though I only mentioned thirteen in the first place.

  37. DrLoser says:

    If fewer than 50 CVEs are enough to tell people not to use Linux, DrLoser, you should be screaming from the rooftops for everyone to uninstall Windows for being a security disaster.

    Since you are a “Microsoft VAR,” oiaohm, I respect your opinion on this matter. After all, part of your everyday job is to explain to customers why they should buy a system that is a “security disaster.”

    I have no doubt that you are fantastically successful in that endeavour.

    Nevertheless, oiaohm, I beg to question your implication that Microsoft Windows has ever, even anything near, accumulated 13 consecutive CVEs in a single day.

    I’m not often impressed by the X system, but in this case I’m prepared to admit to being impressed … at the sheer chutzpah of any lunatic who insists on defending it. (Without cites, btw.)

  38. DrLoser says:

    Never let it be said that I don’t listen to oiaohm, however. I am prepared to concede that the individual he quoted (Alan Coopersmith) has spent the last 4000 man-hours of his working life since 2012 “auditing” X:

    Basically Alan Coopersmith was formally set up in 2012 to start formally auditing.

    Isn’t formally a nice word, oiaohm? It may or may not be true. It certainly implies some sort of contracted process, to which neither you, nor I, nor anybody else is privy.

    But that’s all right, because Dr Alan Coopersmith is actually employed by … now, I don’t wish to give the game away by an actual cite. As oiaohm has no doubt learned to his cost (since he no longer cites anything much), these things will bite you in the bum. No, I’ll invite you to guess.

    Debian? Ubuntu? Red Hat? The Linux Foundation? The EU Commission?

    Guess again, suckers, and you won’t like what you find. Also, he’s not the most independent auditor you could find.

    And while you’re mulling that over, and I know how much each and every FLOSS devotee loves Freedoms #2 and #4, I can help you here!

    One of the big problems with analysing X is that … well, there’s rather a lot of it. I’m going to show you a small portion with a very obvious buffer overrun issue. I’m going to claim that this is a pattern that pervades every bit of X. So here we go, from Qkeymapper_x11.cpp:

    void QKeyMapperPrivate::clearMappings()
    {
        uchar *data = 0;
        if (XGetWindowProperty(X11->display, RootWindow(X11->display, 0), ATOM(_XKB_RULES_NAMES), 0, 1024,
                               false, XA_STRING, &type, &format, &nitems, &bytesAfter, &data) == Success
            && type == XA_STRING && format == 8 && nitems > 2) {

            char *names[5] = { 0, 0, 0, 0, 0 };
            char *p = reinterpret_cast<char *>(data), *end = p + nitems;
            int i = 0;
            do {
                names[i++] = p;
                p += qstrlen(p) + 1;
            } while (p < end);
        }
    }

    There are so many things wrong with this snippet that I almost don’t want to start. And I’m going to ignore the BIG THING, which is the “ATOM” parameter. It is impossible to grok “X atoms” without actually reading through the entire ICCCM specification. About a million times or so.

    But, leaving that to one side: 0 and 1024. Magic numbers. Probably harmless.

    2 and 5. Magic numbers. Massively harmful. I leave this as a kindergarten exercise for the reader: what happens if nitems is greater than five? Not that even that precondition is at all clear, given the silly and pointless pointer arithmetic.

    And what happens if data came over the network in big-endian form when the host is little-endian? Or it was 32-bit and now you want 64-bit?

    Do these schmucks even realise that “reinterpret_cast” doesn’t even care?
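    By way of contrast, here is a minimal sketch (an invented example, not X code) of byte-order-aware, bounds-checked reading, the kind of defensive step the comment says is absent:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Read a 32-bit big-endian (network-order) value from an untrusted
    // buffer: check bounds first, then assemble the bytes explicitly
    // instead of trusting reinterpret_cast and the host's endianness.
    bool read_u32_be(const unsigned char *buf, size_t len, size_t off, uint32_t *out) {
        if (off > len || len - off < 4)  // bounds check, written to avoid overflow
            return false;
        *out = (uint32_t(buf[off]) << 24) | (uint32_t(buf[off + 1]) << 16)
             | (uint32_t(buf[off + 2]) << 8) | uint32_t(buf[off + 3]);
        return true;
    }

    int main() {
        const unsigned char wire[] = { 0x00, 0x00, 0x01, 0x00 };  // 256, big-endian
        uint32_t v = 0;
        assert(read_u32_be(wire, sizeof(wire), 0, &v) && v == 256);
        assert(!read_u32_be(wire, sizeof(wire), 2, &v));  // would read past the end
        std::printf("%u\n", v);
        return 0;
    }
    ```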

    X is full of this crap. And I haven’t even begun to discuss “Trust boundaries.” (I can if you wish.)

    You’d have to be a masochistic pervert to consider the thing remotely safe.

  39. Dr Loser says:

    “FOSS has not been audited properly for thirty years”…please cite your source.

    I will do so gladly, Dougie. It’s oiaohm.

    Note, if you will, that I disagreed with this absurd claim. I go on to point out:

    Do tell. As a moron, I am obviously wrong here. FOSS was a disgusting non-audited smelly failure from the start, according to you, oiaohm.

    (Except that it wasn’t.)

    Cogitation isn’t really a speciality of yours, is it, Dougie? I mean, you read the words, and the back-brain kicks in, and you feel you need to make a comment of some sort …

    … but that intermediate effort of “understanding what the other guy just said,” well, I understand it’s a lot of work. Most of us learned to do it as part of our High School Equivalency.

  40. Dr Loser says:

    Whilst waiting for my rather heavily-linked CVE post to reappear (yes, I can see the point of the moderation), we might as well watch oiaohm tripping over himself:

    This is the problem once you deal in cold hard unbiased facts instead of random guesswork: you are stuck.

    I’m stuck with thirteen cold hard unbiased facts, oiaohm. What are you stuck with?

    Let me guess: “random guess work.” But don’t take my word for it. Take yours:

    Cold hard unbiased facts of bug and defect rates say don’t use Windows.

    Oh, isn’t that sweet? oiaohm just redefined his own “random guesswork” as cold hard unbiased facts.

    Curiously enough, without a relevant link. How very unlike oiaohm.

  41. Dr Loser says:

    An interesting observation, if I may:

    At least for the first CVE, I’m pretty sure I could run up a 500-line scrub C program that just looks for mallocs without a check. Splat the entire source tree of X through that, and whoopee! No CVE.

    With a bit more effort, I could extend that scrub program to look for other malloc issues.

    The buffer/index issues, well, I don’t know, they’re probably harder to find.
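    A toy version of that scrub idea, offered as a hedged sketch (the heuristic and the helper name are mine, not anything from an actual audit tool), might flag any malloc() assignment whose result is not NULL-checked within the next few lines:

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <regex>
    #include <sstream>
    #include <string>
    #include <vector>

    // Flag assignments of malloc() whose result is never compared
    // against NULL within the next few lines. A crude line-based
    // heuristic, not a real static analyser.
    std::vector<int> unchecked_mallocs(const std::string &src) {
        std::vector<std::string> lines;
        std::istringstream in(src);
        for (std::string l; std::getline(in, l); ) lines.push_back(l);

        std::regex assign(R"((\w+)\s*=\s*\(?[\w\s\*]*\)?\s*malloc\s*\()");
        std::vector<int> hits;
        for (size_t i = 0; i < lines.size(); ++i) {
            std::smatch m;
            if (!std::regex_search(lines[i], m, assign)) continue;
            std::string var = m[1];
            // Look a few lines ahead for any NULL check of the variable.
            std::regex check("(!\\s*" + var + "\\b|" + var + "\\s*==\\s*NULL|"
                             + var + "\\s*!=\\s*NULL)");
            bool checked = false;
            for (size_t j = i; j < lines.size() && j < i + 4; ++j)
                if (std::regex_search(lines[j], check)) { checked = true; break; }
            if (!checked) hits.push_back(static_cast<int>(i) + 1);  // 1-based line
        }
        return hits;
    }

    int main() {
        std::string code =
            "char *a = malloc(n);\n"
            "memcpy(a, src, n);\n"       // 'a' is never NULL-checked
            "char *b = malloc(n);\n"
            "if (b == NULL) return;\n";  // 'b' is checked
        std::vector<int> hits = unchecked_mallocs(code);
        assert(hits.size() == 1 && hits[0] == 1);
        for (int h : hits) std::printf("line %d\n", h);
        return 0;
    }
    ```

    Anything this crude will produce false positives and negatives, which is rather the point: even a trivial pass would have surfaced the unchecked-malloc CVE class years earlier.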

    But whoever this bozo is that oiaohm has dug up … if he’s been at this “software audit” thing since 2012, he’s no bloody good at his job, is he?

  42. Dr Loser says:

    It’s a source of continual fascination that oiaohm’s otherwise gratuitous spattering of links completely shrivels in the light of day when they fail to back him up.

    You suffer from tunnel vision. Just because X11 has 13 CVEs, you miss that it caused 3 CVEs in Windows and 2 CVEs in OS X.

    Let’s be honest, apart from the gratuitous ad hominem, this assertion makes no sense at all. It’s difficult, if not impossible, to see how X11 could cause a CVE “in” Windows. I’m not really sure about OSX, either. So, to remedy oiaohm’s missing links, and culled from the unimpeachably FOSS source of Phoronix:

    1) CVE-2014-8091 : malloc attack. Affects: *nix.
    2) CVE-2014-8092 : Multiple integer overflows. Affects: *nix.
    3) CVE-2014-8093 : Integer overflows in GLX extension. Affects: *nix.
    4) CVE-2014-8094 : Integer overflow in DRI2 extension. Affects: *nix.
    5) CVE-2014-8095 : XInput extension, buffer overrun. Affects: *nix.
    6) CVE-2014-8096 : XCMisc extension, buffer overrun. Affects: *nix.
    7) CVE-2014-8097 : DBE extension, buffer overrun. Affects: *nix.
    8) CVE-2014-8098 : GLX extension, buffer overrun. Affects: *nix.
    9) CVE-2014-8099 : XVideo extension, buffer overrun. Affects: *nix.
    10) CVE-2014-8100 : Render extension, buffer overrun. Affects: *nix.
    11) CVE-2014-8101 : RandR extension, buffer overrun. Affects: *nix.
    12) CVE-2014-8102 : XFixes extension, buffer overrun. Affects: *nix.
    13) CVE-2014-8103 : DRI3 or Present extensions, buffer overrun. Affects: *nix.

    In passing, I’d claim it’s significant that there is apparently an X extension called “XFixes” which is … ahem … broken.

    Oh, and it doesn’t seem to matter whether you use Evil Proprietary X.Org X or Magic Fairy Dust FOSS XFree86 X. Both are equally at risk for almost all of these vulnerabilities.

    Not a smidgeon of either Windows or OSX there, my little “Microsoft VAR” friend. Maybe I’m missing something?

    Although, if I were, it might have been a tad friendlier if you’d actually bothered to cite the sources for your claim.

  43. Dr Loser says:

    “Certified Microsoft Troll”…so you lie to people over the phone, telling them their computer has a virus?

    No, I use irony and sarcasm, Dougie. There are higher forms of humour, granted, but unfortunately they would be completely wasted on you.

  44. oiaohm wrote, “Worst Internet explorer you know that html rendering part that thousands of applications use under windows took first place for the worst application of 2014 so far.”

    There are even rumours that M$ will give up and promote FLOSS browsers… Hell is freezing over.

    • Beginning January 12, 2016, only the most current version of Internet Explorer available for a supported operating system will receive technical support and security updates – this will rattle the chains of a lot of businesses that built in dependencies on M$’s non-standard browsers
    • M$ is considering rebranding Internet Exploder
    • A succession of Patch Tuesdays have covered one horrible vulnerability after another

    A rational human could easily form the idea that M$ would be far ahead by dropping the product entirely. In one blow, they could eliminate half the vulnerabilities in the system and avoid a lot of bad PR. They have plenty of “application lock-in” and don’t need “browser lock-in”. There’s also the possibility of starting over from scratch and building a proper browser on open standards and sound engineering, instead of what Mosaic became under the guidance of M$’s salesmen. I’m sure M$ could afford to do it right. The question is “Will they?”

  45. oiaohm says:

    DrLoser:

    All I need is common sense.

    Common sense? Really, DrLoser, you have not proven you have any of that. You suffer from tunnel vision. Just because X11 has 13 CVEs, you miss that it caused 3 CVEs in Windows and 2 CVEs in OS X.

    This is the problem once you deal in cold hard unbiased facts instead of random guesswork: you are stuck. Cold hard unbiased facts of bug and defect rates say don’t use Windows. Yes, they also say don’t use Linux. If it comes down to a choice, choose Linux.

    An interesting point: because FTP is a simpler protocol, even with TLS it is simpler to audit than HTTP. Basic common sense here: don’t throw the baby out with the bathwater. Just because FTP was historically defective does not mean current versions are.

    DrLoser, if you had common sense you would not run your mouth off about security issues that have just appeared without comparing them to a baseline.

    Sorry, DrLoser, you have absolutely no common sense on security topics. Many parties use FTPS. Put it this way: FTPS for just sending in a few files is far more secure than the HTTPS-wrapped SOAP some enterprises use. Remote exploits from publicly exposing Sharepoint do happen.

    Something about security: a server is only secure once it has been ground into dust and used as foundation filler. Up until then you have security risks to be mitigated. FTPS is a more secure option than a hell of a lot of the other creative setups businesses have used.

    http://www.cvedetails.com/product/11116/Microsoft-Sharepoint-Server.html?vendor_id=26
    A nice overview of Sharepoint.
    http://www.cvedetails.com/product/3436/Microsoft-IIS.html?vendor_id=26
    Then IIS.
    Remember, IIS is Microsoft’s FTP server. If users want to upload files, the most secure option on Windows, by bug count, is just to use the FTPS server option.

    http://www.cvedetails.com/top-50-products.php?year=2014

    Just to be nasty, since DrLoser wants to play the CVE game: please be aware that Microsoft had the most CVEs overall for 2014. Worse, Internet Explorer, the HTML-rendering engine that thousands of applications use under Windows, took first place as the worst application of 2014 so far.

    I am sorry to say that all the noise you are making about a few Linux CVEs is a completely disproportionate response.

    If fewer than 50 CVEs are enough to tell people not to use Linux, DrLoser, you should be screaming from the rooftops for everyone to uninstall Windows for being a security disaster.

  46. oiaohm says:

    Robert Pogson, the problem here is not the size of the project.
    http://www.zdnet.com/article/coverity-finds-open-source-software-quality-better-than-proprietary-code/
    This is something DrLoser does not want to talk about: the reality that closed source has more bugs than open source. It is a documented fact, shown by Coverity and by CVE numbers. By the way, every Coverity report for the last 8 years has said the same thing.

    No matter how big or small your open-source project is, you can use the Coverity scanner for free.
    https://scan.coverity.com/projects
    Yet for some reason not every compatible, active FOSS project is listed there. So finding projects that are not taking advantage of audit services is really simple for FOSS, because the information is publicly published.

    dougman, that FOSS in the last 30 years has not been audited enough is provable every time you check the projects’ code bases: automated audit tools go unused, and there is a lack of human teams auditing projects manually to make sure the automated tools found everything.

    By the way, it is not all of FOSS in the last 30 years that has gone unaudited. A larger percentage of closed-source software has never been formally audited, either automatically or manually. Only a fraction of all programs get audited properly.

    Part of dealing with the security problem is accepting reality: no one has done enough. Claiming that closed source is magically better than FOSS is a fool’s card. FOSS may have the lower defect rate, but that does not change the fact that defects can be damaging.

    By defect rate and CVE rate, the best OS to connect to a network is OpenBSD, and very few people use that. Next best is Linux. Windows trails in after all the BSD forks.

    The thing everyone has to remember is that the GLX and OpenGL (OS X and Windows) issues come from less than 10k lines of code. It does not matter how small a project is, or whether it is closed or open: it needs auditing.

  47. One needs to define “audit”. Many FLOSS projects were started by one or a few people and maintained by a few people. Is that code “audited”? It probably is, if folks were concerned about security and the code-base was small, but after you get to millions of lines of code, automation is essential. People just can’t do it all. It’s easy to change code for one purpose but inadvertently create some bug or hole; formal, exhaustive, routine and detailed auditing catches most of that.

    FLOSS has plenty of useful tools, but for many projects there is no verification that they are applied, at least none visible on the web. I think it would be a good step forward for FLOSS to check itself out systematically, by people other than the original programmers, to improve the code. The kernel and LibreOffice did that and found a zillion things that needed fixing. Why not do it? There is a per-project cost, but as FLOSS is now used by more than a billion people, the potential benefits are huge.

  48. dougman says:

    “FOSS has not been audited properly for thirty years”… please cite your source.

    “Certified Microsoft Troll”…so you lie to people over the phone, telling them their computer has a virus?

    “I can practically guarantee that the equivalent of HeartBleed or ShellShock will occur some time in 2015.”…great! Let’s start a wager… I have some bitcoins to offer up, what say you?

    On a sidenote, M$ has accepted BTC as well now: http://blogs.microsoft.com/firehose/2014/12/11/now-you-can-exchange-bitcoins-to-buy-apps-games-and-more-for-windows-windows-phone-and-xbox/

  49. DrLoser says:

    Anyhow, delving deep into the bran-tub once more:

    The problem of improperly audited FOSS causing many OSes to catch security exploits is over 30 years old. Really, a moron commenting on security does not know this?

    “A moron?”

    And let’s just make your claim clear here. Apparently, FOSS has not been audited properly for thirty years.

    Which might, I suppose, be believable if your only source of information is random Googling. But, as a Certified Microsoft Troll, I should at least point out that 1984 is possibly not a realistic starting point for network IT auditing.

    Do tell. As a moron, I am obviously wrong here. FOSS was a disgusting non-audited smelly failure from the start, according to you, oiaohm.

    (Except that it wasn’t.)

    DrLoser, do you have a crystal ball to predict where the next lazy coder will copy something from without auditing it? You don’t, right?

    Right.

    But I don’t need one. Given an intuitive Bayesian analysis of the last year or so (HeartBleed, ShellShock, thirteen separate CVEs in one place, the fact that people like you and Robert still cling to the absurd theory that an outward-facing FTP server is secure in any way whatsoever), I don’t need a crystal ball.

    All I need is common sense.

    I can practically guarantee that the equivalent of HeartBleed or ShellShock will occur some time in 2015.

    Why? Because auditing the likes of, say, the X stack is practically impossible.

    And because everybody has, for no good reason, bought into the Kool-Aid. Linux is “inherently secure.”

    One would imagine that the events of 2014, if not before, have thrown this absurd notion onto the pyre.

  50. DrLoser says:

    Another day, another piece of ill-informed and frankly ad hominem blather.

    Which reminds me, Robert. You have exercised your sacred duty of moderating (ie banning) ad hominem posts in the very recent past, haven’t you? Now, I know you get just as much amusement from oiaohm’s generic ignorance on matters of Physics as I do … but it wouldn’t hurt to slap the silly little bugger over the knuckles every now and again, would it?

    Or is oiaohm off-limits, for some bizarre and unfathomable reason?

  51. ram says:

    “except for the audio problem…”

    The solution there is to use NetJack.

  52. oiaohm says:

    DrLoser, if you want a historic example to show how stupid we all are:
    http://en.wikipedia.org/wiki/Ping_of_death (1998 and before.)
    The cause of the Ping of Death was that everyone picked up liberally-licensed sample code for a TCP/IP stack, never audited it, and then proceeded to use it.

    The problem of improperly audited FOSS causing many OSes to catch security exploits is over 30 years old. Really, a moron commenting on security does not know this?

    DrLoser, do you have a crystal ball to predict where the next lazy coder will copy something from without auditing it? You don’t, right?

    The web of interconnections between code bases is very complex. So whenever you see a FOSS code base being properly audited, be thankful. As with the X.Org audit, you are never 100 percent sure whether you will turn up another interlinked exploit due to common shared source. If you do, that is another exploit dead for good.

  53. oiaohm says:

    DrLoser, the big thing here: unlike X11, FTP has a modern, updated version that has been extended to fix its issues.
    If you will not believe me, read the cites.
    http://en.wikipedia.org/wiki/FTPS
    In 2005, SSL and certificates became part of the FTP standard as an official extension.

    http://moveitsupport.ipswitch.com/moveit/doc/en/moveitdmz_ftp_certificates_createclientcert.htm

    Those instructions are for current Windows servers supporting FTPS. FTPS can require a valid client certificate as well as a valid server certificate.

    Set up properly, even on Windows, the security difference between an SSL VPN and FTPS is bugger all.

    FTPS can move a lot of files quickly and securely on any OS. BitTorrent, not so much on the security side.
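    As a sketch of what RFC 4217 explicit FTPS looks like from the client side, Python’s standard-library ftplib ships an FTP_TLS class. The host, user and password below are placeholders; actually running this requires a reachable FTPS server:

    ```python
    from ftplib import FTP, FTP_TLS

    def list_home_dir_over_ftps(host: str, user: str, password: str) -> list:
        """Explicit FTPS (RFC 4217): TLS on the control channel, then on data."""
        ftps = FTP_TLS(host)         # connect; login() below issues AUTH TLS first
        ftps.login(user, password)   # credentials travel inside the TLS channel
        ftps.prot_p()                # PROT P: encrypt the data connections too
        try:
            return ftps.nlst()       # directory listing over the secured channel
        finally:
            ftps.quit()
    ```

    FTP_TLS is a drop-in subclass of the plain-text FTP client, which is exactly the “official extension” point: the old protocol plus TLS, not a new protocol.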

    You are trying to kick a horse you think is dead, DrLoser. The problem is that it is only sleeping, and it will take high offence and hurt you badly over the FTP idea.

    There is no way to update X11 to fix it; FTP turned out to be a case where a few extensions fixed its major problems.

    Robert Pogson: “the limited bandwidth of X is a blessing”? X11 actually uses more bandwidth than the newer protocols.

    X11 was designed to be like HTML5, where the server does much of the rendering and processing to hide network lag from the clients. The problem is that X11 was not designed to be secure.

    Some weird thing about an external auditor who has been reviewing the code base since 2012 — a job for life, really, but not one that is likely to achieve anything useful
    That is something I did not mention, because I was waiting for DrLoser’s usual idiot answer.

    You know that GLX issue he found? It happens to be in SGI code. Guess where sections of that SGI code also ended up: in Microsoft’s OpenGL implementation, of course. What about OS X? They used SGI code in their OpenGL as well. All that SGI code contains the same types of bugs. Exploits everywhere. That is what we just had.

    If you think Windows and OS X are disconnected from the X.org code base, you are wrong. A full audit of X.org is required precisely because it has picked up code from all over the place. The result of the X.org audit so far: if you are using any SGI code, don’t trust it.

    DrLoser, how can you achieve anything useful if you keep commenting on topics you don’t understand?

    Here is the big flaw in security by obscurity and closed source: if a developer has released code elsewhere, their bad coding practices become known, and once you know which bad practices to test for, finding exploits even in closed source is simple. In reality, the black box that closed source puts around security exploits gets broken very quickly.

    Please note that code from the X11 server might be copied into newer Wayland solutions, or even into Windows or OS X applications. X11 is MIT-licensed, so its source code can be embedded in closed-source applications.

    There are a lot of other BSD-, MIT- and similarly liberally licensed code bases that also need an auditor. DrLoser, welcome to the poisoned-well problem: FOSS is free to use, but if it is not audited and carries a highly liberal license, you can fairly much bet that fragments of it will make their way into closed-source OSs as developers take shortcuts.

    So people who use closed source and think FOSS sucks should be worried. There is every chance the next security exploit you suffer will come from FOSS that some developer copied in to save time. If you want to reduce the number of exploits in the wild affecting Windows or OS X, every liberally licensed FOSS project has to be audited.
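    The “copied fragments end up everywhere” claim can be illustrated with a toy clone detector. This is a hypothetical sketch, not a real auditing tool: fingerprint the whitespace-normalised lines of two code bases and intersect the sets. A shared fingerprint means a shared line, and a shared line means a shared bug.

    ```python
    import hashlib

    def fingerprints(source: str) -> set:
        """Hash each whitespace-normalised, non-trivial line of a source file."""
        prints = set()
        for line in source.splitlines():
            normalised = "".join(line.split())     # ignore formatting differences
            if len(normalised) > 10:               # skip braces and tiny lines
                prints.add(hashlib.sha256(normalised.encode()).hexdigest())
        return prints

    # Two imaginary code bases that both pasted in the same sample checksum code.
    base_a = "int checksum(b, n) { return sum(b) % 65536; }\nint only_in_a = 1;"
    base_b = "int checksum(b, n) { return sum(b) % 65536; }\nint only_in_b = 2;"

    shared = fingerprints(base_a) & fingerprints(base_b)
    print(len(shared))  # 1: the pasted line shows up in both
    ```

    Real clone detection is far more sophisticated, but the principle is the same: once a liberally licensed fragment is pasted into two trees, an exploit found in one is latent in the other.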

  54. DrLoser says:

    And if you’re feeling especially adventurous, you might venture into philosophical grounds.

    Tell me, do you or do you not favour “mechanism over policy?”

  55. DrLoser says:

    That’s true only if you want X to do things it was never designed to do, other than show pix and send clicks.

    You pique my interest, Robert.

    What was X “designed to do?”

    For the historically-minded, you are allowed to reference xterm, xload and xclock.

    You may further reference I39L if you wish.

    And just go crazy on Motif, why don’t you?

    … “Design?

    Monkeys throwing paint at a wall would have come up with a better “design.”

  56. DrLoser says:

    Some people just don’t agree with DrLoser. They want to move a lot of files in a hurry.

    And such people take extreme pains to ensure that the FTP server in question is not connected in any way at all to the rest of the network. Once again, Robert: IBM does not allow access to their corporate network through FTP. I am not a fan of IBM. I still believe that they are morally deficient and should ‘fess up for DeHoMAG. But I refuse to believe that IBM are quite that stupid.

    Incidentally, the driving force behind providing an FTP service in these and any other cases you can conjure up is nothing to do with “moving a lot of files in a hurry.” If that were the imperative in question, they would just use BitTorrent.

    No. The driving force behind providing an FTP service these days is purely and simply because it’s an antiquated and stupid and dangerous Unix protocol that has somehow survived twenty years of incessant Black-Hattery, and there are still FTP clients out there.

    You want to talk to the idiots using FTP clients? You have to provide an extensively firewalled FTP server.

    But don’t go telling me that this is otherwise a good idea. Because it’s just plain stupid and dangerous.

  57. DrLoser says:

    No software is perfect but FTP does its job on millions of servers despite DrLoser’s opinion.

    So do computer viruses, Robert.

    This is not necessarily a recommendation. There are several large government and/or criminal organisations that use outward-facing computer viruses for fun and profit.

    The difference between these large government and/or criminal organisations and you is that even they, for some unknown reason …

    … are not daft enough to propose using an outward-facing FTP server in 2014.

    Perhaps you know something that large government and/or criminal organisations don’t? Naturally you will have extensively researched the copious FTP vulnerabilities that are inherent in this otherwise obsolete protocol.

  58. DrLoser wrote, “FTP has a modern, updated version that has been extended to fix its issues… No it hasn’t.”

    No software is perfect but FTP does its job on millions of servers despite DrLoser’s opinion.

    Here’s an example: Indiana University’s Scholarly Data Archive
    “Once a user has an SDA account, the service can be accessed from any networked host which offers at least a TCP/IP based file transfer protocol client, including high performance access methods, namely parallel FTP (PFTP) and Hierarchical Storage Interface (HSI), as well as an HPSS API available for programmers.”

    and IBM/DOE High Performance Storage System

    Some people just don’t agree with DrLoser. They want to move a lot of files in a hurry.

  59. DrLoser wrote, “X is architecturally broken.”

    That’s true only if you want X to do things it was never designed to do, other than show pix and send clicks. It does that quite well. I noticed the Little Woman’s new/old thin client was clogging up the network: it was Flash running amok on a pop-up screen. It turned out I had Beast running on a 100 Mbit/s connection. Fixed that. Now Flash can go nuts and the rest of the network doesn’t care. Sometimes, the limited bandwidth of X is a blessing.

  60. DrLoser says:

    DrLoser, the big thing here: unlike FTP, which has a modern, updated version that has been extended to fix its issues…

    No it hasn’t.

    X11 is still broken and the protocol is classed as not fixable.

    Yes it has.

    All the rest, oiaohm, was just babble. Some weird thing about an external auditor who has been reviewing the code base since 2012 — a job for life, really, but not one that is likely to achieve anything useful — and the usual claim that “fixing things [re NVidia] goes faster with FLOSS.”

    You can repeat this as often as you like, but it doesn’t make it so. And X is architecturally broken.

    As you say, there is no way around that.

  61. oiaohm says:

    Robert Pogson
    I’ve set up a bunch of labs using only Vesa drivers. You can do a lot of pointing, clicking and gawking with that. There’s no DRI there, just data.
    No: you don’t need GLX or DRI for X11 to be running various forms of scripting.

    A classic evil was the X font server, which some people still leave installed on their X11 clients, where font-hinting scripts could stall the whole X server. Then you have horrors like http://en.wikipedia.org/wiki/Display_PostScript — a super-complex scripting language that was included in some servers.

    With or without DRI or GLX, X11 is not assured to be just data.

    Please also remember that Mesa will provide software-rendered GLX on the X11 Vesa driver, even without DRI.

    http://www.phoronix.com/scan.php?page=news_item&px=Nzc3Nw

    The Vesa driver is a fail-safe driver. It also burns power like nothing else, because it has no clue how to power-manage.

    Unless you very carefully strip X11 extensions, you can very quickly find that you really are running code on the thin client, such as software GLSL scripts.

  62. oiaohm wrote, “The base X11 protocol is about sending and receive data you cannot do very much with this.”

    I’ve set up a bunch of labs using only Vesa drivers. You can do a lot of pointing, clicking and gawking with that. There’s no DRI there, just data.

  63. oiaohm says:

    X11 is about sending/receiving data, not code.
    Robert Pogson, this is a common misunderstanding of X11.

    The base X11 protocol is about sending and receiving data; you cannot do very much with that alone. Everything that renders to the screen or processes input is an extension, and many of these extensions include scripting, including some that process input.

    The issue here is that early X11 very much followed the path of present-day HTML5 and JavaScript. Network speeds were way too slow.

    The X11 protocol transports code that is run by the X server. But unlike HTML5, X11 lacks any form of cross-site/application scripting protection.

    All this is irrelevant when a teacher or student can look over the shoulder of another and read text/watch typing.
    Not really. This lack of protection against cross-application attacks can mean users end up with odd messages from programs trying to talk to disconnected sessions. A student’s interface can play up under X11 because a prior student’s session crashed and their terminal has the same IP address. Please note that not all X11 extensions have been found to check the X11 cookie value before accepting instructions.

    The ssh recommendation is no joke; in larger networks it is issue prevention.
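    For context on the cookie check: X11’s stock authorization, MIT-MAGIC-COOKIE-1, is just a 16-byte random secret compared byte-for-byte, and it crosses the wire in clear text unless tunnelled, which is part of why the ssh recommendation matters. A minimal Python sketch of the idea (not the actual X server code):

    ```python
    import hmac
    import secrets

    # MIT-MAGIC-COOKIE-1 is nothing more than a 16-byte shared secret; the
    # X server compares the bytes a client presents with the stored cookie
    # (normally read from ~/.Xauthority).
    COOKIE_LEN = 16

    def new_cookie() -> bytes:
        return secrets.token_bytes(COOKIE_LEN)

    def cookie_ok(stored: bytes, presented: bytes) -> bool:
        # Constant-time comparison.  On the wire the cookie itself travels
        # in clear text, which is one reason to tunnel X11 over ssh.
        return hmac.compare_digest(stored, presented)

    server_cookie = new_cookie()
    print(cookie_ok(server_cookie, server_cookie))  # True
    ```

    An extension that accepts instructions without running this check at all is effectively unauthenticated, cookie or no cookie.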

  64. oiaohm wrote, “X11 Network sniffing keylogging is why you should run X11 using ssh. X11 server side running client code is how come X11 allows complete key logging.”

    ?! X11 is about sending/receiving data, not code. All this is irrelevant when a teacher or student can look over the shoulder of another and read text/watch typing… X11, raw as it is, is a beautiful solution for many applications.

  65. oiaohm says:

    DrLoser, it pays to follow up on those X11 server bugs a little more closely.
    http://nvidia.custhelp.com/app/answers/detail/a_id/3610
    Did you fail to read this bit? As Linux users we have been screwed, but X.org has at long last given the X11 server a formal code-base auditor.

    Every single GLX fault that was in the X.org X11 server also happened to be in the binary libraries provided by Nvidia. Please note that AMD’s Catalyst does not contain the faults, because AMD audited the code base when they took over from ATI, fixing a huge number of bugs and so solving the Catalyst driver’s stability problems. Yes, this issue highlights one of the big problems with closed source: if the X11 server and the Nvidia driver were both closed source, these issues would have remained under the carpet for longer.

    These are last year’s X11 fault finds by the same auditor, Alan Coopersmith from Oracle.
    http://lists.x.org/archives/xorg-devel/2013-May/036276.html
    The client-side libraries were worse than the X11 server extensions.

    Basically, Alan Coopersmith was formally set up in 2012 to start auditing. Just to point out: he has not completed the audit of the X11 server code base yet, so expect him to find more.

    This is what all open source projects need: a formally appointed, paid auditor. The many-eyes theory is valid as long as there are paid eyes to do the looking.

    The X.org X11 server code base has to be audited because even when the Linux world changes to Wayland, the XWayland layer for old-application compatibility will still be based on the X.org X11 server code base.

    DrLoser, the big thing here: unlike FTP, which has a modern, updated version that has been extended to fix its issues, X11 is still broken and the protocol is classed as not fixable.

    VNC, Xpra, NX and RDP are all newer protocols; NX is a heavily modified version of X11. The newer protocols share a useful feature: the thin client can be restarted without users losing their work.

    Basically, VNC, NX, RDP and Xpra do not use the X11 protocol over the wire. ssh wrapping only helps so much with X11’s issues. Three of those alternative protocols include audio as a feature.

    All four video-encode the windows of the application and send a video stream over the wire. No client-application instructions run on the thin client.

    If you set things up with only Xservers on clients, you diminish the insecurity a lot as you do when you keep applications software off the clients.
    No: you diminish the insecurity a lot more by using a client app for NX, RDP, VNC or Xpra, since you remove access to random X11 extensions on the thin clients. You also increase crash resistance, because a user can reconnect after a thin-client lockup. Keeping software off clients really means using NX, RDP, VNC or Xpra: X11 was designed so that an application can give the server instructions to perform actions, so sections of the application’s code end up running on the X server if you use X11 alone, which breaks the idea of keeping application software off the clients.

    NX, RDP, VNC and Xpra all use video/audio streams, items the application has not produced itself, giving the thin client much more predictable data to deal with.
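    A back-of-envelope calculation shows why these protocols encode video rather than ship raw framebuffers: even a modest desktop, sent uncompressed, would swamp a 100 Mbit/s LAN. The resolution and frame rate below are illustrative, not taken from any particular protocol:

    ```python
    # Raw framebuffer bandwidth for a modest thin-client display.
    width, height = 1024, 768
    bytes_per_pixel = 3                 # 24-bit colour
    frames_per_second = 30

    raw_bytes_per_second = width * height * bytes_per_pixel * frames_per_second
    raw_megabits_per_second = raw_bytes_per_second * 8 / 1_000_000
    print(f"{raw_megabits_per_second:.0f} Mbit/s uncompressed")  # 566 Mbit/s
    ```

    Video encoding (plus only updating regions that changed) is what brings that figure down to something a shared LAN can carry for a room full of clients.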

    DrLoser is a loser on this topic because he does not know how to make the argument properly. You completely missed the point that using the X11 protocol alone means application code running on the thin client, and that is a strict no-no.

    NX, RDP, VNC and Xpra are all open to network-sniffing key-logging as well if they are not operating in encrypted, per-application modes. DrLoser, you raised FTP, and here is a classic case of you repeating the same error. Network-sniffing key-logging is why you should run X11 over ssh; the X server running client code is how X11 allows complete key-logging. Badly set up NX, RDP, VNC or Xpra could also allow complete key-logging over the network. The solution to the key-logging problem is more than just “do not use X11”; in reality we need per-application security.

    DrLoser, the only valid reason I have to use X11 over a network inside ssh is diagnostics: cases where xrdp, NX or Xpra for some reason cannot run an application.

  66. DrLoser says:

    I use X because it’s good enough and it’s easy to set up.

    Not the most convincing Security Mantra I have ever heard, Robert. I hear echoes of FTP servers when I read this …

  67. DrLoser says:

    On a neutral base, and I’d be genuinely interested:

    Do you run X with extensions, Robert? (I would assume not. I wouldn’t, myself. We both know the limitations of the protocol.)

    And do you run it as root? If not, it would be instructive to hear what privilege limitations you impose on it.

  68. DrLoser says:

    There are layers of security you can run to protect thin clients and you don’t need to run X at all.

    I take it we agree that X is hopelessly full of security holes, and has been since the 1980s, then.

    This isn’t much of a recipe for “universally available thin clients via X,” though, is it? Because any use of X that is remotely outward-facing is immediately at risk.

    Inside a LAN, X is only vulnerable to the everyday things. You know, privilege escalation. Key-logging. That sort of thing. Inherent in X, but it doesn’t really matter, because everybody on a LAN trusts everybody else, don’t they?

    At home, maybe. At work, not.

    Now, here’s the obvious question. If, in the ideal thin-client world, “you don’t need X at all …”

    Under what circumstances would you ever use it?

    Once, long ago, I seem to recall that an expert in the field was quite firm in his opinion that the best thing, the best thing ever, that happened to a School System in Easterville was ….

    X.

  69. DrLoser wrote about thin clients: “… including the security holes in X.Org.”

    Quoting his/her citation: “How critical these vulnerabilities are to any given installation depends on whether they run an X server with root privileges or reduced privileges; whether they run X servers exposed to network clients or limited to local connections; and whether or not they allow use of the affected protocol extensions, especially the GLX extension.”

    If you set things up with only Xservers on clients, you diminish the insecurity a lot as you do when you keep applications software off the clients. Many labs are on private LANs too. There are layers of security you can run to protect thin clients and you don’t need to run X at all. There are options. I use X because it’s good enough and it’s easy to set up.

  70. DrLoser says:

    Is thirteen separate CVEs within a single monolithic “product” within a range of twenty four possibles (X, in accordance with the First Principle of Unix, “does everything it can think of, and almost all of it badly”) a record, Robert?

    Who knows? The Linux community is endlessly ingenious. This is admittedly a new best for Linux software, but I’m sure it can be beaten quite easily.

    Forks of OpenSSL spring to mind as a fertile opportunity for Linux hackers.

  71. DrLoser says:

    Briefly, anybody still running a screensaver on a thin client is either an outright nutter or someone with hardware so ancient that hibernation doesn’t work. Or, quite possibly, somebody depending on Linux to drive the thin client … which comes to the same thing.

    Anyway, thin clients are definitely the way to go. I can’t think why nobody much uses them any more. You get all the joyous experience of 1980s computing … including the security holes in X.Org.

  72. ram says:

    “One can have a server or cluster of servers run much bigger and badder jobs than the typical PC.”

    You’re not kidding! Even if the displays need heavy graphics, as used in the movie industry, the Linux cluster behind them can have incredible amounts of computing power. For a VERY recent example, watch the new (part 3) Hobbit movie. All created and rendered on Linux, of course 🙂

  73. DrLoser wrote, “Why would anybody need a screensaver on a thin client, Robert?”

    The original concept was to save CRT screens from taking an impression of the desktop/login screen. Then it was a secure locking device to keep people out who didn’t know the password. In our case, it serves no useful function so I disabled it.

  74. DrLoser says:

    … some screensavers clog the network.

    Why would anybody need a screensaver on a thin client, Robert?
