Another Nugget from M$

This Patch Tuesday includes something to stop valid .txt, .doc or .rtf text files from causing a malicious .dll to be loaded from the same network folder…

Days ahead of the patch, folks all around the world have been able to take over almost any PC on a LAN running that other OS just by placing a malicious .dll on some networked storage device. The mind boggles. I can see SANs intended to share files among a group being used to own the whole LAN. I can see malware being crafted between now and then to seek out such situations and bring the house down. These folks will work overtime to exploit a hole large enough to drive oil-tankers through.

If there ever was an instance that pushed people over the threshold to migrate to GNU/Linux, this could be it. Stay tuned to see whether the lights dim.

see M$

Yep, “Important” but not “Critical”, even though every version of that other OS from XP SP3 to “7” Ultimate 64-bit is vulnerable to remote code execution if a bit of malware gets in anywhere on the LAN. How much sleep will be had tonight? How many millions of machines will go unpatched for the next few weeks? What horrors are to follow?

If this is not enough to spoil your day, IE is doing privilege escalations again… Hey! Trolls! Are any of you going to claim that other OS is secure after this?

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology.

40 Responses to Another Nugget from M$

  1. oiaohm says:

    Ivan, please read closer.

    1998 is not Scientific Linux.
    “Fermi Linux project in 1998”

    There is also “CERN Linux” from before 2004, Ivan, which was fully maintained by CERN, and a stack of other smaller location-based distributions.

    Fermilab, CERN, DESY and ETHZ: yep, all of them were running their own distribution builds of Red Hat before Scientific Linux started in 2004. DESY and ETHZ merged in later.

    The difference is that from 1996-2006 the locations that make up the Scientific Linux team today were doing customised builds from scratch. After 2004, with the start of Scientific Linux, they started doing custom installer builds off of Scientific Linux to meet their needs. So Scientific Linux is built generic.

    Scientific Linux is the name of the joint project. Grants got a bit tight around 2004, so duplicating the same work could not be justified. What is the one thing all of them have in common? They are all research centres, which explains the name Scientific.

    Ivan, you keep calling me a ham when in fact you are one. My statement about a grant dry time in 2004 as a cause of the start of Scientific Linux is 100 percent correct. I did not say who the grant dry time was for. You searched for an email that you pulled out of context to try to prove me wrong. You really should have read what you used.

    “There are other developers of Scientific Linux. Some from Fermi and some from CERN, DESY and ETHZ. Recently Fermi added 2 people to the Scientific Linux Team.”

    Notice something here: staff from 4 different locations work on Scientific Linux. Fermi and CERN are the two that started it. All 4 would have to cease to need Linux for Scientific Linux to disappear.

    You just provided the evidence that you are lying or incompetent, Ivan, with your claim that Fermilab disappearing would have a major effect on Scientific Linux.

    The old Fermi Linux I would never have used, because it was only supported by one location; your arguments would have been true of it. Basically you have been completely bogus about Scientific Linux. You are failing to see the split. Fermi Linux + CERN Linux made Scientific Linux; yes, its birth was a distribution merge. Why do I remember it so clearly? Distributions that merge with other distributions are rare in the Linux world; you could count them all on one hand. Splitting is common.

    The main site is at Fermi, and CERN maintains a full mirror of the Scientific Linux site; the other 2 maintain internal mirrors of the site so it could be restarted if CERN or Fermilab disappeared. The 1998 start of Fermi Linux is only half the story. There is another half-story with CERN Linux itself.

    That email is from a person who started at Fermi. CERN Linux developers are a lot less likely to talk on-topic, even though they did some quite interesting cluster-management work before they joined Scientific Linux.

    Since that person is Fermi staff, they can comment on what Fermi is doing. You do see emails from time to time from the other three mentioning what they are staffing, to avoid duplication.

  2. Ivan says:

    “Yes the start of Scientific Linux in 2004 also lines up with a grant dry time.”

    Try 1998, I thought you knew better than to go to Wikipedia for information, Mr. Ham: http://listserv.fnal.gov/scripts/wa.exe?A2=ind1108&L=scientific-linux-users&T=0&P=33214

  3. twitter says:

    It’s funny that oldman has all day to harass Pogson but no time to learn how his job can be done better at less cost. I’m afraid that his “educational institution” is a Microsoft marketing firm and his job is propaganda and harassment.

  4. oldman wrote, “Speaking as an employee of a large educational institution with a fairly large IT staff, I can assure you that the last thing we need is more work”.

    Other large educational institutions found switching to FLOSS reduced their workload so they could put much more effort into things that actually brought value instead of repairing that other OS and its fleas.

    “Since I’m the divisional technician, I do almost everything from my office. I currently support 960 Sun Ray appliances and 120 PCs. However, 85 percent of my time is spent on these 120 PCs we have left in our division.”

    see http://sysdoc.doors.ch/SUN/saskatchewan_casestudy.pdf

    That’s from 2000 with Solaris but the same relationship holds for GNU/Linux. Where I worked last year the GNU/Linux machines caused a tiny proportion of my calls for service even though they were the vast majority of machines. I went from not being able to keep up to being able to ignore IT and use it.

    Largo, FL, saves $millions using GNU/Linux:
    “the elected officials who are responsible for Largo’s IT budget certainly know about and notice Linux, because using Linux instead of Windows is saving the city a lot of money”

    oldman, don’t you care about spending the budget efficiently?

  5. oiaohm wrote, “This is where FOSS is effective: most things are paid for once, not many times over.”

    Amen. The efficiency of things like the kernel development being centralized or a repository being maintained by Debian or the openness of communication around FLOSS is priceless. FLOSS is a cooperative project of the world. Everyone does some part and everyone gets to use the result. It is beautiful.

    The assumption that somehow a company working to develop a product and charging many times what it costs to develop is better than FLOSS is a huge error in logic and it leads to many pitfalls: malware epidemics, forced migration to the latest and greatest years before there is any need, the hidden tax of M$ at the retail level, and the bugs that users must endure for years instead of days.

  6. oiaohm says:

    “This is the problem oldman you said that FOSS development is -not- exactly paid for work so were disrespectful.”

    Typo: I missed the “not”.

  7. oiaohm says:

    oldman, most likely you are only medium-level: where your software bill is not insane and where you are not yet at the point where you can justify maintaining your own OS to save on costs.

    oldman, you are paying Red Hat support contracts on stuff, right? Do you use your developer hours each year, or do you let them expire unused?

    Basically, there are commercial solutions around Linux.

    I will take it that you were an arrogant bit of work who tried bossing around a developer working for someone else to get what you wanted. So they told you to go and do it yourself, which is perfectly justified.

    This is the point: most FOSS developers are truly not working for free, but are paid full-time by someone. If they happen to be fixing your problem and you are not paying them, it affects their employer as well, so you were lucky.

    With commercial closed-source applications, I guess you have a correct way to report bugs so they get repaired. Now if you don’t report bugs, you have to wait for them to be fixed in good time, right?

    The simple fact of the issue here is that FOSS and closed source are not much different. With a free-download closed-source program, report a bug and it may or may not be fixed; the developer might tell you to go jump because they are not interested. Guess what: it is the same problem. If you are not paying their wage, you cannot expect them to do something. The nice part is that FOSS developers will tell you if they are expecting to do something, so you don’t have to pay for something someone else already wants done.

    This is where FOSS is effective: most things are paid for once, not many times over.

    A person using closed source can claim it’s unnecessary to report bugs in the software they paid for, then complain those bugs are never fixed. It is the same problem oldman is having with FOSS: just not getting the “you have to pay” bit. Since he never paid, the FOSS developer does not have to jump.

    Oldman, your “unnecessary” claim is crap. The simple fact is that you are deciding the defect is not worth paying to fix; instead you will wait for someone else to pay to fix it. So you no longer have any right to complain about it. It’s put up or shut up, basically.

    Paying for software happens in different ways with FOSS and closed source. Both kinds of development are funded.

    This is the problem, oldman: you said that FOSS development is exactly paid for work, so you were disrespectful. When you are not paying, all you can do is ask nicely. If you want something done now in the FOSS world, reach into your pocket and kick up some payment to get it done. Payment can take the form of your own coder, or cash paid to Red Hat and others.

    The simple fact of the matter is that when you are truly large, it’s cheaper to employ your own developers than to pay a software or support bill where someone is taking an extra cut.

    In some cases developers at different companies do bug-for-bug trading: they have the skills to fix bug X but need bug Y fixed, and the developer with the skill to fix bug Y needs bug X fixed. Barter payment.

    Oldman, truly, what do you have to trade in the FOSS world to get stuff done? This is where the idea of “magically free” disappears. Where the FOSS world becomes different is that it is a true bazaar model.

    Some vendors give out free gifts, some take cash, and some take barter; all of them are in public with no hidden workshop, so you can see what they are working on in FOSS and don’t have to pay twice for the same item. Of course, the ideal is being able to trade in all areas of FOSS.

    The critical question here, oldman, is how much doing the work will cost you compared to the savings. Will you be able to have more staff and be cheaper overall by going away from closed source?

    Remember, many hands make light work. Having more hands on deck when things go badly wrong is helpful.

    I like the idea that the last thing we need is more work. You are overworked now, right? So you need more staff. That is what going FOSS is all about: more staff for less spend, so more manpower to be used. The reason is that someone is taking a percentage of third-party cash payments and giving it to shareholders. Barter trade avoids percentages.

    It’s always funny hearing closed-source people trying to justify not paying for FOSS development in one form or another.

    One of the major reasons why large organisations go with their own support solutions is more staff to assist their income-producing operations with problems.

    Really, oldman, when things break, what is more important: manpower to address the problem, or the fact that you paid for a closed-source licence?

    On a tight budget, sometimes it’s one or the other. FOSS is more manpower on the ground, with a cost in manpower to look after it. But when things go wrong, most feature-adds and other things can be stopped until the network is back online. The result is normally the network back online sooner, because you had more manpower to do it.

  8. oldman says:

    “Basically, FOSS is not free from costs: repairing costs. Linux is always fixable; the question is, will you pay the cost?”

    Speaking personally, I’d rather pay a commercial vendor and get on with my work, thank you.

    Speaking as an employee of a large educational institution with a fairly large IT staff, I can assure you that the last thing we need is more work, and it IS work, Mr. oiaohm, that is IMHO unnecessary, especially in light of the fact that there exist commercial closed-source applications that meet our needs.

  9. oiaohm says:

    oldman: “And I have seen them fixed promptly.”
    My issue is cases like MS11-071: fixed promptly but not fixed correctly. The core bug at the centre of that has been around for over 10 years.

    “By whom, Pog? Have you ever even looked at the kernel source? Let’s see you fix even the smallest bug, Pog.

    The so-called fix-ability of Linux is IMHO the biggest pile of crap, meaningless to 99.999% of computer users.”
    The fix-ability of the Linux kernel has improved in recent years as the kernel gets cleaner internally.

    For large users, fix-ability is a big selling point.
    Medium users can go to Bugzilla and the like and get workaround options and patches. Most bugs in the Linux kernel can be avoided by disabling particular features in the build; fix-ability is not always fixing the code. Rebuilding without features X, Y and Z so that bug A cannot happen still fixes the problem at hand, just like MS patching over bugs to prevent their exploit instead of fixing them.

    Small users mostly don’t have the time.

    I’ve also been told “if you can’t wait, fix it yourself” by some twerp of a developer. To be correct, this is a valid response large-to-large; large-to-medium or -small, it is not really suitable. Maybe the developer did not understand your size and thought you were large enough to have the resources to deal with it in-house.

    Remember, FOSS developers are not paid by you directly in most cases. They are paid by someone else, who gives them a list of requirements that is more dominant than your requests.

    It can be part arrogance. If you are paying the FOSS developer yourself, you have the right to demand when they do stuff. If you are not, you should accept that they may answer “do it yourself”, oldman. One option for doing it yourself is ringing Red Hat and other Linux support companies and asking how much it costs to have bug X fixed. Red Hat support contracts include a certain amount of Red Hat developer time that you can allocate to fixing the issues you are suffering from.

    Oldman, were you paying that developer’s wages, yes or no? If you are paying a support contract, you are paying wages to get that done. If you were not paying wages, the developer was perfectly within his rights to respond the way he did.

    Basically, FOSS is not free from costs: repairing costs. Linux is always fixable; the question is, will you pay the cost?

  10. That’s where you should put M$’s stuff, then.

  11. oiaohm says:

    oldman

    “How many commercial entities are going to bet the farm on a non-commercial, R&D-only distro that gives no guarantees and sells no support for its product? I think this issue is more important than Scientific Linux’s viability as a substitute for Red Hat.”

    I am really not betting the farm on Scientific Linux. Scientific Linux is used as a gap filler: you have machines for which you pay Red Hat for 24/7 support, and Scientific Linux for the ones you are not paying support for. It also prevents people from ringing Red Hat over a non-supported machine. Basically, it is what would otherwise be self-supported Red Hat Enterprise anyhow.

    Since both can be operated by staff trained exactly the same way, there is no training overhead. You can also be cheeky: run new workloads on the Red Hat with support and, when they are proven stable, migrate them to Scientific Linux.

    If CentOS sorts itself out to be dependable again, it becomes a gap-filler option again. Gap fillers are gap fillers.

    Also, for some things with Linux you have no option but to go community: http://www.rocksclusters.org

    oldman
    “They provide insurance and a ‘throat to choke’ for management.”
    True, yes, a throat to choke, but this depends on the size and scale of company you are talking about. Looking for a throat to choke is a medium-to-small-enterprise thing. If management reads the Red Hat self-supporting contract, they find out that you don’t have the right to choke Red Hat either if you have not paid for support.

    So this is partly a matter of management skill. Also, anyone who reads the EULA on Windows finds out very quickly that you cannot choke Microsoft over almost anything. Yet businesses still use their products.

    Management is also worried about costs. Red Hat plus a community, non-commercial R&D-only distro is cost management. If the quality is there, there is no issue using it. If you are set up with cfengine, migrating between them is not an issue.

    Yes, cases do happen where something is broken in Red Hat but not broken in Scientific Linux and CentOS.

    It’s really stupid to bet the farm on a single distribution unless you have the resources to fully maintain it yourself.

    The combination of Red Hat, Scientific Linux and CentOS gives you a 3-way bet. Set up right, you can migrate between them as issues or budget limitations affect you. You are not betting the farm, because you have fall-back locations. Betting the farm is using Red Hat and SUSE exclusively with no migration system, because if you cannot pay the support contracts you are in trouble.

    This is why I always have a fall-back location: a location I don’t want to be operating from, but one I know I can operate from if forced to by budget. I had to once, after a data-centre fire: operating costs were cut so that the centre could be replaced sooner. Insurance is good until, after the fire, you find out that someone missed updating it with the real value of the hardware in there.

    The truth of the matter is that if all the distributions I am using now disappeared, it would cause me about a week’s headache to migrate to new distributions, mostly working out how the cfengine templates have to be altered to suit them. Basically, the systems I work in are very careful not to bet the farm on anything. Lessons of September 11.

  12. oldman says:

    “GNU/Linux and FLOSS never really breaks because it’s always fixable.”

    By whom, Pog? Have you ever even looked at the kernel source? Let’s see you fix even the smallest bug, Pog.

    The so-called fix-ability of Linux is IMHO the biggest pile of crap, meaningless to 99.999% of computer users.

    “I have seen bugs not fixed until the next release many times and if you want one bug fixed, you have to pay for a whole new licence.”

    And I have seen them fixed promptly. I’ve also been told “if you can’t wait, fix it yourself” by some twerp of a developer. His package promptly went in the garbage.

  13. That’s probably “business as usual” but in reality, GNU/Linux and FLOSS never really breaks because it’s always fixable. With that other OS and non-free software, you’re stuck. I have seen bugs not fixed until the next release many times and if you want one bug fixed, you have to pay for a whole new licence. That should be illegal.

  14. oldman says:

    “Really, due to what Linux is to Fermilab, the Linux department will be the last to go, not the first. Most other departments are money black holes, but research is very much that way.”

    I would tend to accept your analysis of Fermilab’s viability. But you don’t address a more important question: how many commercial entities are going to bet the farm on a non-commercial, R&D-only distro that gives no guarantees and sells no support for its product? I think this issue is more important than Scientific Linux’s viability as a substitute for Red Hat.

    I am sure that you are aware that Red Hat and SUSE are where they are for a reason, Mr. Oiaohm. They provide insurance and a “throat to choke” for management.

  15. oe says:

    “Windows is truly a crawling horror that should be put down.”

    That’s well said.

  16. oiaohm says:

    “However, that problem is gone starting with Vista.”
    Phenom, the problem is still in Vista; the flaw behind MS11-071 is very evil. It’s all to do with DLL loading. A program is made to open a file in a directory in a particular way; this could be .NET running a local program from a web service, or something else.

    Because the program is run in that particular way, its DLLs are loaded from where the file was opened instead of from the system directory or the application’s install directory. Even sandboxed.

    So basically: trick a service into directly accessing a file on a filesystem by spawning a new copy of itself, place a .dll at the location it uses, and bingo, you have the rights of the service. This is the 1998 privilege exploit: how to get to the SYSTEM user on Windows NT.

    The issue is that, in theory, this will still work with Vista and 7; all you need to do is find a service doing the right thing. With third parties providing services, this could be only a matter of time, Phenom.

    Basically, the 1998 flaw never got fixed, other than by removing the way to trigger the flaw by that path. Since then, many more paths to trigger the same flaw have turned up, fixed the same way each time: remove the path to the trigger, not the design flaw causing the problem.

    The design flaw causing the problem: why should an application load a DLL from the directory of a data file instead of the DLLs from the system directory or its install directory? 99 percent of the time it should not.

    This issue also shows up in some programs as the strange effect of files not opening when they are in particular directories. Most people don’t notice that the directory contains DLLs, and that is why the file they are trying to open will not open: the DLL in that directory is the wrong version for the program they are trying to open it with.

    So yes, the fault exists not just in Microsoft’s parts but also in third parties’.

    All these classic core design flaws provide attackers with a lot of simple ways in.

    Do you really think it wise to leave something like this floating around and hope all services are coded in a way that cannot be exploited, Phenom?

    Not that sane, right, Phenom? Lock down the DLL loading system and the problem is solved for good: all paths undiscovered so far will be rendered useless.

  17. oiaohm says:

    Ivan, the SL core teams are CERN and Fermilab. Sorry, there are two official locations working on Scientific Linux, not one. So even if Fermilab stopped, CERN would have to stop as well for it to die.

    Also: “Fermilab, CERN, and various other labs and universities around the world.” Yes, it’s not Fermilab and CERN alone either. It’s a nice big shared project.

    Also, Ivan, you are being an idiot. Fermilab already has work lined up through 2015 for the server department. Renting out the server rooms to other locations for data processing is something Fermilab does when they have slack, to try to fill budget black holes. So the Linux side basically has ways of getting funding by renting out processing power. Fermilab has a lot invested in the Linux area. If government funding is lacking, the Linux section is the area they will be depending on.

    The idea that there will be no need is so wrong. Always have a plan B when doing research, in case you cannot get grants. Fermilab has a plan B. Political-climate issues have hit Fermilab before. Data processing for others does not require big investments of money to build things. Basically: waste not, want not, if you want to live on.

    http://www.fnal.gov/faw/future/timeline.shtml Have you not read this, Ivan? Projects are not in fact over; one group of projects stops and the next lot starts up.

    Sections of Fermilab are winding down, other sections are still running perfectly, and new sections are winding up. Of course, a hard grant time will make the Linux side one of their most valuable assets, since it can be profit-producing and so allow some projects to go forward even without government grants.

    Yes the start of Scientific Linux in 2004 also lines up with a grant dry time.

    Really, due to what Linux is to Fermilab, the Linux department will be the last to go, not the first. Most other departments are money black holes, but research is very much that way.

  18. We often see LANs with a shared folder for everyone. If any malware or any person does something malicious there, it can affect the whole system. That’s why the vulnerability needs to be fixed.

  19. Ivan says:

    “to be correct Centos even in there own mailing lists large sections of there maintainers were pissed and recommended the move. Its not says who.”

    Who, Dag? He had nothing to do with the project beyond building RPMForge packages, and he only unsubscribed from the mailing list. No developers left for SL as you claim. So yes, it is “says who.”

    “That is going be no time soon.”

    Fermi’s particle-collision experiments are over; as soon as the rest of their experiments end, the distribution will disappear, as there will be no need for it.

    Remember, Fermilab is the only place paying people to work on SL, and they are in the process of winding down, making it incredibly stupid to use it as a platform, because it may not be receiving updates next year or the year after. It is doubly stupid when you factor in the political climate of the United States and the pending budget cuts, which will go after useful things like research at Fermi.

    So, unless you have an army of basement dwelling code-gnomes prepared to rebuild Enterprise Linux when Fermi shuts down, SL Linux is not a solution.

    “scary enough Microsoft uses less of there tech than what Fermi or Cern does in Linux. So by your logic we should not be running Windows because Microsoft does not have enough invested in it so can change away from it too simply. Be very careful where you go with logic.”

    You are going to have to explain how you came to this conclusion as, once again, that is not what I said.

  20. Phenom says:

    Pogson wrote: Ah, backwards-compatible vulnerability. That explains why it was found in all versions from XP to whatever.

    Honestly, that was a problem in XP indeed, because many old apps required admin privileges to run. (Despite the fact that MS has officially discouraged that practice since Windows 2000, no one listened.)

    However, that problem is gone starting with Vista. There, old apps are sandboxed, and access to folders like Program Files, Windows, System32, the root, etc., is virtualized to app-specific folders the OS creates on the run. Thus the app runs with lesser privileges and is lied to about system folders. In effect, no damage can be done. A rogue app can only put a file in a folder that only it sees and no one else would ever access.

    Please do not compare with a decade-old release; it does you no honor.

  21. oiaohm says:

    Backwards-compatible vulnerabilities are what make Windows such swiss cheese for exploiting.

    They are vulnerabilities that do not die; they just change the way you can exploit them, because you only have to avoid the filters MS added over them.

    There are quite a few of these ongoing, repeating vulnerabilities. I just wish MS would bite the bullet and fix them. At most 3 percent of all applications would be affected; about 85 percent of all malware and viruses for Windows would be affected.

    The base vulnerability that makes up MS11-071 is also used for privilege escalation on Windows. So yes, because MS11-071 worked, there is everything needed to take over the complete system.

    The report in 1998 was about privilege escalation using the exact same flaw. Nice and serious, and being played down quite well, don’t you all agree?

  22. twitter says:

    I was not really talking to you, oldman, but the people who might listen to you.

    RealIT, you might ask the people who do HPC how to manage many thousands of individual computers. Business terminals are trivial next to that, but vendors like Red Hat have interesting services to offer along with management, such as unified storage which turns every desktop into a storage node. Of course, the free software world has DHCP servers that work with DNS for unique desktop assignment and location if that’s interesting to you. The real point is that package management is trivial and well mastered in the free software world.

  23. /etc/hosts is useful if you don’t have a central server/LDAP.
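    For a small LAN, a few static entries copied to every machine will do. The names and addresses below are hypothetical, just a minimal sketch:

```
# /etc/hosts: the same file pushed to every PC on the LAN
127.0.0.1       localhost
192.168.0.24    room101
192.168.0.25    room102
```

    After that, “ssh room101” works anywhere the file is installed, with no DNS server needed.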

    I use SSH all the time to manage PCs.

    e.g. create a directory of PCs, one empty file per machine:

    mkdir -p scripts/pcs
    cd scripts/pcs
    touch 192.168.0.24
    touch 192.168.0.25
    # ...and so on, one file per PC
    cd ..

    then create the script “all” (and chmod +x all after saving):

    #!/bin/bash
    # Run the given command on every PC listed in pcs/, in parallel.
    cd pcs || exit 1
    for f in *; do
        echo "$f"
        ssh "root@$f" "$@" &
    done
    wait
    # That will open a session as root on every PC in pcs and execute a command:
    # e.g. ./all date will check the date on every PC. The commands are sent in
    # rapid succession to each machine (each has an authorized key entry in
    # /root/.ssh/authorized_keys) and run in parallel, giving a report of the
    # time on each machine. Make the command "apt-get update;apt-get upgrade"
    # to update all the machines.

    That is a tiny example of what can be done with SSH. It is quite magical to command a bunch of machines to update simultaneously. This loads up the network and server, so it is a good test as well as useful. You can do other things, like find out who is running application X when they should be running application Y at work, or check on the number of processes or free RAM. The limitations are just in the imagination. I have used it to shut machines down on schedule, etc.
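    For instance, here are the kinds of one-liners worth sending through such a loop, shown running locally so the output is visible. This is just a sketch; adapt the commands to what you want to monitor:

```shell
#!/bin/bash
# Free RAM in MB on this machine, read straight from /proc/meminfo.
awk '/^MemFree:/ {print int($2/1024)}' /proc/meminfo

# Number of running processes, counted from the numeric entries in /proc.
ls /proc | grep -c '^[0-9]'

# Shut down at a set time (commented out so this is safe to run as-is).
# shutdown -h 22:00
```

    Sent through the “all” script, each machine reports its own numbers in turn.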

    Similarly, one can make a script to send commands to one particular machine, or identify the machines by name or room number instead of IP address.

    I don’t know of any real limit on how many machines can be controlled this way. I have used it for 100+ with no difficulty.
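    One way to do the name-to-address mapping is a small lookup table in bash. The names and IPs here are hypothetical, and the actual ssh line is left commented out so this sketch only prints what it would run:

```shell
#!/bin/bash
# Sketch: address machines by room name instead of IP.
# The name->IP table is made up; fill in your own LAN.
declare -A PCS=(
    [room101]=192.168.0.24
    [room102]=192.168.0.25
)

# run_on NAME CMD... : look up the IP for NAME and run CMD there.
run_on() {
    local name=$1; shift
    local ip=${PCS[$name]}
    if [ -z "$ip" ]; then
        echo "unknown machine: $name" >&2
        return 1
    fi
    echo "ssh root@$ip $*"    # dry run: print the command instead
    # ssh "root@$ip" "$@"     # the real thing; assumes authorized_keys is set up
}

run_on room101 date    # prints: ssh root@192.168.0.24 date
```

    The same table can drive the parallel “all” loop, so reports come back labelled by room instead of by IP.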

  24. Ah, backwards-compatible vulnerability. That explains why it was found in all versions from XP to whatever. I cannot think of any system using XP that I have used where this vulnerability would not have brought the house down. There was plenty of malware around which could have used this to propagate throughout the system.

  25. oiaohm says:

    Dr Loser. MS11-071: Vulnerability in Windows Components Could Allow Remote Code Execution. Is the one you need to read its a insane.

    Also just to top things off its been in metasploit from 6 months before the MS11-071 notice of its exist. Skill required script kiddy.

    There error is not only network drive infecting. Basically open a txt, doc or rtf with the correct dll in the same directory bang you are done. Instead of using the system wide dll it uses the dll in the directory with the txt or rtf file. Basically its I load the wrong dll. Something only Microsoft has ever managed to pull off.

    Its simply dll replacement. Lot of games have exploited this for mod loading. If main game loads data file from different directory the dlls in that directory get loaded so overriding the default dlls the application would have.

    Yes there is kinda a valid usage to this feature. But since its not controlled expect to find more of them. There has to be third party software out there that still has the same defect. Its simple to exploit it dll swapping.

    The correct fix is simple stop all dll swapping without authority. But this has not been done yet.

    Lets block loading dll from network drives and removable media instead. So you still could wrap the attack up in a zip file have person extract it locally and nuke them off the face of earth.

    MS11-071 is another case of hacking around the real problem instead of fixing it, because fixing the real problem would break a few programs until they were told the action should be permitted in those cases. Yes, it’s a feature that 99 percent of the time you don’t need. Hello, do we want security or a stage show? We are still getting the stage show from Microsoft. Security comes with some pain when there are design flaws to fix.

    Remember, this is all because the “insecure library loading” issue from 12 months ago was not fixed properly. The first recorded case of the DLL loading system in Windows being defective was in 1998, in Windows NT. We are talking more than ten years of hacking around the issue without fixing it properly, in exactly the same way.

    Attackers worked out very quickly that it had not been properly fixed. Library loading needs proper control systems.

    These are the security flaws that annoy the hell out of me. I keep seeing them come up over and over again with no proper fix ever being done.

  26. oiaohm says:

    oldman: it depends on the end user.

    Who is Linux targeting with the design of its security systems, and do those users care about security?

    Large users truly do care. They have staff dedicated to the pure goal of security, they like to be able to check that a patch for an issue was done right, and they want to be able to push out their own patch if it was not. The large also don’t want to have to wait: if they have found a critical flaw and know how to fix it, they want to fix it now, not wait for anyone else to push out a patch. Basically the Microsoft model of reporting a bug to them and waiting for them to patch it completely annoys the hell out of the large.

    Medium-sized users: some care about security, some don’t. They do have the resources to run their own security repos if required. Most of them would not have given a rat’s about a patch turning up four days late; they have had worse, like waiting years for issues in Windows that they reported to be fixed.

    The problem we have is that most of the smaller users really don’t care about security. They claim they do, but when you start asking questions you find the truth. You will find them running Windows 2000 and other, worse-updated stuff. As long as it works they will keep on using it. They pay for anti-virus software so that if something goes wrong they can blame that, instead of the illegal software they downloaded and the other bad actions that increased their risk of infection massively.

    So who do you design security and update systems for? The users who are most interested in them.

    Now let’s look at natural company growth and how it relates to running your own security repo. When do the resources to run your own security repo appear?

    Remember, running your own repo also becomes a cost saving in downloads and speed of deployments once a business crosses a particular size. Microsoft has its own form of repo in WSUS, except they don’t allow you to add your own extras to it. So running a private repo starts in small business, just in a restricted form. Other sections of ADS provide small business with the means to push out applications, but without the good version control of a repo.

    Building your own applications starts in medium business, so a medium business using a repo to push out applications would be normal if the repo system they are using allows it. A private repo to address security issues with your own built packages starts from medium up, since all the resources to do this are now there: developers and a repo. A heavy focus on security may or may not have started yet.

    At large scale you move from just building your own applications to, at times, building your own OS, with more focus on security. Why does a large organisation normally build its own OS? History is a good teller; follow SL’s history from the start: White Box Linux -> CentOS -> SL, used at CERN and Fermi in that order. White Box stuffed up, so CERN and Fermi moved their support; CentOS stuffed up, and they basically said stuff it, we will do it ourselves. Quality vs cost is the driving issue for the large, and this repeats over and over again: if a distribution’s quality drops, a large company will fork it, because forking is cheaper than paying Red Hat. Will the large release their own distribution back to the public for open usage? The answer is 90 percent no, because keeping it in-house gives security advantages. This is why Red Hat is just the tip of the iceberg of the Linux that is out there, a very large iceberg. It is also the reason Red Hat is not a good metric to measure against Microsoft’s income: Microsoft’s design does not have the large taking the OS, maintaining it themselves and not paying Microsoft.

    Also bad for IDC numbers: the large are more likely to order a pile of parts and build the machines in-house as well, like Google custom-ordering motherboards with UPSes. They were ordered as motherboards, not fully built servers. This is not strange either.

    Of course, in most cases the large cannot afford to build an OS 100 percent from nothing. Linux and BSD made it affordable for the large to build their own OSes. All the different forms of Unix came out of large organisations messing with BSD and attempting to make a profit by selling their own versions. So the large making their own OSes based on something is nothing new; it has been around longer than Microsoft has existed and may still be around when Microsoft is only talked about in history books.

    How the large act has not really changed since the birth of computing. Some sanity has grown: building your own distribution and trying to sell it is most of the time not worth it, and is more disruptive than helpful, particularly if you aim for incompatibility, which will only come back to hurt you. The large try to avoid creating fragmentation as they get more years under their belts. Google is only now getting old enough to learn that forking and not pushing stuff upstream will hurt you; the older large companies already know this. A lot of the older large is what keeps Linux going, not Red Hat and most of the distributions you think of.

    What is going on is many old large companies taking on closed-source and other paid-for software vendors, because they are not going to want to pay Microsoft, Red Hat or anyone else. Small and medium will mostly pick up the crumbs falling off. Good-quality crumbs are there to be had, but there is a stack of worthless ones.

    FOSS is large-compatible. For medium and small it is improving as more good-quality crumbs fall. Remember, the state of the Linux desktop is more directly controlled by the large than by anyone or anything else. While the large had zero interest in running Linux desktops, Linux desktop quality was never that great. It’s the large you have to watch to see what is planned; what the large does will cascade through.

    NT JERKFACE
    “If by upgrade you mean build a new server. SL doesn’t aim for 100% binary compatibility with RHEL.”
    In fact SL does aim for 100% binary compatibility with its upstream distribution, which is RHEL. Every package is even named the same. Basically you don’t know what you are talking about; it’s not build-a-new-server. About the only difference between RHEL and SL is that the SL binaries were built by CERN and Fermi. In fact the SL binaries are built from the RHEL source packages, so unless RHEL cannot build binary-compatible parts from its own source, SL is binary compatible. And nothing in the Red Hat contract forbids doing this: pay for one Red Hat subscription and use it to download the source code to build a distribution for everything else you need.

    In one way you might say SL is ripping Red Hat off. But they are large enough that if they didn’t use Red Hat they could use Debian or anything else as a base.

    Ivan, to be correct: on CentOS’s own mailing lists, large sections of its maintainers were pissed and recommended the move. It’s not “says who”; it’s what has happened. Fermi and CERN were using CentOS before its quality issues, and two huge parts of the CentOS community publicly left.

    Ivan
    “I’m sorry Mr. Ham, but no sane person is going to use a distribution that will disappear as soon as Fermi no longer needs it. ”
    That is going to be no time soon. Fermi and CERN each cross the one-million-server mark in Linux. This is why they maintain their own distribution: paying Red Hat would basically multiply Red Hat’s income by three. Who here likes the idea of writing out billion-dollar-plus cheques? Scarily enough, Microsoft uses less of its own tech than Fermi or CERN do of Linux. So by your logic we should not be running Windows, because Microsoft does not have enough invested in it and could change away from it too easily. Be very careful where you go with that logic.

    This is also not uncommon. In the really high-volume market the most commonly known example is Google. They don’t pay Red Hat or anyone else outside their company for Linux; they are simply big enough to maintain their own kit.

    That is the shock to most people here: you talk about small-fry problems.

    My biggest problem is that I normally have too much understanding of how all the different sections of the market think, so some of my responses seem wacky, because they are based on who would truly be interested in that.

  27. RealIT says:

    @Twitter “Mastered the art of remote management”? SSH is not remote management. How exactly do you manage 2000+ Linux machines effectively? Good luck with that, HOSTS file fool.

  28. Dr Loser says:

    OK, Robert. From your quoted post.

    (1) “WINS improperly handles certain specifically crafted data on the loopback address. It’s not clear to me how difficult this would be to exploit.”

    It’s not clear to this ex-security maven “how easy” this would be to exploit, either. Examples or theories, please.

    (2) I’m not happy with the inserta-usb stick/autoplay a DLL thing either. But really. It could affect “… the Japanese Input Method Editor?” Examples or theories please.

    (3) “All of the vulnerabilities come from improper parsing of specially-crafted Excel files.” OK, could be major. I’ll buy. Examples or theories please.

    (4) “One is another insecure library loading vulnerability and the other an error in the parsing of specially-crafted Word documents.” Sounds bloody awful. Examples or theories please.

    (5) “Six vulnerabilities in various Microsoft server products, principally SharePoint, could allow elevation of privilege…” Note the word, could. Examples or theories, please.

    And all of them have patches, and all of them are provided in a timely fashion by “Patch Tuesday.” What, you’d rather MS doesn’t act responsibly and update security regularly, just like any other operating system in existence?

    (With the possible exception of kernel.org.)

    Well, just a drive-by fruiting in this case. I mostly wanted to post to sympathise with you for having to deal with what I understand were a bunch of outlandishly obnoxious trolls from LHB. Nobody should have to put up with that.

    Keep blogging. You’re demented and wrong, but hugely entertaining and basically a good guy.

  29. oldman says:

    “Actually, Oldman, as in the case of Skype, I can see the developers doing what you seek. ”

    Actually, Mr. twitter, I don’t care what you think because as far as I can see your take is untrustworthy. I was hoping for an answer from Mr. oiaohm, because I am curious about his take on it; he at least seems to understand the issues.

  30. NT JERKFACE says:

    Due to both OS’s having a high relation ship to Redhat you can in place upgrade CentOS to Scientific Linux.

    If by upgrade you mean build a new server. SL doesn’t aim for 100% binary compatibility with RHEL.

    I’m still waiting for Pogson to post on how there isn’t enough of a community to provide security patches for the #1 server distro.

  31. Ivan says:

    “CentOS has been mostly abandoned because of 3 months. Scientific linux has mostly taken CentOS market share because of that error.”

    Says who? The random person on the web that tries to drown the discussion with huge blocks of indecipherable gibberish without using spell check?

    I’m sorry Mr. Ham, but no sane person is going to use a distribution that will disappear as soon as Fermi no longer needs it.

    “the natural response to secuirty error is under way from the Linux world against CentOS.”

    Well, Mr. Ham, the natural response to someone that can’t be bothered to learn how to spell a simple word like security is to dismiss everything they say off-hand.

  32. twitter says:

    Actually, Oldman, as in the case of Skype, I can see the developers doing what you seek. Skype made repositories for gnu/linux distributions that could be added to user’s source files and people did do this. Skype, being non free software, was always a little more stale than other software but that’s to be expected from developers that go it alone the non free way. For users, upgrades were transparently unified. When developers want users to have the very best, they put the least restrictions on them and often help. There will be more of this as Microsoft implodes and Android takes a bigger bite out of Apple.

    Gnu/linux package managers are much better than the chaos of Windows, and freedom also works for IT shops. They can make their own repositories and redistribute newer packages on their own, which is much better than they can do with Windows and other non-free software. I’ve worked Windows upgrades at a Fortune 100 bank. They needed an army of button pushers to go desktop to desktop to do things that could not be done from any server due to DRM and registry issues. That was years ago, but all of the free software distributions had long before mastered the art of remote workstation upgrades and management. Windows is truly a crawling horror that should be put down.

    I prefer to get my packages from my distro because I like the quality control and can wait for features. The only exceptions are software that is in some way connected to non free publishers, such as Microsoft documents and websites run by jerks. Up to date browsers are critical as long as Microsoft and Apple enjoy undeserved and harmful seats at ISO and the W3C. Fortunately, their power is waning and there are plenty of browsers that are more than good enough that are easy for desktop users to get. Microsoft Office formats have long been conquered and are becoming irrelevant. OOXML is a joke that no one uses. Things are pretty good for users that don’t feel like upgrading.

  33. oldman says:

    “In fact with Lbug inux you are not at the whim of distributions unless you choose to be. If something is really critical you are free to built in a private repo for you company to address the secuirty problem. ”

    Can you see an end user doing this, Mr Ohio Ham?

  34. oiaohm says:

    D-G, to be correct, a standard Ubuntu install does in fact install more applications by default.

    I did not cover that the update breakdown is also different between the two: Windows ships more combined updates.

    Four days unpatched is in fact nothing, mostly thanks to MS’s Patch Tuesday system. Yes, the second Tuesday of the month; a vulnerability that turns up after that might be delayed to the next Patch Tuesday to be fixed. This is a very insane rule. When the patch is ready it should be going out.

    D-G
    “So how do you know exactly for which of the thousands of packages your distribution offers security updates?”
    In fact some of this shows a lack of knowledge. Most people don’t notice that mature distributions have separate security and general repositories. So yes, you can sort updates by class by which repository they come from; I know exactly which ones are security updates and which are just feature updates. All you have to do is check the origin. There is also a security mailing list that sends out reports on all packages.

    Debian, Ubuntu, Scientific Linux and Red Hat (and many others) also mark whether a package is inside the distribution’s support system for updates and security updates. You can also have the package manager produce a list of packages that are not supported by the distribution.
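    For illustration, sorting pending updates by repository of origin might look like this (the package names and suite labels below are invented; on a real Debian/Ubuntu box the origin data comes from the package manager’s archive metadata):

```python
# Split pending updates into security fixes vs everything else, by the
# archive/suite each one would be fetched from.

# (package, archive/suite it comes from) -- invented sample data
pending = [
    ("openssl",  "stable-security"),
    ("vlc",      "stable-updates"),
    ("libxml2",  "stable-security"),
    ("somegame", "third-party"),     # not from the distribution at all
]

def classify(updates, security_suites=("stable-security",)):
    """Return (security fixes, everything else) by repository of origin."""
    security = [pkg for pkg, suite in updates if suite in security_suites]
    other = [pkg for pkg, suite in updates if suite not in security_suites]
    return security, other

sec, rest = classify(pending)
print("security updates:", sec)   # ['openssl', 'libxml2']
print("other updates:   ", rest)  # ['vlc', 'somegame']
```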

    Of course, if you don’t know this, that shows a lack of interest in knowing. The information is there for anyone interested in security.

    D-G
    “You’re also forgetting that in Linux you ARE at the whim of the distributions. Ultimately you’ll only receive those security updates the distributions deem as important enough.” Nice myth.

    In fact with Linux you are not at the whim of distributions unless you choose to be. If something is really critical you are free to built in a private repo for you company to address the secuirty problem. Yes, Linux distributions support multiple repos with a dominance order for a reason: the most dominant repo’s package will be installed and the other repos disregarded.
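    On a Debian-family system, for example, that dominance order is expressed with apt pin priorities. A sketch (the internal repository name is invented; a priority above 1000 means that repo’s packages are installed even when that would be a downgrade):

```
# /etc/apt/preferences.d/local-security  -- illustrative only
Package: *
Pin: origin "repo.example.internal"
Pin-Priority: 1001
```

    With a fragment like this in place, packages a company rebuilds and publishes in its internal repo override the distribution’s copies.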

    With Windows you are truly at the whim of Microsoft having fixed it properly. You don’t have the means to inspect the patch to make sure it was done right, and you don’t have the option of doing your own replacement if it is defective.

    Sorry, the Debian OpenSSL debacle caused systems to be altered to prevent it ever happening again. The idea that it was a screw-up nobody gave a damn about is wrong. This is normal: any distribution that screws up an update had better be ready to prove it has put new systems in place to prevent a repeat. Historically, any distribution that fails to demonstrate systems to prevent repeated security faults ends up with zero users.

    Ivan, Microsoft’s record for getting a security patch out to fix a security problem correctly is 13 and a half years: a ping-of-death type bug. For nine years the fix was to limit ping receives to machines in the same workgroup as you, but as long as you could trick the Windows computers on the network into your workgroup you could take full control of the lot. Issue reported 1996, properly fixed 2010.

    Ivan CentOS has been mostly abandoned because of 3 months. Scientific linux has mostly taken CentOS market share because of that error. Due to both OS’s having a high relation ship to Redhat you can in place upgrade CentOS to Scientific Linux.

    Ivan the natural response to secuirty error is under way from the Linux world against CentOS.

  35. Ivan says:

    “Ivan really Apples and Oranges.”

    Did you mean to say that I was comparing apples and oranges? Because the four days that it took Microsoft to push the patches out is far shorter than the three months it took for CentOS to provide security patches for 5.5.

    “So Yes Linux should have a higher secuirty report rate. If they are running close to level Windows is defective.”

    Would you please respond to what I said and not what you think I said? I didn’t say anything about SECURITY reports. I never mentioned SECURITY reports. I did, however, mention the length of time CentOS 5.5 failed to receive SECURITY fixes earlier this year, which any sane individual can clearly see surpasses the four days between notification and fix that the Pog Man blogged about.

  36. D-G says:

    First, forgive me if I misinterpret you. But your mangled, incoherent English is REALLY hard to read and VERY ambiguous.

    “Ivan really Apples and Oranges.”

    Wow, I can understand that. Though I find it strange that you would like to say that Ivan is apples and oranges. I’m sure you’d win the Nobel prize for that discovery.

    “Windows secuirty updates are not for 90 percent of applications installed. Like most Linux updates are for.”

    No, I can’t get to the meaning of these fine sentences. Sorry. Let me guess: what you really wanted to say is that Windows updates only update Windows itself, whereas in Linux the respective package manager updates everything. Is that about right?

    Well, Linux and Windows ARE different after all. But I’d rather take the Windows approach over Linux’s. To me it offers the advantage of being able to easily update applications to the newest version outside of any release cycle imposed on me by a distribution. Also, since you get the software from the actual developers you won’t run into security debacles of OpenSSL caliber, as written above.

    “So Yes Linux should have a higher secuirty report rate. If they are running close to level Windows is defective.”

    Wrong answer. So your reasoning, expressed in understandable English, is: when Windows updates only address Windows itself, while Linux updates address the whole distribution, it follows that Windows is more insecure than Linux if the number of security updates is about the same for Linux and Windows. As I said: Wrong.

    Ubuntu 10.04.3 LTS: about 89 updates right after installation.
    Windows 7 w/SP1: 52 updates right after installation.

    The only big thing that a standard Ubuntu 10.04 LTS installation has over Windows is OpenOffice. And that had its last security update in February.

    You’re also forgetting that in Linux you ARE at the whim of the distributions. Ultimately you’ll only receive those security updates the distributions deem as important enough. As a user you also rarely install all packages your distribution has to offer. So how do you know exactly for which of the thousands of packages your distribution offers security updates? Do YOU know? No, you don’t. You’re just relying on your distribution to deliver.

    There’s also the factor of time. Taking VLC in Ubuntu as an example: in 10.04 LTS you had to wait at least four days for critical updates to get pushed out. Four days of unpatched vulnerability! Isn’t that something?

    You people are always spouting drivel about how long it takes Windows to push out patches. Well, the situation isn’t all that different with Linux. It takes time to get these patches ready, and even more time for them to run through the chain of command until they can be released into the wild. You know what’s the difference? If Microsoft screws up with a patch, everybody will be at their throats. If some distribution (let’s leave aside Debian’s incompetence as exemplified by their OpenSSL debacle) screws up, nobody gives a damn.

  37. D-G says:

    “Stay tuned to see whether the lights dim.”

    Nope, the lights are still on here, Pog.

    “How much sleep will be had tonight? How many millions of machines will go unpatched for the next few weeks? What horrors are to follow?”

    Oh, you mean like the horror when someone accidentally discovered AFTER TWO YEARS that Debian’s OpenSSL maintainer at his own discretion had removed crucial lines of code from OpenSSL, thereby rendering generated SSL and SSH keys useless?

    Nope, that has never happened to me with Windows.

    But that’s a general problem with your prized Linux ecosystem. The people in the distributions who maintain and package software are most likely not the developers of said software. Put bluntly: they don’t have a clue and are not able to make informed decisions on their own. Every random guy can package software after half a day of learning. On the other hand, on Windows software is “packaged” by those who develop it. Win-win scenario once again.

  38. oiaohm says:

    Ivan really Apples and Oranges.

    Windows secuirty updates are not for 90 percent of applications installed. Like most Linux updates are for.

    So Yes Linux should have a higher secuirty report rate. If they are running close to level Windows is defective.

  39. HAHAHAHAHA! GASP! ROFL!

    Are you trying to kill me by blowing out my heart? My doctor says I have the heart of a young man.

  40. Ivan says:

    “If there ever was an instance that pushed people over the threshold to migration to GNU/Linux, this could be it.”

    Good luck with that.

    “Hey! Trolls! Are any of you going to claim that other OS is secure after this?”

    Well, they haven’t gone three months without a single security update like CentOS 5.5 did earlier this year, so I’d say Microsoft Windows all versions are still more secure than the most popular version of Linux (if you believe Johnny Hughes).
