With Love From M$

M$ expresses its love for users by announcing critical (remote code execution…) vulnerabilities in every version of their OS from XP to “7” and versions for servers. Happy Valentine’s Day. Hope you don’t get hacked before you manage to update…

That’s a bit like a boyfriend telling a lady she should get checked for STDs because he’s been spreading them. I recommend using Debian GNU/Linux to avoid such complexity in your life. If that other OS still runs for you, go to Goodbye-microsoft.com and obtain Free Software.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I worked and taught in the subject areas of maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

51 Responses to With Love From M$

  1. Yonah says:

    Oiaohm: “Yes, X11 usage appears high if the chosen compositor is chatty.”

    Ahem… I wasn’t using any desktop compositor.

    Oiaohm: “You also notice the same kind of CPU spikes under Windows due to applications being told they are being moved.”

    No, I have not witnessed what you describe. I’ve witnessed a little CPU usage on the same hardware when running Windows. Very little. Indeed, oldman, his excuse-making doesn’t cut it.

  2. oldman says:

    Do you really think that all your excuse-making cuts it, Mr. Oiaohm?

  3. oiaohm says:

    Yonah, the high X11 CPU usage under Ubuntu tracks back to the compositor used. This is also why we need DRI2 drivers.

    Current X11 tree for window management
    application
    |
    X11 server-compositor
    |
    video card.

    Yes, it’s horrid, but this is the only way it can be done with DRI1 drivers.

    Wayland
    Application
    |
    Wayland Compositor
    |
    Video card.

    This is with DRI2 drivers.

    Even with X11 running inside Wayland it is a straight line:
    Application
    |
    X11 Wayland
    |
    Wayland Compositor
    |
    Video card.

    Yes, X11 usage appears high if the chosen compositor is chatty. So people blame X11 for the load when it is the compositor bugging the heck out of the X11 server.

    Yes, this 4-layer stack is faster than the X11 3-layer stack appears to be, because it does not have the stupid sideways path to the compositor.

    Also, Wayland applications render straight into kernel-based memory management for video output, so Wayland sends fewer redraw messages.

    Wayland is also majorly different. Applications in Wayland don’t know where they are on the screen; they are not told. With current-day X11, when you move a window the server tells the application where it was moved to. It is also not possible for an application under Wayland to capture the screen unless it has been specifically authorised to do so; by default, applications can only capture their own windows.

    You also notice the same kind of CPU spikes under Windows due to applications being told they are being moved. Yes, the talk-back telling applications where they are now on screen.

    The change to DRI2 drivers will come.

  4. oiaohm says:

    Yonah, you still live in the false belief that because Windows runs, everything is OK.

    Please do yourself a favour and run the memory checks. If nothing is wrong, it costs nothing but time. If something is wrong, you will regret it sooner or later.

    “Why it failed is because Linux drivers are generally of the same quality as the deposits I make in the bathroom most mornings.”

    This is the problem: the Nvidia closed-source driver does nothing more than basic hardware checks. It does not check whether the RAM is healthy or whether control interfaces are in sane locations. Instead it takes a leap of faith.

    The Linux Open Source Nvidia driver is not so trusting.

    “Everything functions perfectly, video card memory and all. Passes every test I’ve ever thrown at it.”

    Have you run these memory tests: http://folding.stanford.edu/English/DownloadUtils ?

    You have still not told me the exact brand, make and model of the card.
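    If you do not know the brand yourself, the PCI IDs will narrow it down. A minimal sketch (assumes lspci from pciutils is installed; the grep pattern is just an example):

```shell
#!/bin/sh
# Sketch: show the vendor/device IDs and subsystem info of any display adapter.
# A blank or unknown subsystem (subvendor) ID is the "no markings" case above.
if command -v lspci >/dev/null 2>&1; then
    card_info=$(lspci -vnn | grep -A 8 -i 'vga' || echo "no display adapter found")
else
    card_info="lspci not available"
fi
echo "$card_info"
```

    The subsystem line, when present, names the actual board maker rather than just the GPU vendor.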

    “Please, enlighten me. What kind of magical features would prevent my card from working under Linux… and yet, in Windows, even with the default drivers, work just fine? The graphics card came in a box with a dragon on it. Does that help?”

    Is that company logo a red dragon about 1.5 inches high? If so, it might be a good idea to replace the card. Some of those have the fan controller mapped to the wrong place, in general GPU memory. Halfway through a game your fan stops dead (lovely, right?), resulting in an overheat. This will be a random event.

    Even the China-made stuff traces back to a factory. No markings normally means something like Foxconn rejects, and one of the Foxconn-reject brands has a red dragon about 1.5 inches high on the box as its only marking. Anything where the maker is hiding who they are is not trustable hardware. So if it doesn’t work with Linux, you cannot really be sure that Linux is doing the wrong thing.

    In a lot of cases when people scream about this problem, you find they are not using quality hardware but time-bomb hardware, Yonah.

  5. oldman says:

    “I don’t believe your eyes either. I just futzed around with several windows on my machines and got 2% CPU utilization with one and 5% with another. My CPU is not hot at all.”

    I think before you make that statement you need to describe your environment, hardware and software. Then we can start to identify the cause of this anomaly.

  6. Yonah wrote, “My personal favorite with the last distro I used, Ubuntu, was maximizing the system monitor and watching the CPU crank up to 40%… JUST drawing the CPU usage graph. I couldn’t believe my own eyes.”

    I don’t believe your eyes either. I just futzed around with several windows on my machines and got 2% CPU utilization with one and 5% with another. My CPU is not hot at all.

  7. Yonah says:

    As slow as Linux usually is on the desktop, there is no way I’d use open source graphics drivers. Don’t even get me started on the obscene amount of CPU time X uses when simply moving a window around. My personal favorite with the last distro I used, Ubuntu, was maximizing the system monitor and watching the CPU crank up to 40%… JUST drawing the CPU usage graph. I couldn’t believe my own eyes. But, we all know the Linux desktop, no matter which environment you prefer (KDE, Gnome, XFCE, etc.), is superior in all ways to anything Windows has to offer. Maybe I was drinking that night. Yeah, that must have been it.

    “Altering gpu memory management in the Windows driver also forces a reboot.”

    *Yawn* Is this how you spend your Friday nights?

    “This is a sign of how desperate the problem is that the Linux world is going this far.”

    To say nothing of the personal hygiene problems!

    “When you know Linux can do so many times better and it’s just not being given the drivers, it’s downright annoying.”

    As annoying as Linux advocates? I digress. Yes, of course, when YOU know! It’s only the drivers. Couldn’t be poor design, it’s Open Source! The world’s smartest people (I’m looking directly at you now) have their hands in it. If Jesus himself needed to use a computer, his natural choice would be Linux. Have faith.

    “You have not said the exact make and model of the card. From that I could possibly pick out whether it has some oddball feature that is not supported yet.”

    Nor would I need to. I bought it in China, bro. Even the subvendor ID is blank. However, the card itself is solid. Unsupported feature? Please, enlighten me. What kind of magical features would prevent my card from working under Linux… and yet, in Windows, even with the default drivers, work just fine? The graphics card came in a box with a dragon on it. Does that help?

    “From what you have said, you ran your test drive on untested hardware.”

    Incorrect. This is not what I said. This kinda proves something I suspected: that your dyslexia (which I’m personally still not convinced you actually have, as I’ve heard people lie about stranger things) not only makes it hard for us normal people to understand you, but also causes you trouble when trying to read something written by someone else.

    As I said, though I will word it differently, the machine I tested Linux Mint on is the very same machine I use everyday to run Windows 7. Everything functions perfectly, video card memory and all. Passes every test I’ve ever thrown at it.

    Try to digest this:

    1) 1 Computer, running Windows 7, no problems.
    2) Insert Linux Mint DVD into drive.
    3) Reboot computer.
    4) Boot from CD.
    5) Graphics FAIL!!1

    “your card is a strange brand or model with an extra feature and that is why it failed”

    Why it failed is because Linux drivers are generally of the same quality as the deposits I make in the bathroom most mornings.

    “the memory on it is either stuffed or heading that way.”

    Or, Linux is stuffed and perhaps you are as well. That seems much more likely. A modern Linux distro can’t even give me a usable display. Very, very sad.

  8. oiaohm says:

    “Then why doesn’t it do it? Please list step by step instructions on how to update the Nvidia drivers on a Linux system without a reboot or loss of applications. Preferably in reverse chronological order. I think you’re up to it.”

    Yonah, if you are using the open-source Nvidia drivers and they work with your hardware, just run a normal update.

    As you start new programs they use the new userspace, gaining all the new GPU optimisations. No reboot or special actions required; it just happens the way it should. This is also true for the open-source ATI drivers.

    With DRI2 there is none of this “kernel-mode part and userspace part have to match”. With DRI1 the two halves have to match.

    The DRI2 kernel module handles power management and memory management, so unless you are installing an update for those there is no requirement to disturb the userspace.

    Altering GPU memory management in the Windows driver also forces a reboot. With DRI2, the Linux kernel space only knows enough to turn the card on for 2D operations; anything 3D is turned on by userspace code, the same with video acceleration and so on. With DRI2 there is bugger all in the kernel, really.

    Because the open-source drivers are DRI2, yes, they can update on the fly in 99 percent of all cases. The 1 percent of cases where they cannot is also unavoidable under Windows.
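    As a sketch of what “just run a normal update” looks like on a Debian-family system (the package name libgl1-mesa-dri is an assumption; other distros package the Mesa userspace differently):

```shell
#!/bin/sh
# Sketch: upgrade only the Mesa/DRI userspace. Programs launched after the
# upgrade pick up the new userspace; the DRM kernel module is left alone,
# so no reboot is needed. Guarded as a dry run so it is safe to execute.
if [ "${APPLY:-no}" = "yes" ]; then
    sudo apt-get update
    sudo apt-get install --only-upgrade libgl1-mesa-dri
else
    msg="dry run: set APPLY=yes to upgrade the DRI userspace"
    echo "$msg"
fi
```

    Already-running programs keep the old userspace until they are restarted, which is why mixed versions can coexist.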

    Yonah:
    “Yeah, that would be pretty stupid. By the way, the current year is 2012.”
    When was DRI2 released? 2007. We are talking five years later and we are still stuck with DRI1 drivers from the closed-source makers. Yes, the Linux world is not particularly happy about it.

    Reverse engineering is not a fun process. This is a sign of how desperate the problem is that the Linux world is going this far.

    DRI2 supports headless: basically, tell the application there is a screen. The screen does not exist; it is just a buffer in the video card.

    I agree it’s stupid. When you know Linux can do so many times better and it’s just not being given the drivers, it’s downright annoying.

    “And this is possible running from a DVD and not writing any data to the hard drive how? Ramdrive? That’s a lot of damn work. Your suggestion also runs counter to the superiority of Linux (ask Twitter), which dictates that such difficulties should not have been encountered in the first place.”

    The problem of closed-source makers not providing DRI2 drivers should not exist in the first place either, so the Linux world is having to develop its own.

    You do find some Linux live CDs with the Nvidia closed-source driver embedded. It is also possible to remaster the Mint disc to default to the closed-source driver. I was thinking you had properly installed Mint.

    “No, this is incorrect. Also, I’m using the card right now. It pulls about an hour of Team Fortress 2 every few nights.”

    That a game works does not mean the card does not have a hidden memory defect. Let’s just call the open-source Nvidia drivers picky.

    From what you have said, you ran your test drive on untested hardware. There are formal memtest tools for GPUs; these are not games. Your card should have been tested with one of them.

    You have not said the exact make and model of the card. From that I could possibly pick out whether it has some oddball feature that is not supported yet. Most models of the card you stated are supported, but some brands added extra features outside the Nvidia spec, leading to memory-check failures.

    What GPU memory test tool did you use?
    http://folding.stanford.edu/English/DownloadUtils
    MemtestG80 and MemtestCL: run them and see if your system starts screaming “defect”. It might.

    There are two possibilities: one, your card is a strange brand or model with an extra feature and that is why it failed; or two, the memory on it is either stuffed or heading that way.

    Until you run the right tests you will not know.
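    For example, a guarded run of MemtestG80 might look like this (the 128 MiB and 50-iteration arguments are illustrative assumptions; check the tool’s own usage text):

```shell
#!/bin/sh
# Sketch: run a GPU RAM test if the tool is on PATH, otherwise say so.
# The argument order (MiB to test, iterations) is an assumption; consult
# MemtestG80's README for the exact invocation on your system.
if command -v memtestG80 >/dev/null 2>&1; then
    memtestG80 128 50
    status="tested"
else
    status="memtestG80 not installed"
fi
echo "$status"
```

    Any reported errors mean the card’s RAM cannot be trusted, whatever a game appears to show.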

  9. Yonah says:

    Oiaohm: “So this is not a case of Linux being unable to do it.”

    Then why doesn’t it do it? Please list step by step instructions on how to update the Nvidia drivers on a Linux system without a reboot or loss of applications. Preferably in reverse chronological order. I think you’re up to it.

    “It’s like running Vista with XP drivers”

    Yeah, that would be pretty stupid. By the way, the current year is 2012.

    “(speaking of open source video card drivers) The GPU locks up, it restarts, you don’t notice.”

    Bu ke neng (“impossible”). I miss my Amiga though, with its ability to change resolutions in the manner you describe.

    “Rerun the disc, force the VESA video driver, and install the closed-source drivers when complete.”

    And this is possible running from a DVD and not writing any data to the hard drive how? Ramdrive? That’s a lot of damn work. Your suggestion also runs counter to the superiority of Linux (ask Twitter), which dictates that such difficulties should not have been encountered in the first place.

    “You do run into the same issues with Windows 7 at times when the default drivers are incompatible.”

    I don’t. I think the last time I ran into a situation like the one you describe was in the Windows 98 era. But how often do you? Once a year, once a month, once a week? You seem like a daily guy.

    “Only one thing generates that, and that is the memory test for the video card.”

    No, this is incorrect. Also, I’m using the card right now. It pulls about an hour of Team Fortress 2 every few nights. This same hardware runs Windows 7 reliably. As bad as I know Linux is, you think I would give it my bi-annual test drive on untested hardware? Sheesh!

  10. oiaohm says:

    Yonah, the fault-tolerance bit exists in Linux in DRI2 drivers with kernel video-card memory management, which the Nvidia and ATI closed-source drivers don’t support.

    To recover the data in a dead card in a common and standard way, you need the kernel video-card memory management.

    So this is not a case of Linux being unable to do it. With closed-source video card drivers, Linux at the moment is like running Vista with XP drivers; you don’t get the fault tolerance in that mode either.

    Yes, with the open-source video card drivers you do have the fault tolerance. In fact it is automatic: the GPU locks up, it restarts, you don’t notice.

    With open-source DRI2 you can run mixed versions of the userspace, so on-the-fly updates do happen.

    The issue with DRI1, the old model, is that video memory management is internal to the video card driver, so there is no way to move from one version to the next.

    Yonah, Nvidia still manages to bite the big one doing a transparent driver update under Windows 7. Basically, I tried it and a little while later got a red screen of death. It is still safer to reboot for an Nvidia update under Windows 7, unless you like the possibility of a red screen of death.

    Nvidia support on all platforms for transparent update of drivers is basically crap.

    Yonah:
    “The result? I end up with a black and white, diagonally checkered pattern on the screen…. and that’s it. Lovely! That’s exactly what I’ve come to expect from Linux. I guess my old Nvidia GT 240 is just too fancy.”
    Rerun the disc, force the VESA video driver, and install the closed-source drivers when complete.
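    To be concrete, on the Mint live disc “force VESA” means pressing Tab at the boot menu and adding boot parameters along these lines (exact option names vary between releases, so treat these as examples):

```
nomodeset xforcevesa
```

    nomodeset keeps the kernel from starting the native display driver, so the live session falls back to a basic framebuffer/VESA mode.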

    You do run into the same issues with Windows 7 at times when the default drivers are incompatible.

    It would be interesting to know which brand of Nvidia GT 240, since most of those should work with the nouveau drivers.

    A “black and white, diagonally checkered pattern on the screen” that stays on screen is a bigger worry than you think. The only reason you will see that is that the video card did not pass its memory test. Either that card is a different brand with extra features the open-source video card drivers don’t know about, or it is a dead card walking, with dying RAM.

    So my first instructions come with fingers crossed that the card is still healthy. This is the warning: a failure to load the driver at all will not draw a checker pattern, and a GPU rendering wrongly will not either. Only one thing generates that, and that is the memory test for the video card. Yes, it was designed that way: with dead memory you cannot be sure you can render text to the screen either, so it just displays the testing state.

  11. Yonah says:

    Robert: “Neither Linux nor M$ does those things. That’s Nvidia’s work.”

    Not really. See http://en.wikipedia.org/wiki/Windows_Display_Driver_Model#Enhanced_fault-tolerance to learn how this is possible.

  12. oldman wrote, “the personal cost of the forklift upgrade in software that is triggered by changing OS’s.”

    I have never needed a forklift to do that and the personal cost was slight. The cost of migrating to the next version of M$’s OS is often much greater than going to GNU/Linux.

  13. oldman says:

    “GNU/Linux supports more hardware than “7″ because all that legacy stuff still works. GNU/Linux supports more hardware out of the box than “7″ because the kernel knows more without having to fish on the web. That’s wonderfully useful when it is the old NIC that has no driver in “7″.”

    Linux’s ability to allow one to use dumpster dived hardware is indeed legendary but ultimately irrelevant in a world where one needs to run modern software with modern requirements.

    As far as the old NIC without the Windows 7 driver is concerned, there are any number of cheap replacement cards that cost far less than the personal cost of the forklift upgrade in software that is triggered by changing OSs.

  14. Yonah wrote, “Can Linux do that yet?”

    Neither Linux nor M$ does those things. That’s Nvidia’s work. Ask them why they don’t give the same love to GNU/Linux as they do to that other OS.

    I have frequently had difficulty with Nvidia drivers in GNU/Linux, but I have also never encountered any video hardware I could not find a way to drive in many years of supporting GNU/Linux on random hardware in many schools. That includes hardware that was originally supported in Lose ’9x or 2000 that would not work in XP. A few years ago I was in a place that had three generations of PCs. The older generation had diverse video cards, and a single image of XP would not work well on them. I used GNU/Linux in a lab that could not keep more than 14 of those old machines running and did it with a single image, loaded via PXE/NFS. GNU/Linux ran 24 of those old machines in that lab like Swiss watches.
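    For anyone curious, the PXE/NFS arrangement needs nothing exotic on the server. A sketch with dnsmasq (all paths and addresses here are illustrative, not the ones from that lab):

```
# /etc/dnsmasq.conf -- TFTP service for PXE-booting the lab
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0

# /srv/tftp/pxelinux.cfg/default -- one image for every machine
# LABEL linux
#   KERNEL vmlinuz
#   APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp ro
```

    Every client boots the same kernel and mounts the same NFS root, which is why one image serves the whole lab.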

    What’s good for the goose should be good for the gander. GNU/Linux supports more hardware than “7” because all that legacy stuff still works. GNU/Linux supports more hardware out of the box than “7” because the kernel knows more without having to fish on the web. That’s wonderfully useful when it is the old NIC that has no driver in “7”.

  15. Yonah says:

    As a former Amiga user who bit the bullet in 1997 and switched to the PC, I never really appreciated Windows until I tried Linux. Every few years I like to try the latest “hot” Linux distro people (on tech websites) are talking about. So just yesterday I burned a copy of Linux Mint 12, the latest version, rebooted my computer and gave it a spin.

    The result? I end up with a black and white, diagonally checkered pattern on the screen…. and that’s it. Lovely! That’s exactly what I’ve come to expect from Linux. I guess my old Nvidia GT 240 is just too fancy.

    My Windows 7 machine? Umm… last time I upgraded my video card drivers, NO REBOOT WAS REQUIRED! Can Linux do that yet? That is, without using exotic scripting commands or extra tools? Without having to terminate X and closing any programs? Didn’t think so.

  16. oiaohm says:

    “500 identical, locked down, pre-security hardened, pre-tested installs. I fail to see a problem here.”

    You have 500 identical systems from imaging and other maintenance. That also means you have 500 identical sets of security flaws.

    One breached machine can very quickly turn into 500 breached machines.

    With Linux thick clients we know this is the case: hit the power breakers on the buildings and we have shut our threat down to the server room alone. We do not have to hunt it down; it is in the only place it can exist. We have it contained. Power-breaker containment is one of the most effective kinds, because software cannot interfere with a human hitting a power breaker.

    The same technique applies to thin clients. They are really fast to limit to the server room, and once limited to the server room you have options.

    Can we go back to a clean backup? Yes. Quickly? Yes. Once we have the possibly infected server isolated, we can bring the network back up.

    In your case you have 500 possibly affected machines, with 500 independent hard drives that the infection can return from. If you are not careful you can end up chasing your tail.

    Please remember that botnet infections these days include update systems of their own, much like Windows Update, so they can be updating their software against the new countermeasures.

    So yes: that anti-virus update allowed you to detect the infection, but you miss one machine, it updates to a version your anti-virus cannot detect, and you are back in the clean-up cycle again as it reinfects all 500 machines.

    This is what is called the infection treadmill. Containment must be effective, and this is where Windows desktop PCs fall down and cannot get up.

    Microsoft’s model of standalone machines does not work once you have huge volume.

    Businesses don’t start using things like Terminal Services and other forms of central hosting of Windows without very good reason. It is the most effective way of allowing containment to exist.

  17. oiaohm says:

    Ted
    “500 identical, locked down, pre-security hardened, pre-tested installs. I fail to see a problem here.”

    Because if you had watched the video, you would know there is no such thing as a hardened install that cannot be taken out by some means you have not thought of yet.

    No matter how much testing you do, the system will fail. People take out machines with full ROM installs of executable code.

    You have clearly never done it in the real world, or you would have woken up to the fact that this doesn’t work.

    “Which is all made remarkably easy with Active Directory, Group Policy and MSIs. Still not seeing a problem here.”

    This requires you to presume the machine has not been converted into a botnet slave and has not had its Group Policy function modified, so that the MSIs you are installing as anti-virus updates are not rendered useless before they get installed.

    You are depending on functionality you cannot depend on, because you have not audited it. Yes, some bots target the MSI installer process to prevent anti-virus software from being updated under them, or other malware-scanning software from being installed.

    Your Group Policy stuff only works while your network is clean. Once infected, it is completely useless.

    http://www.metasploit.com/modules/post/windows/escalate/bypassuac
    The UAC issue is getting quite serious now; in fact the method is part of a script-kiddie tool and it is still not patched. It was discovered in 2009, by the way, so we are looking at a defect more than two years old.

    “Even if you were right, Windows File Protection would restore known clean versions of system files.”

    This is where you are badly wrong; you are reading MS documentation. You can inject files into Windows File Protection, tell it they are the clean versions, and have Windows File Protection restore them. There is a flaw: you can add your own signing key to Windows, and it is the signing-key check that MS uses to decide that the files in Windows File Protection are valid. Signed by an approved key equals clean, even if it is a botnet-installed approved key.

    The anti-virus cannot tell a bot’s key apart from a company-added key.

    Windows File Protection’s definition of clean is flawed: its definition of clean is “signed by an approved key”. And where does Windows File Protection search for replacements for files that are damaged or missing? One of the locations is inside the System Restore points.

    Yes, it rolls back to the infected version out of the System Restore point, which has not been scanned or cleaned by many anti-viruses. Also, the reinfecting part that is restored might be the zero-day your anti-virus does not know about yet. A lot of bots have got smart about overwriting their attack vector.

    The best response is basically to reimage and hope the infection is not LAN-spreading.

    Sorry, but when this mess happens Linux thick clients are millions of times simpler. No looking over my shoulder for the attacker attempting to knife my attempt to remove them.

    With Linux thick clients you have one machine to set clean; turn the rest off and back on. Simple, straightforward and painless. No evil knives waiting for you to turn your back.

  18. Ted says:

    “You don’t understand what imaging has done. OS imaged to 500 machines is still 500 operationally separate installs.”

    500 identical, locked down, pre-security hardened, pre-tested installs. I fail to see a problem here.

    “So 500 individual installs you have to police. That you have to roll out software to and keep updated.”

    Which is all made remarkably easy with Active Directory, Group Policy and MSIs. Still not seeing a problem here.

    “UAC has more holes around it than anything.”

    I think the expression is “[Citation needed]”.

    “Until you wake up that you can not scan the restore points. So you insert one of those run the disk. Reboot the machine and windows detects damage so rolls automatically back to a restore point so bringing the virus straight back.”

    Absolute, complete, utter, unmitigated, unalloyed GARBAGE. Windows will do nothing of the sort. Even if you were right, Windows File Protection would restore known clean versions of system files. System Restore is _user-initiated_ and would not kick in on reboot automatically. “Last Known Good Configuration” would have to be user-initiated too.

    Do you have even the vaguest idea of how Windows File Protection and System Restore actually work?

  19. oiaohm says:

    Ted, you did not watch the video. It shows.

  20. oiaohm says:

    Ted
    “Do you really think disk imaging or network deployment is not available for Windows? If you’re an admin with 500 machines running 500 separate installs; you’re insane or incompetent.”

    This shows where you are being thick and incompetent. You don’t understand what imaging has done: an OS imaged to 500 machines is still 500 operationally separate installs. So that is 500 individual installs you have to police, roll software out to, and keep updated.

    Disc imaging does not magically make the machines one. Repeated disc imaging costs many hours a year trying to stay ahead of infection. The fact that you cannot update images that are not live creates security windows for attackers. If you could update images that were not live, you would most likely be better off with the thick-client solution, where it does not matter if the machines get turned off and the server room is all that has to stay on.

    With the Linux thick-client solution, the 500 instances all come from one instance on the network in real time. Policing can be the power switch: cut power, turn back on, and all 500 are now 100 percent in sync with the central instance. No cloning-out process is required, since they are not using the local hard drive. So yes, really fast to disinfect if infected: scan and fix the one instance, then reboot the lot and you are done.

    Ted, seriously, why disc-image when you can make the disc the same on all machines because it is coming from the network? It is easy to take care of when there is only one image sourced from one location; auditing means auditing that one location.

    Really, instead of hacking around the problem with imaging software, you should be demanding thick-client support.

    Ted, you believe a myth about imaging software; it is about time you stopped believing it. Thick clients with network-provided hard drives have a particular set of advantages, one being far less work to keep them secure. Users turning machines off is the preferred behaviour with thick clients, so it cuts down on the power bill as well.

    Ted
    “But how will this “anti-anti-virus” get onto your system in the first place if it is prevented from executing by UAC or by the current anti-virus? Your premise relies on the malware already being executed. You’re metaphorically opening the box with the crowbar that’s inside it.”

    There is always malware your anti-virus will not detect, and it only takes one piece with an anti-anti-virus payload to ruin your day. Getting past your current anti-virus is just a matter of time; anti-virus software is only as good as the latest signatures it has.

    UAC has more holes around it than anything, including the fact that once you are running you can puppet UAC into approving everything without informing the user. This bug has not been fixed.

    Ted:
    “Insert anti-virus optical media (McAfee Stinger or KAV Rescue CD for examples), cold-boot and boot from CD. Run scan, disinfect/delete as approriate. Remove CD, reboot. Insanely hard??”

    Until you wake up to the fact that you cannot scan the restore points. You insert one of those discs and run the scan, then reboot the machine; Windows detects damage, rolls automatically back to a restore point, and brings the virus straight back.

    Yes, insanely hard, with many landmines, so machines they call clean are not.

    When you have to do 500+, you are going to make an error unless you have a means of doing it automatically.

  21. Ted says:

    Ignorant of what? That oiaohm doesn’t know what the hell he’s talking about? I’m extremely aware of that, already.

    “Read about disc-imaging if you have not seen it. It’s just one of many ways to install software on PCs.”

    Really?? You astound me! I’d never have guessed about the existence of such things as WDS, DriveBackup, DriveImage, TrueImage, Ghost or CloneZilla.

    If you haven’t already spotted the subtle sarcasm above;

    I WAS POINTING THAT OUT TO OIAOHM.

    Thank you for selectively (mis-)quoting me, Mr Pogson.

    I suppose it was easy to do, with it only being the immediately preceding sentence, but the bit you missed was;

    “Do you really think disk imaging or network deployment is not available for Windows?”

    “M$’s EULA makes many methods of questionable legality. Is a disc-image a backup or not? Can you have a backup of the disc-image?”

    With a volume license, you can use images up to the amount stated in your license agreement.
    Single images of OEM installs would not be a suitable option in any reasonably sized deployment and would be frankly insane in larger ones.

    Windows Backup produces disk images, so it’s reasonable to infer that MS regards disk images produced by Acronis, Ghost, et al. to be backups.

    “Read the EULA and tell us where that is described.”

    With all respect, Mr Pogson, I’m not doing your legwork. You tell me. If it’s not in there, it’s permitted; “Qui tacet consentire” and all that.

    “Imagine a school with a mixture of XP Pro, XP Home from several OEMs…”

    I’d rather do it right; imagine a school with a volume license at educational (or charity) discount levels…

    along with anything that allows one to maintain an IT system easily

    Absolute nonsense, bordering on tin-foil hat wearing paranoid conspiracy theory. Please, feel free to roll out a dubious anecdote about how Active Directory or Small Business Server are deliberately made difficult to administer for me to demolish.

  22. Ted wrote, “If you’re an admin with 500 machines running 500 separate installs; you’re insane or incompetent.”

    Ted, don’t be ignorant. Read about disc-imaging if you have not seen it. It’s just one of many ways to install software on PCs. M$’s EULA makes many methods of questionable legality. Is a disc-image a backup or not? Can you have a backup of the disc-image? Read the EULA and tell us where that is described.

    GNU/Linux allows and encourages any way at all to install the software because the licence permitting running, modification, etc. comes with the software. Simple and clean and flexible.

    Methods of installation I have used with GNU/Linux over the years: installation from CD/DVD, ftp, http, tftp, PXE/netboot/install-to-RAM/NFS, Clonezilla, free downloads from the Internet, etc., most of which would not be allowed by M$ because M$ tries to prevent unlicenced copying along with anything that allows one to maintain an IT system easily. Imagine a school with a mixture of XP Pro and XP Home from several OEMs… This issue alone is sufficient reason to migrate to GNU/Linux.
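    The PXE/netboot route, for example, needs only a DHCP-plus-TFTP service on the LAN. A minimal sketch using dnsmasq, which bundles both; the address range and paths are illustrative, adjust for your network:

```ini
# /etc/dnsmasq.conf — minimal PXE boot service (sketch)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0        # file the PXE client fetches first
enable-tftp
tftp-root=/srv/tftp         # directory holding pxelinux.0 and its config
```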

  23. Ted says:

    @oiaohm

    Let’s say I have to manage 500 PCs with 500 individual installs of Windows.

    Do you really think disk imaging or network deployment is not available for Windows? If you’re an admin with 500 machines running 500 separate installs; you’re insane or incompetent.

    If the anti-virus and the malware are running on the same CPU, there is a chance the malware will have anti-anti-virus code that takes out the anti-virus, rendering the anti-virus on that machine ineffective. Now you have a machine getting massively infected without you knowing.

    But how will this “anti-anti-virus” get onto your system in the first place if it is prevented from executing by UAC or by the current anti-virus? Your premise relies on the malware already being executed. You’re metaphorically opening the box with the crowbar that’s inside it.

    The only true anti-virus is to boot clean and scan.

    With this, I actually agree with you. Even rootkits cannot hide if the OS that’s running them is not running.

    However;

    Something Windows makes insanely hard.

    Insert anti-virus optical media (McAfee Stinger or KAV Rescue CD, for example), cold-boot and boot from the CD. Run the scan, disinfect/delete as appropriate. Remove the CD, reboot. Insanely hard??

  24. oiaohm says:

    Ted, watch this video:
    http://www.youtube.com/watch?v=0BHn4Su2qEo
    This video is what Linux guys call a basic introduction to computer security. Yes, it covers only the basics.

    Yet it shows how to smash a system wide open as a first step. Without understanding your attacker, you don’t stand a chance.

    Also notice that the Linux guys are researching other methods.

    Most of the stuff in Linux Malware Detect does not work any more.

    Really, the LCA video is about how to stop attackers dead.

    Ted, the most important thing is attack surface.

    Let’s say I have to manage 500 PCs with 500 individual installs of Windows. That is going to take far more hours to be sure it is right, compared to 500 PCs running one Linux thick-client install on cluster storage.

    Next anti-virus mistake: if the anti-virus and the malware are running on the same CPU, there is a chance the malware will have anti-anti-virus code that takes out the anti-virus, rendering the anti-virus on that machine ineffective. Now you have a machine getting massively infected without you knowing. There is also anti-Windows-Update malware.

    With thick clients, the anti-virus runs storage-side. So now the attacker has a problem: how do you disable the anti-virus, or any other scanning method the admin is using? Simply put, the attacker is screwed; he can only remain hidden for so long against that wall. Where are thick clients used? SSI (single system image) clusters, for one.
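    One storage-side check the client cannot tamper with is hashing the shared image from the server. This is an integrity check rather than an anti-virus scan, and the paths are illustrative; a sketch:

```shell
#!/bin/sh
# Storage-side scan sketch: the server hashes every file in the shared
# client image, so malware running on a client cannot interfere with the
# check. IMAGE_ROOT and SUMS are stand-in paths for illustration.
IMAGE_ROOT=${IMAGE_ROOT:-/srv/thinclient-root}
SUMS=${SUMS:-/var/lib/image.sums}

# Record known-good hashes once, right after a clean update.
baseline() {
  (cd "$IMAGE_ROOT" && find . -type f -exec sha256sum {} +) | sort > "$SUMS"
}

# Later, report whether anything in the image changed behind our back.
verify() {
  if (cd "$IMAGE_ROOT" && sha256sum --quiet -c "$SUMS" >/dev/null 2>&1); then
    echo "image clean"
  else
    echo "IMAGE MODIFIED"
  fi
}
```

    Run baseline after each sanctioned update, and verify from cron; any client-side tampering shows up on the next pass.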

    The only true anti-virus is to boot clean and scan, something Windows makes insanely hard. With Linux it is very easy to take the system offline and fully scan its hard drive.

    Now the Linux idea of updates being independent starts making a lot of sense.

  25. oiaohm says:

    ChrisTX, SELinux and Smack implement the same thing as grsecurity with trusted path execution if you turn it on; this is RBAC in SELinux and Smack. For the signed bit, you enable TrouSerS (trousers.sourceforge.net) or some other IMA. Yes, on Linux you have a choice of IMA implementations for signing and certification.

    Yes, a lot more distributions come out of the box with SELinux and Smack than with grsecurity.

    Each of these choices makes an attacker’s time worse.

    AppLocker is a whitelist-only method. So you want students to write and run their own programs? Hmm. Complete fail.

    SELinux and Smack both support a rule of “if this application is not known to us, place it in a sandbox”. So yes, you can run it, but it is limited in what it can connect to and even in where it can write.

    One of the most creative ways around AppLocker I saw was a Word document containing a VBA macro that loads a DLL that is an EXE loader; after that, the student could run whatever he liked, no restrictions. Yes, it was a full EXE loader, with no need to call the Windows API to start an executable.

    AppLocker really does not work either. You have clearly never faced AppLocker off against students with brains.

    They were not able to do the same things against SELinux- and Smack-protected systems. SELinux and Smack list what libraries and binary code you are allowed to pull into an application. This is important: once you have binary code loaded, you can do whatever you like unless you are sandboxed.

    ChrisTX, your protips are not protips; they are tips that will be beaten.

    grsecurity should not even be compared to items like SRP and AppLocker that don’t work. grsecurity is a working tool, but it is normally the hard way; SELinux and Smack are most likely already in your distribution and will do the same job.

  26. oiaohm says:

    ChrisTX, you picked the metric: CVEs. The same code error, exploited the same way, also gets reported multiple times for Debian. Just because you are caught out, don’t blame the method.

    WSUS is buggy due to bugs in Windows, and the same issue affects Windows Update and GPO. In theory everything works; I will now list how it all goes bad in the real world.

    Three points on how these fail:

    1) Windows Update might start at 3 am, or any other time you set by GPO, but due to network trouble not complete until 11 am or noon, and only then inform the user to reboot. There is nothing you can set in GPO to prevent this.
    2) The user turns the machine off, so the GPO cannot run.

    The simple solution to prevent these two is to be able to run in thick-client mode and perform the update on an offline client image.

    With http://ipxe.org/howto/wds_iscsi I can do half of what is required: I can run Windows in thick-client mode, but I cannot do an offline client-image update, which leaves me stuck in network hell. Any Windows machine that is off in this case would still have its hard drive on the iSCSI array; only a limitation of the Windows update tools prevents me. Offline updating of Windows would mean I would not need to run GPO updates at night; I would just tell users to turn their machines off at night, and send any left on past a particular point an instruction to turn off. Really, only one machine needs to be running to update.

    Without activation, as with Linux, one image runs a complete network. With activation, as with Windows, the images can still be de-duplicated in block storage.

    3) WSUS is not a network HIDS (host intrusion detection system). Under Linux, if you want to detect whether updates are in place, you run a network of HIDS. Why is this difference important? The Windows rollback system: I install an update, the machine hits an issue, and it rolls back. The problem is that WSUS is still informed that update X is applied to that machine, and even the check used by Windows Update will, in some cases, result in the update not being reapplied.

    WSUS gives a false sense of security; it doesn’t work, due to these issues. Even on a Windows network you should have a network HIDS installed to pick up when WSUS screws up because Windows screwed up. A lot of security issues happen on Windows networks because people trust WSUS to tell them their networks are updated, not aware that WSUS lies.

    “a shutdown is in most cases the cleanest and fastest solution.”
    Shutting down a server without need is stupid, because you lose your caches.

    I know a lot of servers that, when the kernel has not changed, do a “spin down, spin up” instead. This is like a reboot but is not: you drop to the init system’s single-user mode and then, instead of going to a shell, switch straight back to the normal runlevel. The result is that everything in userspace except init gets restarted.

    Caches in memory are left intact after a spin down/spin up, so it is faster than a reboot or shutdown, and the system returns to full performance quicker.
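    On a sysvinit system the spin down/spin up amounts to two runlevel changes. A dry-run sketch (the commands are only echoed here; runlevel numbers and the init system vary by distro):

```shell
#!/bin/sh
# "Spin down, spin up": restart all of userspace without a reboot, so the
# kernel and its in-memory caches stay warm. Echoed as a dry run; swap
# `echo` for real execution on a sysvinit box, as root, from the console.
run() { echo "would run: $*"; }

run telinit 1   # drop to single-user mode; init stops multi-user services
run telinit 2   # switch straight back; init restarts everything in inittab
```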

    Personally, I don’t like the spin down/spin up or shutdown-and-reboot methods because they are a leap of faith: you are presuming the patch works. Presuming something works is a very dangerous thing; HA requires you to presume nothing works until you are proven wrong.

    This is why I migrate instance to instance: I can restart everything in a controlled and audited way. Basically, there is more than one way to skin the cat, and shutdown is the worst possible one. What happens if the server does not come up on the other side? Not good.

  27. dougman says:

    Windows SmartScreen, the new Vista UAC for Windows 8, how creative!

    It basically warns users if they are about to open a potentially dangerous website or file. A warning is displayed and it is up to the user to continue or stop at this point. UAC has been criticized in the past for being too obtrusive and annoying, and people will just turn SmartScreen off.

    Windows SteadyState has been discontinued and is not compatible with Windows 7. Clonezilla, BackupXML, Redo Backup and of course Acronis all work for creating system images.

    Re: If you kill applications, a shutdown is in most cases the cleanest and fastest solution.

    Why should someone reboot a server to kill an application?
    Imagine the cost in stopping the work of 50-people, just to reboot!

    That works out to be $12K a minute based on my last client.

    People hate being nagged. People hate losing overnight work.

    Imagine eBay, Amazon or Wikipedia going down for reboot.

    Even Matt Mullenweg hates reboots.

  28. ChrisTX says:

    “We had an industrial-strength anti-virus that was always in people’s face but it was not enough to stem the malware which was always one step ahead.”

    Actually, as soon as an AV is triggering, you’re already letting viruses in somewhere; then you’re already doing something wrong. E-mail viruses should be caught on the mail server, viruses from drive-by downloads etc. should be caught by SmartScreen – unless of course you’re using a non-IE browser, then this is an issue – and exploit-based viruses use patched holes.

    “I used Clonezilla to reimage PCs.”

    Don’t. SteadyState is a wrapper around mandatory profiles; it will not reset the PC. Using Clonezilla is obviously incompatible with updating software, meaning you leave holes open. Windows isn’t even the direct problem there – if you’re behind a NAT or firewall – rather Adobe Flash, Reader, Java and outdated browser versions (especially IE6) are.

    “(not being able to access shares she could access from GNU/Linux).”

    I would sure be interested to know what kind of shares those are supposed to be. Windows can access NFSv3 and SMB, so unless you’re using NFSv4 (which is still marked experimental in the kernel and was only rather recently introduced into nfs-utils at all) you can always access such shares.

    “In GNU/Linux reboots are mostly required for new drivers or the whole kernel package.”

    That isn’t even the reason why most reboots are necessary in Windows. Hot patching at the kernel level works to some extent. It’s rather that if a shared library is in use, inconsistency of versions should not exist. If process A has the old image loaded and process B the new one, and they’re in some way incompatible, this can lead to unexpected issues. If a library is not in use, no problem.

    “I cannot be replaced by an OS for my knowledge of the criticality of a vulnerability. e.g. a vulnerability in SMB2 may not affect clients not using that service but I still may want the update done sooner or later.”

    No. What I wrote above applies: inconsistency of binary images might cause arbitrary issues. Instead, you should delay the application of the update, which is – drumroll – what WSUS is for. Microsoft does not, without reason, run the MSRC and SRD blogs as well as webcasts to inform you on deployment priorities. Applying an update and not rebooting is never a good idea.

    “I can kill any application that needs restarting on the terminal server.”

    You indeed can. So you can in Windows. However, why would you do that? If you kill applications, a shutdown is in most cases the cleanest and fastest solution. Sure, you can find out what processes block the update and restart those. But then, a reboot can be done with prior notice to users and fully automated. So what advantage does avoiding the reboot with some four-pipe command give you? Right, none.

    “Considering the rarity of malware for GNU/Linux I have never had to do it in the last decade on many terminal servers.”

    PROTIP: Use SRP or AppLocker. (or for Linux grsecurity with trusted path execution). Deployed that in my old school and it did solve exactly that problem.

    “No OS is able to make such decisions better than I because I know what my users want.”

    Yup, that’s why you use WSUS and policies to deploy what you need when you need it. If there are files in use, a reboot is, for the reasons laid down above, the fastest choice. And after all, the OS can determine that.

  29. Ted says:

    @Pogson

    We had an industrial-strength anti-virus that was always in people’s face but it was not enough to stem the malware which was always one step ahead.

    Which one? In my experience, the “industrial strength” anti-virus programs for corporate or other network deployments generally do not get in everyone’s face – they alert a centralised server or set of servers which then alerts the administrator. The only time I’ve seen an AV of this kind alert the USER was when it found a file it could not disinfect or quarantine. A “Get an admin. NOW.” moment, if you will.

    Also, this AV was quite simply not doing the business for you. If you were suffering “waves of malware” and if it was installed, configured and set for updates correctly, you should have migrated away from it. Not to do so would be irresponsible.

    I’m genuinely curious to know which one it was, so I can avoid it when renewal/migration time rolls round again.

    Considering the rarity of malware for GNU/Linux

    Rarity?

    http://www.theregister.co.uk/2011/10/04/linux_repository_res/

    http://www.rfxn.com/projects/linux-malware-detect/

    http://blogs.computerworld.com/14723/no_more_linux_security_bragging_botnet_discovery_worry

    http://blog.eset.com/2011/10/25/linux-tsunami-hits-os-x
    (A two-for-one deal here – Linux malware that was ported from OSX.)

    And more…

    Only rare comparatively speaking; definitely not near non-existent as some of the more vocal and rabid Linux zealots claim. However, most Windows users have two advantages that Linux users do not. Firstly, they’re prepared – most computers ship with an AV pre-installed, and MSSE is freely available for home use. Secondly, Windows users don’t think their computer is magically invulnerable, and don’t have people telling them they’re invulnerable;

    https://www.linux.com/news/software/applications/8261-note-to-new-linux-users-no-antivirus-needed

  30. ChrisTX wrote, “““Requires a reboot” is subjective and no OS can know the answer. It’s up to the system administrator or should be.”

    Absolutely not true. It’s pretty easy actually, if a library is in usage, the using processes need to be restarted. Most times, a reboot is the easiest and most automatable way there is.”

    A system administrator can weigh the risks and decide whether or not to reboot. I have often had an announcement fed over the PA that users should reboot, and I have often deemed such expediency unnecessary during the work day. I cannot be replaced by an OS for my knowledge of the criticality of a vulnerability; e.g. a vulnerability in SMB2 may not affect clients not using that service, but I still may want the update done sooner or later. In GNU/Linux, reboots are mostly required for new drivers or the whole kernel package. I can sometimes reload kernel modules manually via SSH without rebooting. Libraries are no different. If a library is in use that in my judgement needs to be refreshed, I can reboot or I can flush the caches and force a refresh from disc storage. lsof on the terminal server will show me who uses the library:

    lsof | grep name-of-library | awk '{print $1, $9}' | sort -u
    Some packages (like libc6) do this check in their postinst phase for a limited set of services, especially since an upgrade of essential libraries might break some applications (until they are restarted).
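    The same check can also be scripted straight from /proc: a library that was replaced on disk shows up with a “(deleted)” suffix in a process’s maps file. A sketch, assuming a Linux /proc; the proc root is a parameter so the function can be exercised against a fake tree:

```shell
#!/bin/sh
# Print the PIDs of processes still mapping a shared object that has been
# deleted or replaced on disk, i.e. the processes needing a restart after
# a library upgrade.
list_stale() {
  proc_root=${1:-/proc}
  for maps in "$proc_root"/[0-9]*/maps; do
    [ -r "$maps" ] || continue
    # A replaced file keeps running from its old, now-deleted inode.
    if grep -q '\.so.*(deleted)$' "$maps"; then
      pid=${maps%/maps}
      echo "${pid##*/}"
    fi
  done
}
```

    Feed the resulting PIDs to ps to decide which services actually need bouncing.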

    I can kill any application that needs restarting on the terminal server. If a user needs that application they will restart it. That would be a last resort for me only for the most urgent updates. Considering the rarity of malware for GNU/Linux I have never had to do it in the last decade on many terminal servers. No OS is able to make such decisions better than I because I know what my users want. I have to see them and look them in the eye when I explain what happened.

    Ted wrote, “Then Automatic Updates were badly configured. The settings should have been changed to fit the circumstances. For one, allowing the user to postpone a restart is trivial in Group or Local Policy. As for your struggles against malware, have you ever heard of an anti-virus? Or in your shared computer scenario, did you bother to try using SteadyState?”

    We had an industrial-strength anti-virus that was always in people’s face but it was not enough to stem the malware which was always one step ahead. I used Clonezilla to reimage PCs. It worked but made a lot of work for me. All that work disappeared when we used GNU/Linux. I only once after switching needed to re-install and that was a twit who insisted on going back to that other OS in spite of having less functionality (not being able to access shares she could access from GNU/Linux).

  32. Ted says:

    @Pogson

    Ted wrote, in blissful ignorance

    In blissful ignorance of what, precisely?

    Prefixing other people’s quotes with this kind of nonsense only shows that a) you’re arrogant, and b) you have no counter-argument. I would have expected “I did not abort the reboot because…” followed by a valid reason.

    Do you insult me and provide no argument because I’m right, and you simply did not know how?

    You continually bang on this “I was forced to reboot” drum and roll out the “re-re-reboot” nonsense as reasons to why people should not use Windows and Microsoft are evil.

    Why should I have to interrupt my presentation to deal with the frailty of that other OS?

    So you happily allowed it to reboot, interrupting it for longer? See “cutting off your nose to spite your face”.

    Please allow the word “Windows” to enter your vocabulary. Refusing to name it only makes you look childish. The predominant desktop OS in the world is Microsoft Windows; there’s no question of it being “other”. On desktops, the other OS is Mac OSX. The OS that belongs statistically in “other” is desktop Linux.

    All our machines were set to automatic updates in a vain attempt to keep them running in spite of malware.

    Then Automatic Updates were badly configured. The settings should have been changed to fit the circumstances. For one, allowing the user to postpone a restart is trivial in Group or Local Policy. As for your struggles against malware, have you ever heard of an anti-virus? Or in your shared-computer scenario, did you bother to try using SteadyState?
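    For instance, that policy also has a registry-visible form; a sketch of the .reg equivalent (verify the path against your Windows version before relying on it):

```ini
; Registry equivalent of the Group Policy setting "No auto-restart with
; logged on users for scheduled Automatic Updates installations".
; Sketch only; confirm against your Windows version's documentation.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoRebootWithLoggedOnUsers"=dword:00000001
```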

    I switched that client to GNU/Linux and never had that problem again. Case closed.

    Do you also fix punctures on your car by taking all the wheels off? You fixed one easily-remedied problem by changing to a whole new set of problems.

  33. ChrisTX says:

    “That’s because M$ keeps you in ignorance and you assume because you don’t know of a vulnerability that the malware writers do not.”

    That is entirely not the point. See, I’d love to have a list of known holes I can start to exploit. The difference is that with such a list, every 4-year-old can Google the exploit code. Mass exploitation by script kiddies is easily possible.

    “APT often recommends a reboot or asks to restart services or does it on its own.”

    Notice how I was referring to system libraries which are not tracked, not services.

    ““Requires a reboot” is subjective and no OS can know the answer. It’s up to the system administrator or should be.”

    Absolutely not true. It’s pretty easy actually, if a library is in usage, the using processes need to be restarted. Most times, a reboot is the easiest and most automatable way there is.

    “Lock down to year 2011. http://www.cvedetails.com/top-50-products.php?year=2011”

    I like how the Linux kernel and Chrome, which are both shipped by Debian, have fewer holes than Debian itself.
    Never mind, by the way, that the figure for Windows 7 is artificially high because of a class vulnerability in win32k.sys which, for some reason, received a CVE for every single object it was possible to exploit it with, resulting in a single patch receiving 40+ CVEs.
    Never mind that in April 2011 Microsoft released 2 updates pro-actively hardening the SMB core and fixing a ton of holes found by their own automated discovery.

    “In 2011, ‘the Conficker worm is still the most commonly encountered piece of malicious software seen by Sophos customers’. see http://isc.sans.edu/diary.html?storyid=12526”

    That absolutely proves this is relevant to Windows 7. Surely, it is not surprising; Conficker was an impressive piece of malware. However, because some idiots – and statistics and infection rates prove that this is mainly an XP issue – disable Windows Update because it protects them against Microsoft finding their pirated Windows installs, such malware still spreads.

    Actually, I think XP is an 11-year-old OS and, despite having seen major upgrade rounds with SP1/2/3, it’s still 11-year-old tech. I don’t really care what you have to say about XP because, honestly, XP is cr*p by today’s standards (Windows 7, that is) – in security, performance, UX, API availability, and everything else.

    “I can attest to forced patches and reboots on our heavily managed Windows network”

    I don’t know, but you should probably tell your administrators about this thing named WSUS. It’s made for exactly that purpose and, together with an appropriate GPO, rolls out updates exactly when they should be: in the middle of the night.

  34. oe says:

    I can attest to forced patches and reboots on our heavily managed Windows network; this has happened to many a coworker as well, more times than I care to count. Fortunately, when real work needs to get done we have a Linux (and Mac) based “research” LAN one building over…

  35. Flying Toaster says:

    Flying Toaster, it’s not science fiction that MS has more applications in general use running web-exposed with unpatched remote CVEs than Debian does. (Italics mine)

    Delusion then.

    Really, you are one very stupid MS troll, Flying Toaster. Calling up Debian CVE numbers causes Linux guys to look at MS products’ CVEs, so wake up to the fact that you are being stupid.

    See my refutation above based on Secunia’s statistics.

    Win: Linux.

    Charlatanism at its finest.

  36. oiaohm says:

    Flying Toaster, it’s not science fiction that MS has more applications in general use running web-exposed with unpatched remote CVEs than Debian does.

    Really, you are one very stupid MS troll, Flying Toaster. Calling up Debian CVE numbers causes Linux guys to look at MS products’ CVEs, so wake up to the fact that you are being stupid.

    There are particular topics any MS troll should know to stay well clear of; security is one of them. Of course, at this point the MS troll plays the larger-market-share card, the Linux guy plays the more-online card, and a debate neither can win goes on, because really the two even out for odds of being attacked.

    The result is a loss for the MS troll, because the Linux guys come away thinking Microsoft is worse, with no possibility of moving them. The bad part is that some MS guys might now change sides. Win: Linux.

  37. Flying Toaster says:

    APT often recommends a reboot or asks to restart services or does it on its own. (Italics mine)

    A claim that is empirically provable to be dubious at best.

    This is also, of course, compounded with your usual double-standards on “zero-day”. (see #comment-81018 in “Writing and GNU/Linux”)

    That’s because M$ keeps you in ignorance and you assume because you don’t know of a vulnerability that the malware writers do not.

    So this is the cretin-speak for “I don’t have anything to substantiate my claim”, eh? Way to pull an ad hoc hypothesis here.

    patched or not because they have made their software so complex they cannot debug it.

    The same argument can also be applied to Debian, your favorite OS/love interest, and with the devs’ clear knowledge of unpatched vulnerabilities, just to add insult to injury.

  38. Flying Toaster says:

    We ran on diesel power so running them at night was expensive.

    And I bet diesel power is expensive to run prior to your presentation as well.

    What a way to make excuses for your incompetence, Pogson!

    Now, patching does fix that but the malware artist changes to some other vulnerability and suddenly you are not patched against that until M$ gets its act together, which never seems to happen.

    Science fiction!

  39. ChrisTX wrote of picking up malware, “That you wish Sir. Unfortunately, not even XP will do that “.

    In 2011, “the Conficker worm is still the most commonly encountered piece of malicious software seen by Sophos customers”. See http://isc.sans.edu/diary.html?storyid=12526

    Now, patching does fix that but the malware artist changes to some other vulnerability and suddenly you are not patched against that until M$ gets its act together, which never seems to happen. M$ has more “critical”/remote execution exploits in the OS, browser and its office suite than the whole Debian distro with thousands of applications. Combine that with installed base and M$ and “partners” are a disaster for IT. If M$ were wonderfully responsive, there would be no need at all for the security industry that has grown up stealing cycles from Wintel PCs.

    Remember the Honeypots? XP machines were getting infected in minutes just idling on the network. Back in 2004, XP was shipped without a firewall and fell over in seconds. Even today, every PC running M$’s OS is vulnerable to some attack (M$’s opinion, not mine), patched or not because they have made their software so complex they cannot debug it. There are more vulnerabilities in applications these days than the OS but no one in their right mind runs that other OS without some hand-holding third-party application to try to protect the weak sister of IT.

  40. oiaohm says:

    reactosguy:
    http://www.cvedetails.com/top-50-products.php
    Notice where Debian is and where Windows 7 is.

    Lock down to year 2011. http://www.cvedetails.com/top-50-products.php?year=2011

    The simple fact of the matter is that MS products are, if anything, worse than Debian’s current state. Install a machine with the MS OS and every MS product to go with it, and you most likely have more holes than Debian by a long margin. Yes, that is counting all of Debian’s third-party problems.

    Then there are thick clients, which MS Windows still does not support well: network-served. In power-tight areas these can be great, because people can turn all the clients off and only the server needs to update.

    I have had a Windows machine kick me out at 11:30 to update. Yes, it should have done it at 3 am. Ted, Windows Update is not the most dependable thing when a computer has a slightly bad network switch: the machine had been downloading the patch from the WSUS server since 3 am, only got it downloaded by about 11-something, installed it in the background, then asked to reboot. Random roll-the-dice stuff from Windows when you mix in a few suspect switches.

    I have found that behaviour tracks to dying switches that are not bad enough to affect a Linux thin or thick client to a noticeable degree.

    Ted:
    “Please don’t kid yourself that Linux file systems do not fragment – they do.”
    That depends on the file system. ext4 with online defragmentation as part of the file-system driver mostly does not, for predictable reasons: when ext4 with that feature is writing fragmented files, you will notice it lose some I/O performance as it defragments to prevent the issue.

    File systems with embedded defrag basically don’t fragment, but even then the performance hit can be a major issue. Sometimes it is better never to defrag: just insert a new hard drive and copy everything over; that is faster.

  41. ChrisTX wrote a lot of irrelevant stuff and mixed clearly wrong information with believable stuff to give it credence.

    “oh I see” provides no information. “That’s because your system does not know when it requires a reboot.” is clearly wrong. APT often recommends a reboot or asks to restart services or does it on its own. It’s all in the scripts. “Requires a reboot” is subjective and no OS can know the answer. It’s up to the system administrator or should be. That M$ assumes the system administrator does not know best is a fault.

    ChrisTX wrote, “I’d love you to see pointing me to a _known_ security hole in Windows 7+Office 14”

    That’s because M$ keeps you in ignorance, and you assume that because you don’t know of a vulnerability, the malware writers do not either. Ignorance is bliss. What of the vulnerabilities that existed in that other OS since the days of Lose 3.1 and were ported all the way to “7”? Where were they in the vulnerabilities list?

  42. ChrisTX says:

    “Also, within a day or so, perhaps within a few hours even. Windows 7, secure that it is, will pickup some malware and now your completely hosed.”

    That you wish, Sir. Unfortunately, not even XP will do that – OK, if you run SP1/2, which haven’t been receiving updates for ages, it’s possible. A firewall doesn’t help you a lot at home, and an AV is pretty much pointless these days if you use IE9+SmartScreen anyway. Never mind that XP has infection rates several times as high as Windows 7’s.

    “Oh lets not forget, you also need to perform a defrag to that Windows can reorganize itself due to it’s sloppy nature.”

    The Linux-doesn’t-need-defrag-but-Windows-does argument can only be presented by people who don’t know what they’re talking about.

    ( Speaking about ext2/3/4 in the following. XFS/ReiserFS/JFS do ‘require’ defragmentation; XFS even optimizes with regularly scheduled defrags. )

    So OK, ext4 spreads files across the disk so it doesn’t need defragmentation, and Windows doesn’t do this? Sounds as if that were easily implementable; right then, so why wouldn’t MS do it?
    Here’s why: because it makes more sense to keep the files in directory order on the disk. ext4’s usage of a linked list as its primary structure – hello, FAT – disallows such an implementation.
    I’m wondering: what is the directory fragmentation % of your disk? Because, hm, ext4 seems to have to seek all over the bloody hard drive to enumerate a directory, while NTFS doesn’t, due to consolidation.

    In fact, NTFS (Vista+) relies on an approach based on analyzing file usage: files used read-only (like system binaries, which usually do not suddenly grow in size) are stored consolidated, while bigger files with a tendency to grow are stored with enough space after them to prevent fragmentation. Not only is this more efficient than throwing your files all over the disk, it’s also compatible with consolidating files to store them in directory order.

    Then, NTFS, like XFS, relies on a worker-based approach: using modern task scheduling and system-idle detection, it can optimize the disk during idle times. This leads to average file and directory fragmentation of <1%.
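    The payoff of that growth-room preallocation is easy to demonstrate with a toy block allocator. A minimal sketch in Python, assuming two files that grow in alternation; this models the general idea only, not NTFS’s or XFS’s actual allocators:

    ```python
    def extents(blocks):
        """Count contiguous runs among a file's block numbers."""
        runs = 1
        for a, b in zip(blocks, blocks[1:]):
            if b != a + 1:
                runs += 1
        return runs

    def simulate(reserve):
        """Two files append 8 blocks each, taking turns.

        reserve=0: every append grabs the next free disk block, so the
        two files interleave and fragment. reserve=8: each file claims a
        private contiguous region up front (speculative preallocation),
        so each file stays in a single extent.
        """
        next_free = 0
        files = {"a": [], "b": []}
        base = {}
        for _ in range(8):
            for name in files:
                f = files[name]
                if reserve:
                    if name not in base:
                        base[name] = next_free
                        next_free += reserve  # claim growth room now
                    f.append(base[name] + len(f))
                else:
                    f.append(next_free)  # first-free-block allocation
                    next_free += 1
        return {name: extents(f) for name, f in files.items()}

    print(simulate(0))  # interleaved appends: 8 fragments per file
    print(simulate(8))  # preallocated regions: 1 extent per file
    ```

    The toy version exaggerates the effect, but the trade-off it shows is real: reserving slack costs free space now in exchange for contiguous reads later.
    
    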

    "Also don’t point to CVE list because Windows 7 is equally bad if not worse."

    Hahaha, no. I'd love to see you pointing me to a _known_ security hole in Windows 7+Office 14. Remember that these are known holes, with exploits buildable by anyone. You might not say that Windows 7+Office 14 is an impenetrable combination, but you wouldn't be – and that is the point here – able to use _known_ and disclosed holes against it. Debian's backport turd is the reason they have that problem.

    "Also debian has most of those patched in the testing branch."

    Most of these: http://security-tracker.debian.org/tracker/status/release/testing?show_remote_only=1&show_high_urgency=1

    Oh I see.

    "The presentation I was giving was on one computer while the client machine I was using as a thin client is the one that wanted a reboot."

    Say, with what version of Windows? Keep in mind that the update system was majorly reworked in NT 6 and will be heavily improved again with Windows 8.

    "I switched that client to GNU/Linux and never had that problem again. Case closed."

    That's because your system does not know when it requires a reboot. Something like a shared-library usage counter could tell it – except that no distro I know of implements such checks.
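    Such a check is, in principle, implementable entirely from user space: the Linux kernel marks files that were deleted or replaced while still mapped with a "(deleted)" suffix in /proc/&lt;pid&gt;/maps. A minimal sketch, assuming that maps format (the sample mapping lines below are made up):

    ```python
    def stale_libraries(maps_text):
        """Scan the text of a /proc/<pid>/maps file for shared libraries
        that were deleted or replaced on disk while still mapped into the
        process. A non-empty result means the process is running old code
        and is a candidate for a restart after an update.
        """
        stale = set()
        for line in maps_text.splitlines():
            fields = line.split(None, 5)  # pathname is the 6th field
            if len(fields) == 6 and fields[5].endswith("(deleted)"):
                path = fields[5][: -len("(deleted)")].strip()
                if ".so" in path:  # only care about shared libraries
                    stale.add(path)
        return sorted(stale)

    # Made-up sample data in the /proc/<pid>/maps format:
    sample = (
        "7f1a00000000-7f1a00001000 r-xp 00000000 08:01 123 "
        "/lib/libssl.so.1.0.0 (deleted)\n"
        "7f1a00002000-7f1a00003000 r-xp 00000000 08:01 456 "
        "/lib/libc-2.13.so\n"
    )
    print(stale_libraries(sample))  # ['/lib/libssl.so.1.0.0']
    ```

    Looping this over every directory under /proc would give a whole-system answer; restarting only the flagged processes avoids a full reboot.
    
    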

  43. Ted wrote, in blissful ignorance, “The tale also exposes his lack of basic Windows knowledge for an IT worker; shutdown /a and/or net stop “automatic updates” would have prevented the unwanted reboot.”

    The presentation I was giving was on one computer, while the client machine I was using as a thin client is the one that wanted a reboot. Why should I have to interrupt my presentation to deal with the frailty of that other OS? It was an object lesson for my students, as it turned out. All our machines were set to automatic updates in a vain attempt to keep them running in spite of malware. Many users turned them off at night, so any time was the rule. We ran on diesel power, so running them at night was expensive. I could have implemented wake-on-LAN to update at night, but with the particular clients a fraction of the machines randomly did not wake up. The real world is not as simple as Ted assumes. I switched that client to GNU/Linux and never had that problem again. Case closed.
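    For reference, driving wake-on-LAN itself is trivial; the hard part, as noted, is hardware that ignores the packet. A minimal sketch of the standard magic packet, using a made-up MAC address:

    ```python
    import socket

    def magic_packet(mac):
        """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
        the target's 6-byte MAC address repeated 16 times (102 bytes)."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        return b"\xff" * 6 + mac_bytes * 16

    def wake(mac, broadcast="255.255.255.255", port=9):
        """Broadcast the packet on the local subnet; UDP port 9 (discard)
        is the conventional choice for WOL."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(magic_packet(mac), (broadcast, port))

    pkt = magic_packet("00:11:22:33:44:55")  # made-up MAC for illustration
    print(len(pkt))  # 102
    ```

    The NIC and BIOS must both have WOL enabled for the machine to respond, which is exactly where flaky clients fall down.
    
    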

  44. reactosguy says:

    How about a fair test.

    Even if the operating systems were the only differing factors, there are other external factors, such as Windows having more viruses (thus it is more likely to catch one earlier), Linux being able to have viruses too (so you could still pick one up within a month), and what type of account you use. No, really. Limited accounts are more secure than admin accounts.

    Next, they don’t provide realistic virus-catching scenarios. Nobody keeps their computer on for a whole month without rebooting or shutting down, and essentially nobody wants to use their computer without security software. It’s an unrealistic test.

    If Windows is as “great” as everyone makes it out to be and as “secure” as M$ says it is, there should not be any problems.

    We don’t claim Windows is as great as everyone makes it out to be, but it still gets your basic computing tasks done. We’re not promoting it; everyone knows it and uses it on their computer.

    Though I have to admit, I’m using Windows XP, and it’s possible that virus programmers are targeting new systems.

    ———-

    Also, oiaohm, where is your evidence that Windows 7’s CVE list is just as bad as Debian’s?

  45. oiaohm says:

    Flying Toaster, if you had read my stuff, you would know dougman has failed to do 3 things required to run rebootless.

    So dougman can be completely faulted for incompetence. Of course, I am not incompetent.

    Also, don’t point to the CVE list, because Windows 7 is equally bad, if not worse. Also, Debian has most of those patched in the testing branch. Yes, you can run an install as a mixture of the stable and testing branches to avoid security issues.

  46. Ted says:

    @Dougman

    The outcome will be that Windows will stop working altogether a few times during that period and will need to be rebooted.

    Windows will reboot automatically at 3am local time if required, if Automatic Updates is on and left to its defaults. Robert’s tales of Windows kicking him out in the middle of the day are down to a badly configured system. Badly configured either directly in Automatic Updates or in an incorrect clock or time-zone – none of these speak kindly of his sysadmin skills. The tale also exposes his lack of basic Windows knowledge for an IT worker; shutdown /a and/or net stop “automatic updates” would have prevented the unwanted reboot.
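    A presenter who wanted a one-shot guard could wrap those two commands; a sketch, assuming “wuauserv” is the service name behind “automatic updates” (as on XP), with a dry-run mode so it is harmless off-Windows:

    ```python
    import subprocess

    # The two commands mentioned above: abort a pending shutdown, and
    # stop the Automatic Updates service ("wuauserv" on XP-era Windows).
    GUARD_COMMANDS = [
        ["shutdown", "/a"],
        ["net", "stop", "wuauserv"],
    ]

    def presentation_guard(dry_run=True):
        """Run both commands, ignoring failures (e.g. when no shutdown is
        actually pending). With dry_run=True only the command list is
        returned, keeping the sketch testable off-Windows."""
        if not dry_run:
            for cmd in GUARD_COMMANDS:
                subprocess.run(cmd, check=False)
        return GUARD_COMMANDS

    print(presentation_guard())
    ```

    Running this once at the start of a presentation would have pre-empted exactly the mid-lesson reboot described earlier in the thread.
    
    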

    To install patches, the computer must again be rebooted.

    Not necessarily, and at least Windows doesn’t let you think you don’t need a reboot after some updates, unlike Linux.

    Also, within a day or so, perhaps within a few hours even. Windows 7, secure that it is, will pick up some malware and now you’re completely hosed.

    Any malware you might pick up these days through user action (there are very few ‘true’ worms or viruses these days; it’s all fake AVs and ransomware now) will be constrained to the current user’s profile. It’s TRIVIAL to remove it once you switch to an admin user. You don’t even need to reboot in some cases. I know, I’ve done it.

    Are you infected with any of the Open-Source Linux rootkits?

    http://www.theregister.co.uk/2008/09/04/linux_rootkit_released/

    That’s from 2008, so you’ll have re-installed your OS about six times since then, BiAnnualForcedDeathMarchTM and all that, but you’d still never know if the new and improved version was lurking. But hey, Linux is invulnerable, right?

    Oh let’s not forget, you also need to perform a defrag so that Windows can reorganize itself due to its sloppy nature.

    There have been a few inventions lately you may have heard of: large fast hard disks with large buffers, Windows NT, and NTFS. File fragmentation is not the problem it may have been in the past, and it only remains an issue where file I/O and latency are important – database files and the like. For general usage, fragmentation would largely not affect you.

    BTW, the newer versions of Windows automatically defrag on a schedule. Fragmentation is also not solely a Windows phenomenon; Mac OS X also defrags the hard disk automatically and moves system files to the outside of the platters for performance. Please don’t kid yourself that Linux file systems do not fragment – they do.

    Pwn2Own at CanSecWest performs events such as I described every year, and Windows is pwned each time.

    Mac OSX and Safari usually go first, I think you’ll find.

    The majority of remote exploits on Windows come from reverse engineering the patches, not from the holes being blindingly obvious to anyone with nmap and a mischievous turn of mind. People going out of their way to not install patches is hardly Microsoft’s fault.

    There’s no pleasing the Linux zealots: either Microsoft lets people leave systems unpatched and things like Blaster and Slammer run riot months after the patch was issued, in which case they’re lambasted as “irresponsible”, or they force people to be updated against their will, in which case they’re a short step from “fascists”.

    Linux? Well, they’d “rather not bother”; it’s just silly.

    http://security-tracker.debian.org/tracker/status/release/stable?show_remote_only=1&show_high_urgency=1

    This may be why Pwn2Own would “rather not bother” with Linux. It’s a walk-over for them.

  47. Flying Toaster says:

    It seems that Debbie is one heck of a chick that does know how to keep her legs crossed.

    (Source courtesy of ChrisTX)

  48. Flying Toaster says:

    The outcome will be that Windows will stop working altogether a few times during that period and will need to be rebooted.

    You must be a proud graduate of oiaohm’s school of science fiction writing, aren’t you?

  49. Clarence Moon says:

    “You want games? How about a fair test.”

    Send your picture to Merriam-Webster post haste, Mr. Doughman! They need an illustration for their definition:

    dweeb – n. Slang. A person regarded as …

  50. dougman says:

    You want games? How about a fair test.

    Fine, let’s set up two identical laptops.

    1. Windows 7, fully patched, no antivirus, no firewall

    2. Debian Linux, fully patched, no antivirus, no firewall

    Run these on the web for 30 days without a reboot, all the while letting everyone and anyone jump on them to do whatever.

    If Windows is as “great” as everyone makes it out to be and as “secure” as M$ says it is, there should not be any problems. Agree?

    *Nod* head

    The outcome will be that Windows will stop working altogether a few times during that period and will need to be rebooted.

    *Scratch* head

    To install patches, the computer must again be rebooted.

    *Scratch* head

    Also, within a day or so, perhaps within a few hours even. Windows 7, secure that it is, will pick up some malware and now you’re completely hosed.

    Awww, crap. 🙁

    Oh let’s not forget, you also need to perform a defrag so that Windows can reorganize itself due to its sloppy nature.

    *Scratch* head

    See how silly all this sounds? Windows needs additional software to work; and Linux?

    Well, the Linux machine will just keep on running and running without a hiccup whatsoever.

    *thumbs up*

    Pwn2Own at CanSecWest performs events such as I described every year, and Windows is pwned each time. Linux? Well, they’d “rather not bother”; it’s just silly.

    D.

  51. Flying Toaster says:

    I recommend using Debian GNU/Linux

    The crab-encrusted partner for all “freedom” lovers.

    Now, Windows 7, on the other hand… Again, doesn’t it feel great to beat someone at his own silly game?

Leave a Reply