M$ Refuses to Compete with XP or “7”

Sources close to Microsoft’s sustained engineering team, which builds and releases service packs, have told The Register there are no plans for a second Windows 7 SP – breaking precedent on the normal cycle of updating Windows.

via Microsoft has no plans for a second Windows 7 Service Pack • The Register.

For those who have not administered tons of PCs running M$’s OS, there’s this thing called a “service pack”, which is a collection of updates since Day One of each release. The idea is that instead of applying hundreds of individual updates, one can update the whole thing from a single file and be nearly current, saving a lot of time and complexity when installing the OS. For example, where I last worked everyone was on FAT with XP SP1. Figuring folks might be better off updated, I took an image of the hard drive of the best-performing SP1 machine and spent many hours updating it and bringing it current. Then I stored a new image and installed that on every PC in the classrooms, saving weeks of work. The idea being that if a new batch of PCs came in I could work from the image of the latest update instead of some ancient snapshot. When we got PCs in they were often a year or more behind in updates. It’s just a waste of time to have to apply every update issued since some ancient point in time.

The situation with GNU/Linux is quite different. The package manager takes care of this automatically and you are never further out of sync than your last update. The package manager not only keeps track of dependencies, it makes updating a long-neglected system no more difficult than updating one that is already current.
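
On a Debian-family system, the whole “catch up” amounts to something like this (a minimal sketch; the cron line is just one common way to automate it):

    # Bring an out-of-date machine current in one pass.
    apt-get update          # refresh the package lists
    apt-get -y upgrade      # apply every pending update in one operation

    # Or do it unattended on a schedule, e.g. with a line in /etc/crontab:
    # 30 2 * * * root apt-get update && apt-get -y upgrade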

Anyway, if M$ won’t make any more service packs for “7”, the whole business world will be annoyed. Moving from XP to “7” will get more expensive in manpower every month. Does anyone think it’s a great idea to have the folks paid by the upgrade in charge of upgrading your system? I suggest they move to Debian GNU/Linux to eliminate this annoyance. Along the way, they get the convenience of updating apps and OS together, local caching of apps and OS, simple remote management that reduces the need to re-image, and free upgrades/updates. It’s the right way to do IT.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

29 Responses to M$ Refuses to Compete with XP or “7”

  1. Tiberius James Hooker says:

    “So, Novell exists.”

    As a subsidiary of the Attachmate Group, yes. This is not disputed.

    Sun Microsystems, MySQL AB and BEA Systems still exist as subsidiaries of Oracle, too.

    Nortel and Norstar exist as subsidiaries of Avaya.

    Softimage still exists as a subsidiary of Autodesk.

    Ulead still exists as a subsidiary of Corel.

    Even Silicon Graphics continues to exist as a subsidiary of Rackmount Systems.

    It’s like raising your dog as a zombie and gleefully telling everyone your dog “still exists”.

  2. TJH, beating a dead horse, wrote, “it doesn’t mean that MySQL AB is still alive. Same goes for Novell.”

    Novell still sells OS.

    Novell still has “partners” selling product.

    So, Novell exists.

  3. Tiberius James Hooker says:

    Nortel also has news updates from earlier this month.

    It doesn’t mean it isn’t wholly owned and controlled by Avaya. Oracle still updates MySQL.com; it doesn’t mean that MySQL AB is still alive. Same goes for Novell.

    What do you really think happens to a company after it’s bought? I suppose Norstar is alive and well as an independent company as well? Avaya still sells Norstar gear after all, long after it was acquired by Nortel, which was acquired by Avaya.

    And all those acquisitions Microsoft made, they’re all healthy, independent companies too, right?

    Did you know that Novell isn’t publicly traded anymore? Why do you think that is? Attachmate (I remember now, AM bought Novell, Rackmount Systems bought SGI) still uses the Novell and SuSE brands, but they’re at best shell companies.

  4. Tiberius James Hooker says:

    “Last year, University of Sherbrooke had the most powerful super-computer in Canada. It ran GNU/Linux of course,”

    What about UQAM, UdM, UdQ, Concordia, McGill or the dozen or so Universities here? I never said nobody uses it, I just said that “hotbed for adoption” is a tad of an exaggeration.

    Good for UdS though.

    “University of Sherbrooke has 4 labs with GNU/Linux”

    Relative to how many Windows and Macintosh labs? Vanier and Dawson Colleges have a Linux lab or three between the two of them, but have about 20 Windows labs and 3-4 Macintosh labs. Same goes for Cegep du Vieux Montreal, Bois-de-Boulogne, Ahuntsic, Marianopolis, Rimouski, and I suspect the other two dozen.

    “The government of Quebec was sued for buying M$-only and lost…”

    The government of Quebec over the past decade of federalist, Liberal rule is the most corrupt government since Duplessis’ Union Nationale. It’s in bed with Power Corp, Quebecor and Vito Rizzuto’s mafia. Cost estimates are purposely exaggerated on public works so that everyone gets their cut. When it comes to whom the Liberal govt was buying from or what, we have much more important considerations.

    You may not immediately see the relevance of this, but half the point is that this is why government contracts cost so much in public funds, and the other half is that we, right now, don’t give a rat’s ass whom the product is coming from; there are far greater concerns.

    “Savoir-faire Linux became the first company in eastern Canada to join the Linux Foundation. It has 60 consultants in PQ.”

    No but seriously Pogs, that’s one company out of several hundred. Either your standards are way too low, or we have very, very different definitions of what constitutes a “hotbed”.

    Are you defining any environment where the rate of adoption is > 0 as a “hotbed”? Adoption here is about the same as it is anywhere else: it’s reasonably popular in corporate deployment, though no more than anywhere else, and you’ll see a few labs in education, though no more so than anywhere else.

  5. Novell faked its death! see http://www.novell.com/prblogs/balancing-data-and-employee-productivity/

    They have some bot publishing blog entries, just a couple of weeks ago.

    (sarcasm)

  6. TJH wrote, “Truth be told though, and since you mentioned it, you hardly see Linux in schools here. In the Cegeps and universities, you’ll have a lab or two for the CS students, with Linux machines alongside Unix systems, the design, artistic and music programs are exclusively Mac – you’ll see more Macs than Linux and Unix combined, and you’ll see several times more Windows machines than all three combined. *though I can only speak for the Metropolitan and the Capital)”

  7. Tiberius James Hooker says:

    “More FUD! Novell was bought but did not go out of business. Check them out.”

    How’s it FUD, precisely? Are you really that naive, Pogs? Nortel, Silicon Graphics, and others also still have public websites but have, as actual companies, been out of business since being acquired (by Avaya and I think I might be mixing up whether Attachmate bought SGI or Novell). Hell, even MySQL AB’s website is still up, but we all know the company ceased to exist once Oracle took over.

    The brand and some of the products are kept alive by the new owner, but the company is long dead.

    Where was the previous FUD, this is supposed to be more of?

    “Ha! La Belle Province is a hotbed of GNU/Linux adoption. The MILLE project was funded by La Belle Province years ago and is still going strong.”

    What does that have to do with the unfortunate state of the Manitoban public education system? If the funding situation in Manitoban classrooms really is in the dire straits you describe, this should be cause for concern, rather than an excuse to go on some irrelevant Linux tangent.

    It’s amazing; if I understand it right, you use the lack of proper funding in schools as an excuse to promote Linux, which is disappointing. It’s cause for serious concern; shouldn’t you be mobilizing to petition your MPs to prioritize education? Or is it only La Belle Province that is willing to fight for better, better-funded education? Use the funding to buy bigger and better or more manageable Linux systems for all anyone cares, but are you really treating it as an opportunity to sneak in Linux? Or are you simply exaggerating?

    I get that you’re up in the middle of nowhere, but here, neither Rimouski, Baie d’Ufre, Gaspesie, Sherbrooke nor even Abitibi-Temiscamingue is worse off than Montreal or La Capitale Nationale; we fought for that, of course.

    The worst part is that you get on the defensive so quickly; at no point did I berate Linux, I only argued that every operating environment has mechanisms to handle updating large deployments with trivial ease. At no point did I even mention Linux in relation to funding. It was purely a response (one of concern, might I add) to your description of the state of your hardware, being reliant on repurposed and hand-me-down machines.

    The public system here gets government-subsidized contracts from vendors and uniform hardware configurations, with the major exception being the Macs in the design programs; Mac Pros aren’t cheap, but they get phased in semester by semester. The point being that the better funding allows for better arrangements in terms of hardware.

    Truth be told though, and since you mentioned it, you hardly see Linux in schools here. In the Cegeps and universities, you’ll have a lab or two for the CS students, with Linux machines alongside Unix systems; the design, artistic and music programs are exclusively Mac – you’ll see more Macs than Linux and Unix combined, and you’ll see several times more Windows machines than all three combined (though I can only speak for the Metropolitan and the Capital).

    Secondary school is almost exclusively Windows, with some Macs here and there. It’s not a hotbed, it’s really just like anywhere else in this regard.

  8. TJH wrote, “I’d love to ask Novell, but they went out of business,”

    More FUD! Novell was bought but did not go out of business. Check them out.

    TJH wrote, “I do feel for you though, apparently the Manitoban school system isn’t as well-funded as in La Belle Province, it’s a shame.”

    Ha! La Belle Province is a hotbed of GNU/Linux adoption. The MILLE project was funded by La Belle Province years ago and is still going strong. MILLE evolved into LTSP-Cluster and there’s a business that supports it globally. LTSP-Cluster is now a part of Ubuntu GNU/Linux and is a scalable implementation (load-balancing multiple servers with secure protocols) of LTSP. There’s a reason it exists. GNU/Linux in schools and LTSP work. Individual schools like LTSP. School divisions with multiple schools love LTSP-Cluster.

  9. Tiberius James Hooker says:

    “Ask RedHat or Novell. They are selling management of systems all over the planet.”

    I’d love to ask Novell, but they went out of business, and I’m pretty certain Attachmate was more interested in the non-Oracle share of the SVR4 copyrights. That aside…

    If we’re talking management of systems as in they manage them for you, then this whole discussion is utterly pointless. You have the vendor managing the systems for you, who cares what method they employ, and your whole argument about “wasted” manpower toiling through hotfixes one at a time is made equally pointless since such support packages exist for pretty much any OS, which means ultimately, enterprises with these support contracts are not affected in the least, end of discussion.

    Further, you’re flip-flopping all over the place now, mooting your tangent about Debian. We’re talking RHEL support contracts here, which means you’re paying a boatload ($2499 per socket per year, last I checked, vs. buying your RHEL/CentOS/OUL support from Oracle for a comparatively reasonable $2499 per system per year, or Microsoft’s licences, for that matter)*

    * = keep in mind that VLK licences are cheap, and only one WinServer license is required; buying support from Red Hat is not cheap.

    If we’re talking about any other sort of managed systems, my general experience in the field is that they’re not testing their updates and fixes against your specific hardware configuration or testing that they work in your specific environment for your specific use case (in many cases, Avaya and their RHEL/CentOS-based products come to mind: you so much as blink at it the wrong way, and you void your support contract). Those that do, those plans cost several arms and several legs. Trust me, it costs a lot less to hire someone than it does to pay Avaya’s experts $600/hour.

    ” I would just type all “some command” and it was done, every PC and server updating seemlessly. A large organization can set up its own repository of stuff well tested and update from their.”

    You’re not paying attention, as I mentioned in an earlier comment, this is precisely how Linux systems, in my experience, tend to be managed in enterprise environments. All that’s being argued is that technologies exist in the Windows ecosystem to achieve the same result with trivial ease. You totally underestimate the power and flexibility of GPO.

    “Debian GNU/Linux runs a fair percentage of web servers and that’s the way I would handle updates with Debian GNU/Linux.”

    And it’s a commonly used approach to managing Linux systems, +1 to you for taking the sane approach. I don’t find it as robust as integrating into an already existing domain controller, or using beadm on ZFS datasets, but it’s no more or less trivial to do. You work with the tools at your disposal, and make the best of it.

    “TJ Hooker wrote of WSUS, “the whole process happens unattended.””

    Thank you for catching the reference ^_^

    “Nope, with just 100 PCs, I was having about 3% fail to update and required manual intervention to avoid hours more operation without protection from the latest “zero-day” attacks.”

    Do keep in mind that while you’re talking about WSUS (which I have only limited experience with, and am not qualified to comment on), I’m talking about AD, GPO, and RIS/WDS. We’re getting nowhere with this pointlessness; I propose agreeing to disagree and moving on to other points.

    ” My system was set up by a professional with M$’s certification but it did not happen unattended.”

    Come now, we both know all it takes to get an MCSE cert is being able to tie your own shoelaces (and in fairness, RH and Novell certs are not much better – the Sun, IBM and HP certs, however, are what nightmares are made of). But again, you’re talking WSUS, I’m talking AD, GPO and RIS/WDS; this is going nowhere. I get it, in your experience WSUS sucks; I’m neither agreeing nor disagreeing, but I accept your position on it.

    “The slipstreamed image is useless when a new batch of PCs come in with different motherboard, drivers, etc.”

    Well, like I said, I come from a background in the enterprise. Deployments are usually (and I mean almost always) homogenous hardware configurations, with variances from department to department.

    You could either maintain separate images for different hardware deployments, or you could maintain the same base image (slipstreamed with hardware-agnostic updates and hotfixes) using the cache of generic drivers included in the base install, and the machines will receive the drivers appropriate to their configuration as updates from the domain. It’s a stupidly powerful and flexible solution, and it’s trivial to set up and maintain.

    The catch is that it’s a moot argument, disparate hardware configurations introduce the same problem to deal with on large Linux deployments, this is something that needs to be addressed in any operating environment.

    ” You have to install on some random PC that “purchasing” or in the case of schools, some donor, inflicts on you. ”
    Fortunately, I work in a much more organized and, dare I say, predictable environment. I need new hardware, I requisition it, then I get it according to spec. Suppliers are pretty steady, configurations are steady. I can safely assume a homogenous hardware configuration within any given department. There’s a reason we do this, and we do this regardless of operating environment for the same reasons. I think we can agree that in any environment, the less entropy the better; the more variance there is in configuration, the more complexity is added to the task of administration and maintenance.

    I do feel for you though, apparently the Manitoban school system isn’t as well-funded as in La Belle Province, it’s a shame.

    “What’s on the PC could be junk, wrong OS, or a year out of date.”

    My closest experience with that is having 4-5 configurations in a dozen departments (not all of them under my watch, but we help each other out). It was really just a case of slipstreaming updates and hotfixes into a base image, letting the driver cache handle the hardware, and sending out configuration-specific stuff as updates, by department. It’s fairly trivial, especially in large corporate environments where tasks are tiered off (which becomes necessary when your overall deployment numbers in the thousands, spanned across a few dozen departments).

  10. Tiberius James Hooker says:

    “Anyone who comes from a full posix modern day back ground will disagree with you.”

    Stop. Right. There.
    Interix is vanilla OpenBSD ported to the NT kernel. You don’t get much more POSIX than that. What makes it a better solution than Cygwin is that it isn’t tacked on, and it’s native and integrated. It uses the NT architecture’s ability to tap into multiple parallel subsystems.

    What you have to keep in mind is that while Cygwin is geared toward running Linux applications, Interix is geared at providing not only a POSIX environment, but one that integrates into the NT environment (there’s a difference between a POSIX environment and a Linux one, but I don’t expect you to understand that).

    It’s not for running POSIX applications (we use POSIX systems for that), and if you were paying attention, Cygwin was mentioned as a means to run ssh and cron. Your argument is as pointless as it is irrelevant.

    “Yes this way apt functions only have to run once. Since there is only 1 OS copy and its on central file server you don’t have any machines drifting behind.”

    What is it about managing configuration on a central machine and pushing out updates or whatnot to every workstation needing them, simultaneously (protip: this means at the same time), that is too complicated to understand? You waste a paragraph pretty much explaining the same process in a much more convoluted setup. I manage Linux systems too, I know how it works.

    I’m not arguing that one is “better” than the other. Only explaining that mechanisms which achieve the desired end exist across operating environments, and that with the right tools, the right knowledge and the right skill set, they’re equally trivial to manage. If it seems complicated or impossible to you, I don’t mean to condescend, but perhaps your skillset is lacking. Versatility is key.

    Though since you mention rsync, I’ll take VSS or incremental ZFS send/recv streams over it any day. It’s not that rsync is bad at what it does (it isn’t), it’s just not as robust as the other two. It’s tough to beat simply beadm’ing in a ZFS dataset exposed via the JumpStart/AI server.
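
    For anyone who hasn’t seen an incremental ZFS send/recv stream, it looks roughly like this (a sketch only; the pool, snapshot and host names are made up):

        # Snapshot the dataset, then ship only the changes since the previous snapshot.
        zfs snapshot tank/images@2012-10-22
        zfs send -i tank/images@2012-10-15 tank/images@2012-10-22 | \
            ssh deploy-host zfs receive -F tank/images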

    “Tiberius James Hooker when with WDS do you find out a image is missing a key driver for a machine.”

    Do you even understand what slipstreaming is? Go google it, or better yet, look it up on TechNet. But I’ll give you a spoiler: you slipstream the bloody driver into the image. In fact, on large-scale deployments with multiple departments you wouldn’t even have the drivers in the slipstreamed image (single departments tend to have homogenous workstations, so it’s not a problem there); you wouldn’t bother slipstreaming the drivers in at all, you’d use the generic driver cache (you know, the ones with basic functionality that are bundled in) and push the specific drivers to the requisite groups the same way you would push updates.

    Again, this is not rocket surgery. Anyone who’s spent time managing a corporate environment knows how to do this.

  11. oiaohm says:

    oldman
    –I am curious Hammie, since when does someone whose primary job as a liquidator is to facilitate the final shutdown of failed business become an expert in managing large scale IT deployments in real businesses?–

    You miss something here. My prime job is not final shutdown; it’s the salvage side of liquidations, if the business can be saved. A profit-making business in most cases will pay out more than liquidating the assets. So I have to be able to step into a company that has failed due to something like a complete IT failure and bring its systems back on-line, at least well enough to be operational and give the company life again.

    Time frames for salvage teams in liquidations are way shorter than even I like: 8 hours to bring 200+ systems on-line to a usable state, Oldman. On-line does not have to mean fully imaged out, just functional, so a thin-terminal or DRBL server gets that done while I am salvaging the company records.

    Whatever I build in those short time frames must be built to last and be cheap, since there will not be the budget for another failure.

    Oldman, if I do my job well the company lives.

    Oldman, the reality is I have to know every highly effective network-management trick in the book and be willing to use them all to achieve the objective: thin clients, DRBL, imaging in the background.

    Oldman, you have made a mistake thinking a liquidator’s whole job is to destroy. I belong to the side of liquidation that creates and repairs.

    Using Linux, I can be deploying Windows images in the background while users are interfacing with the likes of web-based POS/CRM/inventory systems.

    Oldman, the hard thing in salvage is that I cannot say I will wait until the weekend or night to start working. All IT operations in salvage have to be as transparent as possible to the other operations going on.

    Large-scale IT deployments done in a normally running business are easy compared to what I do. The general large-scale IT administrators don’t know how easy their job is.

    Oldman, here is the worst part. When a business is being liquidated, the old IT staff might have been fired, so I can have a stack of skilled but highly annoyed personnel on my hands. Not all honourable people.

    Remember, if they cause me one failure in the critical time frames, the business can end up just having its assets sold off, because it has not proven it has the means, even under new management, to repay.

    Virus strike, sabotage, mistake… anything that causes a failure could result in the business being disposed of for its asset value alone.

    Oldman, basically, running normal large-business IT is easy. Minor errors get tolerated.

    The downtime allowance over 3 months is 5 minutes per machine in work hours (yes, this is per machine, not pooled, so if 1 machine takes 6 minutes and the others take 0 you have failed) while the company is proving whether it is worth being made a going concern again. Overtime allowance: zero.

    The reality, oldman, is that people who do my kind of work are some of the most skilled IT officers you will meet.

    Not only do we have to be skilled with what we prefer to use, we have to be able to salvage data from everything.

    If you want something good to put in an IT course to give someone a workout, put in a salvage-event case: machines failed, critical files are scattered somewhere on the machines, key business operations like taking customer orders and processing existing customers’ orders have to be restored in under 8 hours, and all the installation discs are lost.

    Oldman, something you never wanted to see: what I do makes what others do in general IT management look like a walk in the park.

  12. oldman says:

    “So I can deploy images in work hours with users operating the machines that the images are being deployed on, using Linux.”

    I am curious Hammie, since when does someone whose primary job as a liquidator is to facilitate the final shutdown of failed businesses become an expert in managing large scale IT deployments in real businesses?

  13. oiaohm says:

    Tiberius James Hooker
    –SFU/SUA is not only a better solution–
    Anyone who comes from a modern, full-POSIX background will disagree with you.

    Cygwin implements newer POSIX functions than SFU/SUA does, so newer POSIX applications that fail completely on SFU/SUA can run on Cygwin. If you really do need POSIX compatibility, you are better off using a virtual machine with Linux in it than SFU/SUA.

    Remote Installation Services sucks. There is a reason why Norton Ghost and Clonezilla get used instead of it: Remote Installation Services does not multicast to reduce network usage, where Norton Ghost and Clonezilla do. Yes, for remote image deployment, don’t use Microsoft.

    Windows Deployment Services for Vista+ at long last comes up to Clonezilla.

    Even so, I almost never use Clonezilla in a Linux environment, mostly because it can drift behind (it’s only as good as the last image update) and it requires pushing out too much.

    Ever used Diskless Remote Boot in Linux (DRBL), Tiberius James Hooker? This is an important one. It means the OS is stored on a central file server, which removes having to sync all the discs in the machines.

    Yes, this way apt only has to run once. Since there is only one OS copy and it’s on the central file server, you don’t have any machines drifting behind.

    What method can you use for pushing out an image update under Linux, i.e. the machine has been imaged with an old image and you have made a new central image? Yes, you have Clonezilla, like Windows Deployment Services. But since it’s an existing image, you can use rsync to update the OS image installed on the machines to match what is on the file server. The advantage is that the machines could have network-booted while this is going on: you can change an OS that isn’t running without worrying about disaster, and you can put a new image on the file server for DRBL without disrupting machines that have already booted. With this system, two reboots always, without question, bring a locally installed system to current; one reboot does it if running DRBL fully.
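
    Roughly, that rsync step looks like this (a sketch only; the server name, image path and mount point are made up):

        # Update a workstation's installed OS to match the master image on the
        # file server.  Run from a DRBL/network-boot session so the target root
        # filesystem is not in use while it is being rewritten.
        rsync -aAXH --delete root@fileserver:/srv/images/master/ /mnt/target/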

    Yes, once you get to building deployment images with Linux, there are methods that are distribution-universal for updating those machines without having to use the package-management system. File-server access to the image files also lets you DRBL-boot the image, so you can test machines running the new image before you commit it to the local hard drive. DRBL also provides a fail-safe in case the local hard drive has failed.

    Tiberius James Hooker, when with WDS do you find out an image is missing a key driver for a machine? After you have written it to that machine’s hard drive and attempted to boot from it. A little late to find out it does not work. Linux image deployment done right, you deploy the image in the background from the same image the machine is already running off the network file server, so a missing key driver shows up before you pave over a hard drive. In fact, you can have the staff use the image for a few days before you deploy it to the local hard drive, with only a minor performance slowdown running Linux from a network file server compared to local.

    So I can deploy images in work hours with users operating the machines that the images are being deployed on, using Linux.

    Windows is too dependent on local installs.

  14. TJ Hooker wrote, “there’s no need to go “back to day one” The slipstreamed images does not magically disappear.”

    The slipstreamed image is useless when a new batch of PCs come in with different motherboard, drivers, etc. That’s when the shit hits the fan. You have to install on some random PC that “purchasing” or in the case of schools, some donor, inflicts on you. What’s on the PC could be junk, wrong OS, or a year out of date. When I last managed XP, I needed three disc images for just 40 PCs, and in my tenure we got in two or three more kinds of machine. I only needed one or two images with GNU/Linux, depending on whether or not I wanted 64bitness. I could also just save the package lists because I did have a local cache of everything.
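
    The package-list trick is roughly this, for the curious (a sketch for a Debian-family system; the file name is just an example):

        # On the reference machine, record which packages are installed...
        dpkg --get-selections > package-list.txt
        # ...then, on a fresh install pointed at the local cache, replay the list.
        dpkg --set-selections < package-list.txt
        apt-get -y dselect-upgrade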

  15. TJ Hooker wrote of WSUS, “the whole process happens unattended.”

    Nope: with just 100 PCs, about 3% failed to update and required manual intervention to avoid hours more operation without protection from the latest “zero-day” attacks. A lot of people had this problem. My system was set up by a professional with M$’s certification but it did not happen unattended.

    see https://www.google.com/search?q=wsus+failed+client+update

  16. TJ Hooker wrote, “Maybe this flies in your classroom, but holy crap, that would never fly in a business setting. How could anyone possibly thing the latter is a good idea in a corporate, production setting?”

    Ask RedHat or Novell. They are selling management of systems all over the planet.

    If you test the update on one or a few systems it will work on all of them. You can test all you want before the update cycle. You can stop the update cycle if your testing is not sufficiently advanced. It takes only seconds to change the CRON job on any number of PCs. Where I last worked I had a script called “all”. I would just type all “some command” and it was done, every PC and server updating seamlessly. A large organization can set up its own repository of well-tested stuff and update from there. Anything is possible and it works well. Debian GNU/Linux runs a fair percentage of web servers and that’s the way I would handle updates with Debian GNU/Linux.
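
    Such an “all” script can be as simple as this sketch, assuming SSH key access to every machine and a list of hostnames in a file (the names are illustrative):

        #!/bin/sh
        # all -- run the given command on every PC and server we manage
        for host in $(cat /etc/all-hosts); do
            ssh root@"$host" "$@"
        done

    Typing something like all “apt-get update && apt-get -y upgrade” then brings the lot current.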

  17. Tiberius James Hooker says:

    On another note, please tell me you test and regression-test updates before actually committing them to the whole network, and set the cron task to update from an internal repository, rather than setting the workstations to update themselves.
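
    On the Debian side, that internal-repository half is as small as pointing the workstations at a tested local mirror (a sketch; the mirror URL is made up):

        # Point a workstation at the internal, already-tested mirror instead of
        # the public archives (the URL is illustrative).
        echo "deb http://mirror.internal.example/debian stable main" > /etc/apt/sources.list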

    Maybe this flies in your classroom, but holy crap, that would never fly in a business setting. How could anyone possibly think the latter is a good idea in a corporate, production setting?

  18. Tiberius James Hooker says:

    “I have used WSUS which does that but you don’t want that to happen in the middle of the workday with attendant re-re-reboots, so one sets clients to update outside office hours but they still have to do everyone back to Day One, totally unnecessary with APT.”

    You’re also missing the point: it’s totally unnecessary with AD and GPO. You can even put it on a scheduled task, if it tickles your fancy; it’s the same principle as unattended installers for slipstreamed installation images. If you’re using the right tools and doing it right, the whole process happens unattended.

    And like I said, many times, there’s no need to go “back to day one”. The slipstreamed image does not magically disappear.

    “APT brings every package up to date in one operation with at most one or two reboots, perhaps none with ksplice.”

    So you’re saying that the Windows approach that you’ve never used (I’m talking AD/GPO/RIS/WDS, you’re talking WSUS) is terrible because of reboots, and that APT is better… Because reboots. Okay Pogs.

    Not that it even matters: run the updates when the workstations aren’t in use, no time lost to reboots. It isn’t rocket surgery.

    ” It was a pain. I have never had a GNU/Linux PC fail to execute a script.”

    I don’t doubt that in your view, your experience shows you that this is true, though I have my doubts about the claim, I’ve had freshly installed Debian systems have dpkg puke itself to death on recursive dependencies on a dist-upgrade, but to be fair, I’ve seen strange behaviour across the board.

    “I have updated a lot of XP PCs and they always ground on forever, much longer than the time taken to do an installation. Going from XP SP1 to XP SP3 took many hours.”

    Depending on hardware and network connection, I’ve seen it take between 15 minutes and an hour on a home machine. What you’re failing to acknowledge is that in enterprise land, nobody waits for service packs in order to update. There are updates and hotfixes being tested and applied as they’re needed; the process is really rather streamlined.

    “and I can schedule automatic updates as a CRON job.”

    Cron isn’t an acronym, nor is it something magical. You can use scheduled tasks on Windows, and worst case, if you insist on cron, you can use it via the SFU or SUA subsystems.

    Again, just because you don’t use something, or know it exists, doesn’t make it so.

    Again, all these systems have their own mechanisms, and it’s all the same when your job description demands being well versed in a variety of systems.

  19. TJH wrote, “you apply the updates once, make sure they work, and push them out to all of the desired workstations at once, from the Domain Controller, you don’t update from the individual workstations.”

    I have used WSUS which does that but you don’t want that to happen in the middle of the workday with attendant re-re-reboots, so one sets clients to update outside office hours but they still have to do everyone back to Day One, totally unnecessary with APT. APT brings every package up to date in one operation with at most one or two reboots, perhaps none with ksplice. With WSUS I always found a few machines randomly not taking the updates so I had to explicitly update them. It was a pain. I have never had a GNU/Linux PC fail to execute a script.

    I have updated a lot of XP PCs and they always ground on forever, much longer than the time taken to do an installation. Going from XP SP1 to XP SP3 took many hours. That’s silly. I can update a GNU/Linux system within a few minutes even if it has not updated in a long time and I can schedule automatic updates as a CRON job.

  20. Tiberius James Hooker says:

    “That’s not fair at all. I wrote about imaging further down. One can administer large numbers of PCs without AD and GPO using LDAP and SSH, just as it’s done in GNU/Linux.”

    It can be done, sure, but it’s clunky and a tad of a hodge-podge, you can rig JumpStart to service other OSes, too, but just because it can be done, doesn’t mean it’s a good idea. As for Cygwin, if you really need a POSIX environment on Windows, SFU/SUA is not only a better solution, it’s bundled.

    Also, I’m not even talking about disk imaging, I’m talking about pushing out updates from the domain controller, though I’ll touch on RIS/WDS (imaging and slipstreaming) later.

    But we’re talking large scale deployments, we’re talking business, we’re talking an environment where you already have a domain; it’s there, use it. Windows workstations already integrate into an AD domain seamlessly and out of the box; SSH and LDAP, while a functional solution, needlessly add extra complexity to the mix. But hey, if it works for you, by all means, do it. Just don’t act like it’s the only valid solution.

    Installing Windows in a VM on Linux in such a scenario though, it just baffles the mind, you’re doubling the complexity, again for no apparent gain. Use Windows where it’s appropriate, and Linux where it’s appropriate (yes yes I know, in Pogland the former is never appropriate, the latter always is).

    “Even if various tools allow one to work around it, the guy actually doing the work will wonder whether the best use of his time is to babysit a PC installing 837 updates, one after the other, instead of starting from a more recent point. ”

    You’re not understanding the concept of pushing out the updates from the Domain Controller, are you? It’s not like you have to reapply every single update from the beginning every single time. If machines are up and waiting for updates, they’re already on the domain, and they’ll receive their updates when the PDC pushes them out.

    If they’re new installations, you’d push out an image with the approved/current configuration and implied updates slipstreamed into it from the PDC via RIS/WDS. It’s overkill for your needs, sure, but again we’re talking large deployments in businesses; the domain controller is already there, use it. Just because you’re not aware that something exists, doesn’t make it so.

    ” It certainly makes bringing online a batch of PCs much more difficult than “apt-get update;apt-get upgrade”

    I’m not a fan of Debian myself (more accustomed to RHEL/OUL, though Slack holds a special place in my heart), but isn’t it “apt-get update && apt-get upgrade”? But no, it isn’t complicated at all. You just push it out from the PDC with two or three mouseclicks, or from the PowerShell CLI, if you prefer doing it that way. You don’t even need to remote into the client machines. It really is dead simple.

    I am however at a loss as to how APT is more simple than automatic updates. And I’ll repeat again, you apply the updates once, make sure they work, and push them out to all of the desired workstations at once, from the Domain Controller, you don’t update from the individual workstations.

    “Even if the big businesses have the manpower to waste on updating their updates, small businesses don’t. Most people work for small businesses.”

    You’re still operating under the assumption that every update is being manually downloaded and applied one by one on every single machine one at a time. You’re also operating under the false assumption that there are no updates between service packs. Worse still, you’re wrongly insisting that one must, without exception, begin from a clean slate, every time.

    Small businesses can get by on slipstreamed images and Workgroups, but we’re talking large scale deployments. Maybe we have drastically different definitions of what constitutes a large scale deployment – keep in mind, I have a background in multinationals, a large deployment is in the hundreds and sometimes thousands of workstations.

    “Only a few of the many schools for which I worked had proper IT support.”

    Again, one of us is trying to transpose their experience in the classroom to business, the other is applying real world experience in both SMB and enterprise.

    “None of them are going to want to wait for months to schedule IT support so they buy retail and get a hodge-podge of versions as a result making administration that much more difficult. ”

    You haven’t seen hodge-podge until you’ve seen Avaya CMS or anything branded Network Alchemy. If you have no idea what I’m talking about, consider yourself envied.

    “Managing a mess of PCs is easier with GNU/Linux. Doing so with that other OS just puts you in a deeper hole.”

    Linux is what you know, and I have no doubt that it’s substantially easier for you. I work in enterprise IT; Windows, Solaris, Linux, AIX and HP-UX are all the same to me. They all have their equivalent mechanisms, and they’re all equally brain-dead simple when you’re acquainted with them. Not one is noticeably more difficult than the others for me.

    “One is boot from network using the boot from network to change the machines own OS while user is using it. This feature is missing from Windows.”

    Hamster, no. It’s called WDS, it was previously called RIS, it’s been in Windows for a long, long time, there’s a distinct use case for it though. Re-imaging new installations, and performing OS upgrades, it’s overkill for anything else.

    ” Where you don’t end up with your main work horse machines drifting behind”

    “Main workstation”? Look, we’re talking business deployments here, not your home office. Clearly you don’t understand how updates are handled in these settings. Even if they did work the way you think they do, you’re not going to fall behind by automatically updating a machine while it isn’t in use.

  21. oiaohm says:

    Tiberius James Hooker, updating or even completely changing the distribution under Linux can be done a few ways.

    One is booting from the network and using that network boot to change the machine’s own OS while the user is using it. This feature is missing from Windows.

    Only the recent change to Windows Update has made it more like Linux updating, where you don’t end up with your main workhorse machines drifting behind due to MS’s requirement to install so many updates, then reboot, and not download any more updates until rebooted. It’s still a headache with XP.

    Yes, Windows has had Windows Update for a long time, but that is the problem: it’s been buggy for a long time.

  22. Tiberius James Hooker wrote, “In fairness, those who have in fact, administrated large deployments of Windows workstations, know about Active Directory and GPO. Update one machine, make sure it works, and push the updates to the rest of the workstations.”

    That’s not fair at all. I wrote about imaging further down. One can administer large numbers of PCs without AD and GPO using LDAP and SSH, just as it’s done in GNU/Linux. One can also install Cygwin to make the machines running M$’s OS run UNIX OS commands or one can install GNU/Linux and run that other OS in a virtual machine.

    TJH also wrote, “While this is an unfortunate decision on Microsoft’s part, it’s more of a minor inconvenience, at least for deployments where the business in question has a coherent infrastructure in place.”

    Even if various tools allow one to work around it, the guy actually doing the work will wonder whether the best use of his time is to babysit a PC installing 837 updates, one after the other, instead of starting from a more recent point. It’s a stupid system relying on a stupid registry. It certainly makes bringing online a batch of PCs much more difficult than “apt-get update;apt-get upgrade” and getting the latest packages of everything. Even if the big businesses have the manpower to waste on updating their updates, small businesses don’t. Most people work for small businesses. Only a few of the many schools for which I worked had proper IT support. None of them are going to want to wait for months to schedule IT support so they buy retail and get a hodge-podge of versions as a result making administration that much more difficult. One school where I worked had monthly IT support, six different images and two versions of M$’s server OS on a mess of servers when one image and a decent workstation could have managed the whole thing with GNU/Linux. Managing a mess of PCs is easier with GNU/Linux. Doing so with that other OS just puts you in a deeper hole.

  23. Tiberius James Hooker says:

    “For those who have not administered tons of PCs running M$’s OS, there’s this thing called a “service pack” which is a collection of updates since Day One of each release. The idea is that instead of making hundreds of updates, one can just update the whole thing from a single file and be nearly current, saving a lot of time and complexity when installing the OS”

    In fairness, those who have in fact, administrated large deployments of Windows workstations, know about Active Directory and GPO. Update one machine, make sure it works, and push the updates to the rest of the workstations.

    There isn’t really much complexity when updating a single workstation or a small workgroup; Automatic Updates handles it for you. Though of course, slipstreaming, as you suggest, works too, but it’s the longer, somewhat more painful approach, geared more toward distribution.

    I’ve also heard of people pushing updates via VSS snapshots, like we do on Solaris with ZFS send/recv streams where access to a jumpstart server or daisy chaining via the LOM isn’t possible.

    “it puts updating a system as no more difficult than if it were current.”

    You do realise that Windows has had automatic updates for about a decade now, right?

    What becomes a hassle is, as a developer, traditionally, you’d just check which service pack was installed, now they’ll have to check against a list of updates and hotfixes.

    “Anyway, if M$ won’t make any more service packs for “7″, the whole business world will be annoyed.”

    Not so much. Seven has minimal traction in the business world, and XP users are being directed toward Windows 8. A ten-year upgrade cycle really isn’t as bad as you guys make it seem.

    “Moving from XP to “7″ will get more expensive in manpower every month.”

    Not really; since we’re talking business, there’s Active Directory and GPO to trivialize the process. Do keep in mind that enterprises don’t tend to stay on pace with updates the same way home users do, and they seldom wait for service packs; updates, patches and hotfixes are always installed and tested first, then pushed to the clients at the business’s leisure. In any production environment, regardless of the platform, competent sysadmins handle updates the same way.

    The AD/GPO and JumpStart/AutomatedInstaller methods aren’t much different than maintaining our own internal package repository for updating the Centos and Oracle Linux deployments (no more RHEL, the changes to the terms of the support contract make such shenanigans unfeasible, voiding the support contract and revoking access to updates – also not a fan of having to pay Red Hat for CentOS and Oracle installations – this might have changed since the changes went live, but it doesn’t really matter).

    With all due respect, your experience in the classroom really isn’t comparable to IT in business. The infrastructure isn’t the same, the business-class convenience options aren’t available to you, and your methods (such as slipstreaming images) don’t scale to deployments in the thousands, or where different configurations are needed for different departments.

    While this is an unfortunate decision on Microsoft’s part, it’s more of a minor inconvenience, at least for deployments where the business in question has a coherent infrastructure in place.

  24. dougman says:

    Re: Imagine, M$’s loyal developers having to pay for SDKs, libraries etc., and consumers having to pay for updates or permission to connect any device.

    I am surprised they have not started to do this already.

  25. kozmcrae wrote, “there will still be trolls pulling for Microsoft but they too will be irrelevant”.

    As soon as everyone realizes that M$ is on a downward slope/cliff, there will be no use for trolls and M$ will cut them off the payroll/freebie conveyor. M$ will sooner or later begin to tax all the freebies that the trolls love. No doubt the trolls will find it “worthwhile” to pay more but the rest of us can exult in our freedom. Imagine, M$’s loyal developers having to pay for SDKs, libraries etc., and consumers having to pay for updates or permission to connect any device. On the downward slope, M$ will tax everything it can to maintain the cash-flow as long as possible. Some of these folks are so locked in M$ can triple the charges and they will still find it cheaper to pay for a few more steps on the treadmill instead of switching. The last licence will likely cost $hundreds of millions.

  26. kozmcrae says:

    Robert Pogson wrote:

    “I don’t think it is likely that M$ will die quickly enough to endanger their employment.”

    Microsoft does not have to die. Look at where they are today compared to five years ago. No troll back then would have believed a description of Microsoft’s present position in the market. Five years from now Microsoft will still be around and will still be a powerful competitor. But they will not be relevant. They could almost disappear and no one would notice. And there will still be trolls pulling for Microsoft but they too will be irrelevant.

  27. dougman says:

    Hmmmm..

    Windows 7 – Build 7601: Service Pack 1
    Released February 22, 2011

    This means every update to Windows 7 since SP1 in February 2011 will need to be applied individually. YUCK!

    Here is a spreadsheet with all the vulnerabilities that would have to be patched, on all new Windows 7 computers.

    https://docs.google.com/spreadsheet/ccc?key=0AiOAHPPgX5xZdHBIZTU4Z2pYcEJRbFlTSEg0V3NzV0E

    Service packs are a pain for Microsoft, because they divert engineers’ time and budget from building new versions of Windows. In this case, the anticipation for Windows 7’s SP2 comes around the same time as the launch of Windows 8, out later this week. Also, by ending SPs, Microsoft could be pushing customers towards the completely new Windows 8.

    SP1 saw users take to the forums to complain that the service pack was causing machines to boot with fatal errors, was deleting restore points before installing and had unleashed a reboot looping glitch. Microsoft said it was unable to pinpoint the cause of the problem.

    Reading those two paragraphs does not leave one with a sense of confidence, does it?

  28. kozmcrae wrote, “It tells you your days as a Microsoft troll are numbered.”

    I don’t think it is likely that M$ will die quickly enough to endanger their employment.

  29. kozmcrae says:

    The people who love Microsoft will either have to love it more or start to hate it. At some point there will be this small cadre of Microsoft worshipers left. Fanatics will be a better term. Their world will be the days of 1995. You can see the beginnings of it here on this blog. Their posts are becoming more vociferous by the month. It’s a sign of desperation. You hear that MK/TEG? That’s you.

    They are desperate to crush every positive word supporting FLOSS. It’s obvious. Hidden though it may be, their agenda is clear. They come here to stem the flow of good news supporting FLOSS, but they can’t; there’s too much of it. That should be a warning sign to the Microsoft trolls. Too much good news for FLOSS, too much bad news for Microsoft. What does that tell you? It tells you your days as a Microsoft troll are numbered.
