M$’s Dominance? Nope.

Certainly, M$ uses Trojan Horses to embed itself in IT, but M$ can no longer be “dominant”. “The world where Microsoft has a monopoly or pseudo-monopoly on any platform or technology has all but disappeared. The new reality is a multi-device, multi-platform world. Any attempt to paint customers into a corner and lock them into a specific platform or device is essentially suicidal… By freeing customers to use Microsoft tools on other platforms and devices, though, Microsoft will continue to be a dominant force — even on rival platforms like Android and iOS.”
See Microsoft's Trojan horse strategy to rule the world.
Virtually no one takes orders from M$ any longer. M$ has to work for a living like the peasants it used to own. Owning the peasants and their desktops is a thing of the past. It died with the Wintel monopoly. Once people could access IT by other means, they fled in droves. All M$ can do is take some share of IT from now on. M$ doesn’t own the server, the network, the PC, or the consumer these days. A decade ago it did.

Look how hard it was for M$ to compete against its own product out there in the installed base of legacy PCs. It took several years just to make a dent in XP’s dominance. “7” mostly replaced XP but it barely exceeded half the share of IT that XP had. “8” is even worse, much less than half again. M$ depending on applications and services rather than the OS as the chief lock-in/cash-cow will make it easier for competition to thrive. Meanwhile, GNU/Linux has grown its share by 50% in the last year on the desktop/big screen (not including Chrome OS GNU/Linux at 0.35% and “unknown” Android/Linux at 0.95%). Combining those, we see “Vista” has been overtaken only 8 years after its release, even with a bunch of salesmen and lots of retail shelf-space. 8.* will be lucky to survive five years after release. Welcome to the 21st century.

Now, look at application/cloud lock-in. M$ is reported to have only 10% share of the cloud despite good growth. People have many more choices, know it, and take those other choices, leaving M$ only a sliver of the pie. M$’s biggest application is their office suite. LibreOffice and OpenOffice.org have taken more than 100 million customers away from M$ and have good growth. M$ is aiming to become platform-independent in applications and services. Expect a GNU/Linux version of their office suite this year. Expect other ISVs to follow suit. The browser? M$ has only about 20% share, just a hair more than Firefox, which crawled out of the crater M$ made of Netscape back in the day. That’s not dominance. That’s subsistence.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology.

98 Responses to M$’s Dominance? Nope.

  1. kurkosdr says:

    UAC is not much different to the old sudo.

    And since most Desktop Linux distros still support the old sudo, third-parties are free to abuse it like they do with UAC.

  2. dougman says:

    Windows 10 Upgrades Will Be Free—Even For Pirated Copies

    Microsoft explained that “[a]nyone with a qualified device can upgrade to Windows 10, including those with pirated copies of Windows. We believe customers over time will realize the value of properly licensing Windows and we will make it easy for them to move to legitimate copies.”

    Over time? LOL… if someone is willing to “pirate” Win-Dohs, then why would they pay now or later?

    http://gizmodo.com/all-windows-10-upgrades-will-be-free-even-if-your-copy-1692096375

  3. oiaohm says:

    kurkosdr
    Win users can’t get elevated privs if they are not elevated users. But if the drivers of software you paid for requires it, you’ll log in as an elevated user.
    Incorrect. UAC will do a sudo-style prompt and ask for the password of a different user with elevated rights if you are using an account without them.

    UAC is not much different to the old sudo.

    The thing about cgroups under Linux is that they provide the means to lie to applications: they believe they have privilege when they don’t.

    Yes, the third option is providing a sandbox for the user to test applications in and, if they turn out to be bad, to be able to dispose of them safely.

  4. kurkosdr says:

    only to certified apps = only to certified drivers

  5. kurkosdr says:

    Users can’t misuse sudo if they are not in /etc/sudoers. Problem solved. Next straw man…
    Win users can’t get elevated privs if they are not elevated users. But if the drivers of software you paid for requires it, you’ll log in as an elevated user.

    The only solution is to not provide any official way to elevate, thus commercially dooming any software that requires it (Android). Or granting the right to elevation only to certified apps (what WHQL should have been about).

  6. kurkosdr wrote, “be tempted to think third-parties wouldn’t abuse sudo the same way they abuse UAC.”

    Users can’t misuse sudo if they are not in /etc/sudoers. Problem solved. Next straw man…
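    The point is simple enough to sketch as a toy model in Python (the user names and the policy set below are made up; real sudo policy lives in /etc/sudoers and sudoers.d and is far richer than a list of names):

```python
def can_elevate(user: str, sudoers: set) -> bool:
    """Toy model: sudo only works for users the policy already lists."""
    return user in sudoers

# Hypothetical policy: only alice has been granted sudo rights.
policy = {"alice"}

print(can_elevate("alice", policy))    # True
print(can_elevate("mallory", policy))  # False: nothing to abuse
```

    A third party's installer cannot "abuse sudo" on an account that was never granted elevation in the first place, which is the whole argument here.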

  7. dougman says:

    LOL…Kuku you are a dope! Smoking crack again I see.

    One does not have to enter a password on Windows to allow a program to install or change settings. There is no real equivalent of UAC on Linux, because UAC is for people who run as administrator (or “root” on Linux) all the time and accidentally run something that could potentially change or damage the system. *BING*

    Most people click Allow on UAC regardless of whether it’s a file installer, virus, malware, etc. It’s just a false sense of security. Today you have UAC-style popups that trick people into “allowing” the installation of malware, or malware that bypasses UAC altogether and turns it off without the user even knowing what happened.

    People get annoyed by UAC, so they disable it, which goes back to the problems faced in XP.

    http://www.computerworld.com/article/2509949/security0/malware-turns-off-windows–uac–warns-microsoft.html

  8. oiaohm says:

    kurkosdr, first, remember sudo is old. Sudo is the Linux equivalent of Run as Administrator.

    http://en.wikipedia.org/wiki/Polkit is the Linux equivalent of UAC. Polkit is many times more advanced than UAC. Android’s permission groups for applications are also more like Polkit than UAC.

    kurkosdr, the reality here is that a lot of Linux developers have looked at Windows and looked at Android. The result: stick closed-source applications in a sandbox whose access to the outside world we can control.

    If Desktop Linux had a large enough market to receive “offers” about genuine cartridges, accessories, horrible smartphone-syncing software, toolbars, third-party unity lenses (or whatever is the equivalent to other UI shells) and the like, third-parties would abuse sudo to get it through during installation.
    I really would not go saying this too much. There are non-Windows OSes that have had a lot of that crap, and they had less market share than Linux. Think Amiga.

    Abusing sudo from inside a sandbox will not get you far. Docker on servers is also about dealing with bad applications that use sudo to make major system-wide modifications, breaking everything else.

    Pretty much: forget sudo, we need sandboxes.

  9. kurkosdr says:

    BTW, who buys genuine cartridges for “value” printers anyway? Did those guys fail their math classes[1] or something? “Oh yeah, I will totally buy a cartridge that costs 2 times more and has half the ink inside, in order to protect the warranty on this 45€ printer”

    [1] Which of course is passing with a C+ in today’s “no child left behind even if he doesn’t do homework that was explained by the teacher” schools.
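    The math being alluded to is easy to spell out. The prices and volumes below are made-up illustrations (roughly matching "costs 2 times more and has half the ink"), not real figures:

```python
# Hypothetical cartridges: "genuine" costs twice as much for half the ink.
genuine_price, genuine_ml = 30.0, 5.0
compatible_price, compatible_ml = 15.0, 10.0

genuine_per_ml = genuine_price / genuine_ml            # 6.0 per ml
compatible_per_ml = compatible_price / compatible_ml   # 1.5 per ml

print(genuine_per_ml / compatible_per_ml)  # 4.0: four times the cost per ml
```

    Double the price for half the volume works out to four times the cost per millilitre of ink.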

  10. kurkosdr says:

    My point, doug is: Don’t, for one uneducated second, be tempted to think third-parties wouldn’t abuse sudo the same way they abuse UAC. If Desktop Linux had a large enough market to receive “offers” about genuine cartridges, accessories, horrible smartphone-syncing software, toolbars, third-party unity lenses (or whatever is the equivalent to other UI shells) and the like, third-parties would abuse sudo to get it through during installation.

    Just look at Android. Whatever loopholes exist (notifications and autolaunch) are exploited to the maximum by third-parties.

  11. kurkosdr says:

    The problem being, why is Win-Dohs in such a sorry state that requires a reinstall in the first place?

    Because of the horrible culture that exists among third parties (ISVs and IHVs), who still assume they have free rein over the system, as if it’s still Windows XP, where customising your system to the point of uselessness was a thing.

    The UAC prompt pops up, the user clicks OK to be able to use the hardware or software he/she paid for, and then the installer has the system at its mercy. Good ol’ stupid third-party mentality.

    I’ve seen it all: Sony USB drives that employ similar tactics to the Sony rootkit. I plugged one in, and an annoying tray icon showed up in the XP notification area; no confirmation or anything, it just installed itself. But okay, this problem is fixed now (no autorun without confirmation). Here are some more: drivers that install 3 separate auto-launch processes (Samsung Kies), software that permanently runs a licensing service in the background (Pinnacle Studio) to protect itself from pirates (who however use the software just fine, I can send you the unlocker), and printers that create their own virtual ports instead of the standard USB ones (HP) and install some more auto-launch stuff. Oh, and bundled crapbars that install on IE and never uninstall, because IE has horrible BHO management. Windows Explorer extensions.

    Since uninstallers don’t always clean the crud, the system gets hobbled by startup processes and other junk, and since most users don’t know how to ferret the junk out, they format.

    Microsoft is not doing anything to stop this. Like, “if your app was compiled after a certain date, you have to ask for a special permission to mess around with the system”. Neither does Desktop Linux, although it doesn’t have the problem because it has barebones third-party support.

    Only Android has solved the problem, by using some simple tricks. 1) No messing around with the system. 2) Everything gets installed by the package manager and uninstalled by the package manager; nothing the user downloads can be made executable (without root), so no executable installers, no arbitrary scripts, no buggy uninstallers; apps must be packaged. 3) Apps can only mess with their own app folders, not with the app folders of other apps.
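    Point 3 can be sketched as a simple path check. This is a toy model in Python, not Android's actual enforcement code; the /data/data layout is the conventional per-app data directory, simplified:

```python
from pathlib import PurePosixPath

def may_write(app_id: str, target: str) -> bool:
    """Toy model of per-app isolation: an app may only write inside
    its own data directory, never inside another app's folder."""
    base = PurePosixPath("/data/data") / app_id
    path = PurePosixPath(target)
    return base == path or base in path.parents

print(may_write("com.example.app", "/data/data/com.example.app/cache/tmp"))  # True
print(may_write("com.example.app", "/data/data/other.app/secrets"))          # False
```

    In real Android the same boundary is enforced with per-app Unix user IDs and filesystem permissions, which is why one app's crud cannot hobble another.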

    Which is why Android doesn’t “rot” (I am referring to unrooted Android, obviously).

  12. dougman says:

    http://www.pcworld.com/article/2897595/windows-10-does-away-with-the-reinstallation-headaches.html

    Actually this does not solve the problem; it just fixes a symptom of the problem. The problem being: why is Win-Dohs in such a sorry state that it requires a reinstall in the first place?

  13. dougman says:

    Re: Windows updates with peer-to-peer distribution, nothing bad would happen.

    The updates will be digitally signed; however, “digitally signed software” certificates can be faked.

    http://www.cnet.com/news/flame-virus-can-hijack-pcs-by-spoofing-windows-update/

    “Microsoft and Symantec revealed yesterday that the virus can up the ante by using the fake certificates to spoof Microsoft’s own Windows Update service. As such, Windows PCs could receive an update that claims to be from Microsoft but is in fact a launcher for the malware.”

    So the point being, this is another stupid feature that will cause more problems than it solves.

    http://blogs.wsj.com/cio/2015/03/16/complex-legacy-code-creates-security-headaches-for-microsoft-users/

    See M$ once again fails to understand that Windows isn’t a feature – It’s a liability.

  14. oiaohm says:

    Nothing, my clueless friend. Because hashes.
    kurkosdr, have you not heard of hash collisions?

    Debian has offered p2p apt for years. Look up apt-p2p. It’s not the default and likely never will be.

    A lot of the things Microsoft talks about implementing, the Linux world has already tried and had major issues with.

    Anyone using Debian who has used http://http.debian.net/ has experienced the horrible event of downloading broken files from a broken mirror somewhere.

    kurkosdr, basically, when the Linux world starts laughing at you for doing something, sometimes they have a very good reason: they have already tried it and experienced hell.

    So if Microsoft is implementing P2P, we want a lot of details; historic examples say the results can be extremely horrible.
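    Both sides of this exchange can be made concrete. The distributor publishes a digest over a trusted channel and clients reject any peer-supplied copy that does not match; with a modern hash such as SHA-256, finding a colliding payload is computationally impractical (unlike the MD5 weaknesses Flame exploited). A minimal sketch in Python, with a made-up payload:

```python
import hashlib

def verify(payload: bytes, expected_sha256: str) -> bool:
    """Accept a peer-supplied file only if it matches the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

update = b"update-payload-v1"
published = hashlib.sha256(update).hexdigest()  # fetched over a trusted channel

print(verify(update, published))         # True: intact copy accepted
print(verify(b"corrupted!", published))  # False: broken-mirror copy rejected
```

    Note this only protects the payload; the trust problem moves to how the digest (or the signing certificate) is distributed, which is exactly where Flame attacked.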

    kurkosdr
    Desktop Linux = FOSS apps.
    For me that is not true; I have a few closed-source desktop programs on Linux. These are native applications.

    Needs don’t go away Pog, but with Desktop Linux you have much worse back-compat for the proprietary software that covers those needs.
    For applications in Steam this is not true on Linux. For applications based on LSB standards this is also not true.

    Sorry, back-compatibility on Linux depends on how the application is designed. There are Windows applications that have the equal problem of not running unless the Windows version is exactly right. Those are the minority. People making applications for Linux who don’t do it right need to stop being allowed to use the excuse that “it was Linux, so it can break between versions”.

    kurkosdr, remember that application developers under Windows can decide to code their application without using SxS, dump DLLs straight into system32, and have everything go wrong. The only reason Windows works as well as it does is application developers’ good behavior. Remember, LSB packages are allowed to use libraries different from those the distribution provides. Valve games have good behavior as well. The GNOME sandbox option is about providing a method to force good behavior on badly behaving applications made by bad developers.

  15. LinuxGentlesir says:

    kurkosdr,

    In a world driven by greed and self-interest there is something inspiring inherent in the sharing and openness that GNU/Linux symbolizes. Happiness is the reason to live. GNU/Linux brings me genuine happiness!

  16. kurkosdr says:

    M$ to speed up Windows updates with peer-to-peer distribution…LOL, what could go wrong?

    Nothing, my clueless friend. Because hashes.

    It’s more of a moral question. With some American ISPs starting to cap connections (with much higher limits than mobile, but still capped), is it morally correct to use one client’s bandwidth to serve other clients better, and to make it a default option?

    Unless there is a warning and clear option to turn it off when you start the system.

    You use Windows Compatibility Mode. Works like a dream for practically any XP program out there. Will also work, with a bit of luck, with W98 and W95 programs.

    Don’t give Pog what he wants, aka the ability to whine about the back-compat of Windows, when at the same time he tries to mask the fact that his beloved Desktop Linux offers much worse back-compat by claiming everyone should use FOSS with Desktop Linux.

    I never got this false dichotomy of FOSSies:

    Windows = blend of proprietary and FOSS apps
    Desktop Linux = FOSS apps.

    As if, in the case FOSS apps don’t cover my needs in Windows, and for that reason I have to use paid proprietary apps, moving to Desktop Linux will magically make those needs go away.

    Needs don’t go away Pog, but with Desktop Linux you have much worse back-compat for the proprietary software that covers those needs.

    Unless those needs go away the Soviet Bloc way, aka if you want anything from outside the walled “garden”, it will hurt a bit. Choose from the repositories.

  17. dougman says:

    M$ to speed up Windows updates with peer-to-peer distribution…LOL, what could go wrong?

    http://www.pcworld.com/article/2897275/microsoft-may-speed-up-windows-updates-with-peer-to-peer-distribution.html

  18. DrLoser says:

    You have 5 applications that everyone on the payroll uses: 2 are XP-only, 1 won’t work on 8.1. What the Hell are you going to do?

    Well, that’s a distinctly unlikely scenario. Presumably your IT expert used to moonlight in schools in Northern Manitoba, pretending to be a teacher or some such thing. Otherwise you would presumably have invested a modicum of thought over the last ten years as to how you are going to keep your IT infrastructure going.

    But, hey, I live to serve. Here’s the very simple answer, Robert:

    You use Windows Compatibility Mode. Works like a dream for practically any XP program out there. Will also work, with a bit of luck, with W98 and W95 programs.

    That, of course, is the “sub-optimal” solution to this non-existent problem. The optimal solution would be to bork every PC in sight with Debian Wheezy (taking care, of course, to disable systemd) and run all your mission-critical applications via Wine.

    I’m told this works most excellently.

  19. kurkosdr says:

    In the world of software, guarantees cost you $millions.

    Really? Are you trying to get away with semantics now? A written guarantee costs millions, sure. Microsoft makes a practical guarantee, and we users like it that way. Windows 7 runs every modern app in existence, and so does Vista. That’s an excellent track record serving as a practical guarantee to you. If you bought a Windows 7 laptop in 2009 (that’s 6 years ago), or even a Vista laptop, you have access to a wide variety of proprietary and FOSS software (latest versions!).

    I will give you an example, and I will try to use plain English. My mom’s laptop was bought in early 2010, and of course runs Windows 7. Last week, I opened it, went to the MPC-HC site and grabbed the latest, most feature-full, most stable, and most awesome version of MPC-HC yet, and installed it. Just. Like. That. No need to care if the latest version is in “repositories”, no need to mess with third-party PPAs or zeroinstall, no nothing. Grab the official binary, run it. And no need to risk a driver-breaking upgrade (although Windows has much better driver back-compat than Linux). There is no coercion to upgrade, and support will outlast the laptop’s life for us, the 99%.

    Can a Distro from 2010 do that?

    It’s not a “walled garden”. It’s a pleasant valley filled with all the food, water and shelter anyone could want.

    But importing stuff from abroad (proprietary world), in case someone finds those superior, is a bit painful, right? Where have I heard this before?

    You have 5 applications that everyone on the payroll uses: 2 are XP-only, 1 won’t work on 8.1. What the Hell are you going to do?

    First of all, Windows XP was supported all the way to 2014, despite the fact they stopped selling it to the mainstream in 2006 (save for crapbooks). So, that’s 8 years of not being coerced to upgrade, that’s a lot. And the vast majority of Windows XP apps (save for games) run on Windows 7 and Windows 8.x.

    What is your alternative, really? If you designed a new OS, what would your approach be? Telling people not to use proprietary software with your OS? So, the corporation that designs (say) a very expensive clothes-design application, or the corporation that designs tax-refund software for a small country (say Greece or Malta), should just open their source and hand it over to the reposit… bahaha! Not gonna happen. They will just pretend your OS doesn’t exist. And support Microsoft Windows, which allows old versions to run on new OSes, and has good back-compat (guaranteed for apps since 2007; most apps since 2001 also run). It’s not perfect, but you know, it works in the real world. Where proprietary code may have millions in R&D invested in it and is a trade secret.

    I know, I am harsh on you. I am trying to make you understand that your needs (all-FOSS drivers and apps) are not my needs. And given that proprietary software companies still exist and proprietary drivers still exist, your needs are not the world’s needs.

    Stop recommending solutions unless you take our needs into consideration (the need to run proprietary drivers and apps, and not to be forced to upgrade).

  20. kurkosdr wrote, “With Desktop Linux, there is NO guarantee the latest version will be in the official repo”.

    In the world of software, guarantees cost you $millions. Debian’s repo has ~43K packages, usually a few for every kind of computing most people do. It’s not a “walled garden”. It’s a pleasant valley filled with all the food, water and shelter anyone could want. I suppose if Debian had any more, you’d complain of too much choice. There’s just no pleasing some people. Consider the alternative: you have 100 applications not supplied by M$. Most are only used by a few specialists. You have 5 applications that everyone on the payroll uses: 2 are XP-only, 1 won’t work on 8.1. What the Hell are you going to do? Isn’t Debian’s repository looking beautiful now? In Debian, if you do need to change an app, it’s usually a couple of commands and a memo pointing to the new application. No need to talk to salesmen, make a budget-proposal, wait for it, wait for something else, install the application, and then find it doesn’t work for you… Debian’s pretty darned cool if you ask me.

  21. kurkosdr says:

    That is only for the few who use applications outside the repositories of major distros. If you just use stuff from Debian’s repository, you are in Heaven.

    It’s not a walled garden, it’s not a walled garden, it’s not a…

    That’s the reason I hate Desktop Linux and instead tolerate Windows: “Yes, you can have proprietary drivers and proprietary apps, but it will sting a bit.”

    And no, it’s NOT like an app store. In the app store, a developer makes the software update available immediately. With Desktop Linux, there is NO guarantee the latest version will be in the official repo:

    http://www.omgubuntu.co.uk/2015/03/new-features-vlc-2-2-ubuntu-ppa#comment-1883699725

    And the default repositories hate proprietary software anyway.

    —-
    Now, don’t get me wrong, if you do your IT with all-FOSS drivers and apps, Desktop Linux is paradise for you, and you are probably wondering why most users are not in on it too, and you are probably watching statcounter every day, but most people use a mix of FOSS and proprietary in their daily lives. The real world suxxors.

  22. oiaohm wrote, “Linux update system has dependency hell”.

    That is only for the few who use applications outside the repositories of major distros. If you just use stuff from Debian’s repository, you are in Heaven. This is one of several major contributions of GNU/Linux to IT, freedom from Dependency Hell. OTOH, there’s no escaping M$’s weird system requiring each user everywhere to update individual drivers and applications by separate processes from updating the OS. This exposes one to ISV Hell, where each application may have a different means of update ranging from patching binaries to re-installation. That Other OS ensures a risky and complicated and fragile IT structure. I’ll take GNU/Linux any day.
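    Freedom from Dependency Hell comes from the package manager resolving dependencies for you. A toy resolver in Python (the package names and dependency graph are made up) shows the idea: dependencies get installed before the things that need them, each exactly once:

```python
def install_order(pkg, deps, done=None):
    """Toy dependency resolver: depth-first, dependencies before
    dependents, each package listed once (no cycle handling)."""
    if done is None:
        done = []
    for d in deps.get(pkg, []):
        install_order(d, deps, done)
    if pkg not in done:
        done.append(pkg)
    return done

# Hypothetical dependency graph.
deps = {"imageapp": ["libgtk", "libpng"], "libgtk": ["libpng"]}
print(install_order("imageapp", deps))  # ['libpng', 'libgtk', 'imageapp']
```

    Real tools like apt add version constraints, conflicts, and cycle handling on top, but the principle is the same: one command pulls the whole tree in the right order.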

    I do use a few applications outside Debian’s repository because it takes years for Debian to fix all the dependencies in ~40K packages, but I only use simple tiny packages or very widely used and debugged packages outside the repository. Most of them have builds for Debian GNU/Linux, or I can build the source code myself. Occasionally I have trouble finding some dependent package needed but not included in Debian. Rarely has that been a show-stopper. Folks with a huge raft of applications outside repositories do have a problem, but a large slice of users don’t. Munich saved money by going to GNU/Linux. It wasn’t an insurmountable problem for them. For them, Hell was dependence on That Other OS, not GNU/Linux dependency.

  23. oiaohm says:

    kurkosdr, yep, the Linux update system has dependency hell. Windows Update, including when using WSUS, has the magical “I forgot or somehow deleted an update”.

  24. kurkosdr says:

    CAUTION:

    To the one person who thought about following my Windows Update ritual.

    Do NOT do a Windows Update cleanup using Disk Cleanup.

    It will yank out two useful updates and you will need to re-install them:

    http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_update/kb3032359-kb3021952-keep-re-presenting-for/6915d9ec-7be4-402b-9cb2-4772e1fce905?page=1

    Oh Windows Update, what else do you have for me?

    #just_windows_things

  25. dougman says:

    Bloomberg and Toyota sell closed-source software, and they use this to push Linux?

    LOL….

  26. oldfart says:

    “The M$ trolls will run away and scurry upon seeing this!”

    Why do you think we are running anywhere? Most if not all of these businesses use and sell closed-source proprietary software as well as commercialized open source. There is nothing wrong with open source, especially when you can use it to sell your closed-source extensions and packages.

  27. dougman says:

    When one recommends an open-source Linux OS and software, always ask the people/business whether they know the two-hundred-plus hardware/software manufacturers that develop and support Linux.

    http://www.linuxfoundation.org/about/members

    The M$ trolls will run away and scurry upon seeing this!

  28. dougman says:

    Meanwhile, M$ gets one million comments on “10” (really “9”, but let’s not discuss the reasoning for the jump; as in, no one liked “8”, so let’s skip a number so people forget about it)… heh heh.

    http://www.digitaljournal.com/technology/microsoft-has-received-1-million-bits-of-feedback-on-windows-10/article/428357

    I am willing to bet that this was just a marketing ploy and that M$ will not use a single comment and simply ignore it. Don’t believe me?

    http://www.pcgamer.com/phil-spencer-interview-we-ignored-what-was-going-on-with-windows-to-launch-xbox/

    Straight from M$ mouth: “We ignored what was going on with Windows”

  29. oiaohm says:

    kurkosdr, zeroinstall is in every major Linux distribution.
    This is one of the main things that drives me away from Desktop Linux btw. Even if a solution exists, it either exists only in some obscure distro or it’s something you have to find out yourself.

    In the majority of cases it is not some obscure distribution. But I will grant you that it is missing from the default install, and the integration at times is lacking. For example, installing zeroinstall does not automatically add it to the app-store interface.

    Instead, Windows is Windows for every user. WinSxS, despite it’s many flaws, is there for every Windows user (okay, maybe except for the 7-8% using WinXP and using Windows mostly to launch Chrome)

    The problem here is that WinSxS is the cause of a lot of security issues. Those issues are why the idea keeps on not flying with Linux distributions. The policy of “better to break a binary than let it run insecure” has been distribution policy for a very long time.

    kurkosdr, you also need to look at systemd and Docker. Server-side, having many different Linux distributions running on one system is being done. The hold-up is kdbus and Wayland/Mir for graphical applications. kdbus is in fact a highly critical little bit for cgroup-to-cgroup communication without sharing very much information. PulseAudio and JACK can already use the dbus protocol to open connections between servers. So audio is go.

    https://plus.google.com/+LennartPoetteringTheOneAndOnly/posts/SkKuBF1XaNF

    Basically, we are at long last seeing solutions that can get past distribution security teams while also reducing the differences between distributions for application sourcing.

    AMD is working on a unified kernel-mode driver shared between all its closed-source and open-source graphics libraries; Intel already has one. Nvidia is not on this path yet. So for AMD and Intel, a container having a different graphics version from the host is not a problem.

    Nvidia is going to be an ongoing problem. A lot of people complain about having to reboot to update graphics on Linux; this is an AMD and Nvidia problem. Intel made their kernel mode mostly a proxy.

    BTW, I find the “no-reboot patching” of Desktop Linux to be interesting, to say the least.

    But, let’s see how it will work in practice.

    The problem is, from ksplice history we know it works in practice. Just like zeroinstall, the integration is still horrible. Now that the feature is going mainline, maybe better management tools will appear.

    kurkosdr, most of your complaints should not be that Linux cannot do it; it’s more that the integration is horrible.

  30. DrLoser says:

    That’s nonsense. The price of a licence is not of much concern especially for profitable web-sites.

    And my argument was that: “the price of a licence is not of much concern especially for profitable web-sites.”

    The perceived (and probably actual) benefit of Apache is that it is the established market leader.

    Do try to read what your commenters say before opening your gob, Robert.

  31. DrLoser says:

    Finale alternate? How about pen and paper!

    Well, pen and paper would certainly beat out the wretched alternative that FLOSS compositional software has on offer, Dougie. For what that’s worth. Although it’s not worth much unless you insist at least on a goose quill (modern pens post-date most 19th century compositions, at the very least) and quite possibly the use of parchment rather than paper. Oh, and don’t forget to mix your own ink. Expensive stuff, ink.

    Now, your day job as a Snake Oil salesman. Do you rely on a pharmaceutical supply from 21st century herpetological production lines, or do you prefer to collect your own snakes in a large vivarium conveniently placed next to the toxic creek in your back yard, wake up at 5 am every morning, and squeeze snake venom into a test tube?

    The process is a little more complicated than that, and naturally you would need the equivalent of a High School Diploma before you were allowed near the bit that involves “titration,” but I see no reason why you shouldn’t force yourself to abjure all the modern privileges of your everyday job, Dougie.

    You are never really free until you have sucked on the teat of a poisonous snake, are you?

  32. DrLoser, trying to kill me with laughter before April 1, wrote, of Apache, “I think it’s preferred because it’s the in-place market leader. It’s not especially good at what it does”

    That’s nonsense. The price of a licence is not of much concern, especially for profitable web-sites. Throughput/reliability certainly do matter. The more efficient the software, the less hardware a web-server needs to do the job. There’s major bucks there. While an “el cheapo” server may only cost a few dollars, the big guys spend tens of $thousands on theirs. They do care about the performance of their software. They don’t care so much what the other guys are using. I.e., a few years back that other OS and IIS were huge on the web. It all went away, not due to the popularity of that other OS and IIS but the crappiness of it. In 1995, NCSA and “other” were far ahead of both M$ and Apache. In October 2007, M$ had 38% of all active sites while Apache had 46%. Today, Apache has 51% and M$ has 11%. Popularity had nothing to do with it. There was never a shortage of folks who were familiar with M$’s software. M$ paid people to use IIS on parked sites. When they stopped doing that, we saw what happens.

  33. DrLoser says:

    There is a huge flaw in this argument. It’s just not true that all non-FREE software is better than all Free Software.

    Why is that a huge flaw in any argument whatsoever, Robert?

    It’s just not true, for example, that all Canadian metal ore mines are better than all metal ore mines in (say) the Congo. In fact, on most measures, very few of them are. (I’m deliberately using a preposterous comparison here, because FLOSS doesn’t even come close to the level of metal ore mines in the Congo.)

    But nobody picks a metal ore mine on the basis that it “belongs” to one country or the other. People pick metal ore mines because they satisfy a particular need for metal ore.

    And, back to reality. FLOSS software, with very few exceptions, satisfies no need whatsoever except for the cheapest possible implementation in a particular field, no matter how inconvenient and/or broken that implementation might be.

    I don’t see that argument as a flaw, Robert. I see it as a blatantly obvious fact.

  34. DrLoser says:

    Apache and now Nginx are preferred because they are free, not because they are better.

    Two separate cases, I think. I’m not even sure that Apache is preferred because it’s free (in either sense), or even because it’s “cheaper.” I think it’s preferred because it’s the in-place market leader. It’s not especially good at what it does, but it’s not especially bad, and there’s twenty years of experience behind it that makes it easy to find SysAdmins who can handle it with ease.

    I don’t even consider Nginx to be an Apache competitor, as such (and here I freely admit amateur status). It’s a proxy server with some very nice routing capabilities. You could use it as an Apache replacement, but quite honestly it looks best used as a front-end to a dedicated bit of HTTP server kit. It’s particularly good at dividing static web content from dynamic web content …

    … not that anybody uses static web content these days. Which is a shame. I think there’s still a market for it, at least amongst consumers.

  35. oldfart says:

    correction:

    Apache and now Nginx are preferred because they are free, not because they are better.

  36. oldfart says:

    “It’s just not true that all non-FREE software is better than all Free Software.”

    Nor is it true that what is preferred is what is better. Apache and now Nginx are preferred because they are free, but because they are better. And Firefox (and Thunderbird) are run by more people on Windows than on Linux.

  37. oldfart says:

    “If more PCs ship with GNU/Linux, there will be more and better applications made for it.”

    And if they come, and they accommodate computer users like myself, perhaps I will look. As it stands now, they are simply not there.

  38. oldfart wrote, “Your Terms of freedom leave me working for my computer and for some developer as I help him improve the application that I would have to use so that I could get back in the “free world” to what I now have in my so called “un-free” world.”

    There is a huge flaw in this argument. It’s just not true that all non-FREE software is better than all Free Software. That’s why many folks prefer Apache to IIS, or FireFox to IE. It may be true that non-FREE software reaches acceptable performance sooner in its life-cycle because more manpower can be applied in a profitable business, but profitable businesses do support FREE Software, so there’s just no such hard and fast rule. I fell in love with GNU/Linux long before I chose any application for my work. I went the other way around, choosing the best OS and then the applications for my chosen OS. If more PCs ship with GNU/Linux, there will be more and better applications made for it. I find the current set quite wonderful, despite the lack of a few obscure features. I think the world of FLOSS has filled in all the gaps in the most popular and useful kinds of software. The rest will come sooner or later.

  39. oldfart says:

    “You’ve been thinking inside the box too long, oldfart. ”

    Nope.

    Speaking personally, I am simply a computer user who likes to get things done on his own terms. Your terms for “being free” leave me free to struggle with inadequate tools and applications. Your Terms of freedom leave me working for my computer and for some developer as I help him improve the application that I would have to use so that I could get back in the “free world” to what I now have in my so called “un-free” world. Computers are tools; they accomplish tasks – my computer with its commercial OS and commercial software allows me to compose, hear and create CDs of my music.

    And no amount of name calling and emotional blackmail by you Robert Pogson, or any FOSStard here is going to change that.

  40. dougman wrote, “Vista, 7, 8 and 8.1 are all just as susceptible to malware as XP.”

    I doubt that’s true. M$ did finally slap on some layers of security long after XP was released. While some of these layers did plug holes and reduce the leak rate, M$ also introduced a ton of new features, all with their own set of vulnerabilities. XP grew into a monster but M$ did rewrite a bunch of stuff and actually set up some security features. We haven’t had a major wave of malware take out whole continents lately, perhaps because “7”’s share is nowhere near as high as XP’s was at its peak. I think the planned security feature is to give the world a moving target rather than a sitting duck with no cover. They plan to continuously upgrade rather than making point-releases. I think they plan to rely on producing obscurity rather than the best code so that they remain a step or two ahead of the malware artists. It remains to be seen whether they can manage that. I suspect they will crash and burn by constantly annoying users.

  41. oldman wrote, “get your windows system hosed by being stupid?”

    Uh, that should be “get your windows system hosed by using the supplied features in the manner intended?”

    The idea that one should have an OS (supposedly to manage resources and to control processes) that causes pwnage by clicking on something or plugging in a device is absurd. You’ve been thinking inside the box too long, oldfart. Peek out and see that IT does not have to be a scary place where M$ forbids you by EULA to do certain things with your PC and the malware artists are just waiting to flood through the loopholes created by M$’s salesmen. Are you any better off with M$’s latest leaky sieve with thousands of holes than when you used Lose 3.1 with no security at all? I was not, until I chose GNU/Linux.

    oldfart’s position is just the old excuse of the sycophants of that other OS, “blame the user, not the OS.”

  42. oldfart says:

    “Vista, 7, 8 and 8.1 are all just as susceptible to malware as XP. Nothing they do will ever change the fact.”

    So what! Linux has been proven to be chock full of exploitable holes, some of which are long-standing. But that does not stop us from using the Linux servers that we depend on any more than it stops me from using Windows personally. We apply patches, take all appropriate precautions and get on with our work.

  43. oldfart says:

    “LIES…or, and most importantly, what they are stating is, “I never use my computer!””

    Nope. And I use my computer quite a bit both personally and professionally during the day. Nice try though.

    “I say the Old Farter should visit one of the many websites listed at: http://www.malwaredomainlist.com/mdl.php and then report back.”

    Only an idiot would suggest that. You’re not an idiot are you Dougie?

    “How about this, pick up a random thumb drive in front of your residence, vehicle. Plug said drive in and BAM!”

    And why would I do that, sir? Again, I am not stupid. Although I should note that AutoPlay is disabled on my system. What’s the matter, Dougie, get your windows system hosed by being stupid?

  44. oldfart says:

    “Finale alternate? How about pen and paper! All of the great composers used paper, but for some reason you think spending hundreds on software you barely use or need is warranted.”

    While I am no longer a professional composer, I do have music that I regularly commit to digital paper. The 11+ CD’s worth of music that I’ve created over the past 25 years doesn’t seem like “barely using” my software. But then again I doubt that you would understand creativity.

    “Admit it, you’re a lonely oldman that has no life, but enjoys causing discontent by trolling forums and spewing M$ brainwashing.”

    No Dougie, I am just an IT professional with some idle time (in between work tasks) on his hands who doesn’t like the FUD that Robert Pogson puts out on this blog. So long as Robert Pogson permits me, I shall continue to speak up on what I consider his one-sided representation of the reality of computing on the Wintel platform.

    Ah Dougie, what you doubt is irrelevant. Robert Pogson knows who I am – I have helped him in the past with a Linux problem, though admittedly we did not get far because of the differences between Debian (which he uses) and Red Hat (which I use).

  45. dougman says:

    “I have never had a problem with malware – ever.”

    LIES…or, and most importantly, what they are stating is, “I never use my computer!”

    I say the Old Farter should visit one of the many websites listed at: http://www.malwaredomainlist.com/mdl.php and then report back.

    How about this: pick up a random thumb drive in front of your residence or vehicle. Plug said drive in and BAM!

    See how easy it is to visit malware?

    Windows = Malware

  46. oldfart says:

    “Some people win lotteries, too, but that’s far from typical.”

    Actually, it’s more common to run malware-free today than you know, Robert Pogson. Windows 7 & 8.1 are far more secure than XP (now a 12+ year old OS, BTW) was. That having been said, the reality is that no system is truly secure, regardless of what you think. Once you accept this fact, along with the reality that people use applications, not operating systems, you begin to wonder why you try to make the presence of malware attacking Windows a deal-breaker for using the OS – it’s not, and never will be, for most Windows users.

  47. dougman says:

    Finale alternate? How about pen and paper! All of the great composers used paper, but for some reason you think spending hundreds on software you barely use or need is warranted.

    I highly doubt you have EVER written one piece of music, let alone manage a team of IT personnel.

    Admit it, you’re a lonely oldman that has no life, but enjoys causing discontent by trolling forums and spewing M$ brainwashing.

  48. oldfart, having a very sheltered existence, wrote, “I have never had a problem with malware – ever.”

    Some people win lotteries, too, but that’s far from typical. I’ve seen a few PCs running that other OS without malware as well, but far too many were just loaded down, some trojan getting in first and opening the floodgates of Hell.
    “the malware infection rate of the United States increased precipitously between the fourth quarter of 2012 and the first quarter of 2013. The Malicious Software Removal Tool (MSRT) cleaned malware on 8.0 of every 1,000 computers scanned (Computers Cleaned per Mille or CCM) in the US in the second quarter of 2013, compared to the worldwide average 5.8 in the same quarter.”
    see United States’ Malware Infection Rate More than Doubles in the First Half of 2013

    That’s what M$ reports with their tools. Kaspersky reports USA is the #3 most-attacked country in the world, and Kaspersky finds 2 million infections per day. How many does Kaspersky not detect? How many TOOS PCs are there in USA? A very high percentage must be infected daily. Uruguay, with one of the highest utilization rates of GNU/Linux, is at #98 in popularity for malware artists. They have ~1% of the population but get ~2K infections detected daily by Kaspersky, one tenth as much per capita.

  49. oldfart says:

    So tell me Dougie, what is your proposed replacement for Finale 2014?

  50. oldfart says:

    “Obviously, your lack of trying to even answer the question leads me to believe that you are being purposely misleading.”

    No misleading at all. I stated the truth of my situation – I have never had a problem with malware – ever. But more to the point, the presence of malware that attacks Windows is no more a deal-breaker for using my applications on Windows than the presence of all of the recorded security issues with Linux are deal-breakers for the server applications that I help support. So as far as I am concerned, your bringing up malware is a non-issue.

  51. dougman says:

    Re: I don’t have any malware…

    Did I ask if you did? Obviously you misread my question. Let’s try again…

    “Find me a version of Win-Dohs that does not suffer malaise from malware.”

    Obviously, your lack of trying to even answer the question leads me to believe that you are being purposely misleading. The answer is that there are none; all versions of Win-Dohs, even “10”, are susceptible to malware. What complicates this even further is that M$ blindly tosses out patches once a month, which causes more problems than it solves.

    http://www.infoworld.com/article/2895022/security/problems-reported-with-microsoft-patch-kb-3002657-and-a-warning-on-kb-3046049.html

    ..and don’t forget to reboot twice!

  52. oldfart says:

    “Farting OLDman,…”

    Shall we start calling you DogBrain?

    ” care to find me a version of Win-Dohs that does not suffer malaise from malware?”

    I don’t have any malware. But I am waiting for you to come up with a replacement for my missing software. Unless that’s too hard for you.

  53. oldfart says:

    “Farting OLDman, quit being a mooch.”

    Since you seem to want to stand in for Robert Pogson can YOU find the replacement?

  54. dougman says:

    Farting OLDman, care to find me a version of Win-Dohs that does not suffer malaise from malware?

  55. dougman says:

    Farting OLDman, quit being a mooch.

  56. oldfart says:

    “OTOH, with FLOSS and GNU/Linux there is a solution for every problem and no one is a slave.”

    Care to find me a replacement for Finale? You never did answer that, did you, Robert Pogson.

  57. kurkosdr wrote, “This is one of the main things that drives me away from Desktop Linux, btw. Even if a solution exists, it either exists only in some obscure distro or it’s something you have to find out yourself.”

    That goes against everything the trolls have been spouting since the early years of this blog, that the OS doesn’t matter and that only applications matter, so dependence on any single M$-only application means dependence on M$’s OS. Now, we’re told that a single dependence on a feature of the OS matters… That Other OS is not the same OS for every user. We have a wide variety of releases for Home, Pro, Ultimate and then licensing with “random variation”: retail, OEM, builder, select, … M$ has always used “divide and conquer”. In the old days, it was “we’ll beat you less if you do this”. Then it was “we’ll take less of a ripoff if you do this”. Now it’s “we’ll pay you to use our OS”, but the one theme that has remained is “You must be our slave to use our OS.” Having a variety of slaves is no problem for M$. It’s helpful to the slave-master for slave A to feel better off than slave B. There’s always some loser who is just happy to be a slave.

    OTOH, with FLOSS and GNU/Linux there is a solution for every problem and no one is a slave. No one is second, third or fourth class in IT with FLOSS. We all can run one PC or many with ease. We all can mix and match software as we wish. We all can share the full benefit and we don’t have to take any crippleware, crapware or malware.

  58. kurkosdr says:

    BTW, I find the “no-reboot patching” of Desktop Linux to be interesting, to say the least.

    But, let’s see how it will work in practice.

  59. kurkosdr says:

    kurkosdr, personally I have used zeroinstall for years. But I am looking forward to the GNOME developers’ sandbox approach. It has the advantages of Windows SxS without having a stack of old libraries installed for applications that are not installed.

    This is one of the main things that drives me away from Desktop Linux, btw. Even if a solution exists, it either exists only in some obscure distro or it’s something you have to find out yourself.

    Instead, Windows is Windows for every user. WinSxS, despite its many flaws, is there for every Windows user (okay, maybe except for the 7-8% using WinXP and using Windows mostly to launch Chrome).

  60. oiaohm says:

    kurkosdr, tmrepository looks like it might never be back, so please find a working link to make the point. You will find most of the correct links include management processes to locate out-of-date applications. cgroup wrapping by systemd is great here for knowing what services need a restart command.

    kurkosdr, there is one problem: if you are doing 99.999% uptime you are only allowed 1 reboot per month at best. Your example has 2-3+. Linux is able to put all the updates in, then do one reboot covering them all, allowing 99.999% uptime.

    99.999% uptime allows only 26.3 s a month for reboots and restarting services.

    Due to how slow the boot-up is on some systems, under 99.999% uptime conditions you might only be able to reboot once every 4 to 6 months. This is why kpatch and kGraft are coming to Linux, to apply security patches into the running kernel. Yes, I do agree a better interface is required to list for users what services and what logins need to be restarted to stop using expired libraries. Linux can meet 99.999%.

    kurkosdr, the reality here is nasty. The Linux update system has some issues, yes; some training is required and some better tools would be great. But the Linux update system, correctly operated, works to meet commercial requirements. Windows update is broken with respect to commercial requirements. So the sad reality is that a lot of businesses pay for a 99.999% SLA on Windows yet fail to notice that the supplier is either in breach of contract because they apply updates, or not in breach because they are not applying updates. 99.99% is as high as a single-server Windows SLA should go, or in other words 4m 23.0s of downtime a month. Next time you do some updates on Windows, stopwatch the restart. It is surprising how often, even on a single reboot with updates, Windows blows through the 4m 23s.

    Please note 99.999% is not the worst limitation you will find in SLA contracts. 99.9999999% (“nine nines”) allows about 31.5 milliseconds a year of downtime. Windows and Linux are still a long way from being able to do nine nines. 99.9999% may become possible with the new in-memory kernel update.
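
    The availability arithmetic above is easy to check; a minimal sketch, assuming an average Gregorian month of 2,629,746 seconds (the figure that reproduces the 26.3 s budget):

```shell
# Downtime budget implied by an availability SLA, per average month.
for sla in 99.9 99.99 99.999; do
  awk -v s="$sla" 'BEGIN { printf "%s%% -> %.1f s/month\n", s, 2629746 * (1 - s/100) }'
done
# 99.999% -> 26.3 s/month, matching the figure quoted above.
```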

    kurkosdr personally I have used zeroinstall for years. But I am looking forward to Gnome developers sandbox approach. It has the advantages of Windows SxS without having a stack of old libraries installed for applications that are not installed.

  61. dougman says:

    Randomly lose files?? Hah… oh, as in one’s system crashes or there is a power loss. Well, a UPS takes care of the power loss, and system crashes don’t happen. Remember: Linux = Stability.

    The “delayed allocation” you refer to in EXT4 can be tuned by editing the following: /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_writeback_centisecs. Using cat, mine were listed at 3000 and 500, so perhaps I would lose a few seconds’ worth of information, on the chance my UPS did not work? Does it detract from or ruin Linux? Not at all…

    At least one can edit the system files in Linux, with Windows you are stuck!
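
    Those two writeback tunables can be inspected directly on any GNU/Linux box; a sketch (the 3000 and 500 readings are the commenter’s — distro defaults vary):

```shell
# Kernel writeback tunables, in centiseconds.
cat /proc/sys/vm/dirty_expire_centisecs     # e.g. 3000 = dirty data flushed after 30 s
cat /proc/sys/vm/dirty_writeback_centisecs  # e.g. 500  = flusher thread wakes every 5 s
# To shrink the window of unflushed data (as root; lasts until reboot):
# echo 1000 > /proc/sys/vm/dirty_expire_centisecs
```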

  62. DrLoser says:

    See the thing with Linux, I rsync my system drive once an hour to a spare drive …

    Almost as exciting as spinning up a new point version of the Linux kernel every hour, Dougie.

    Have you considered putting this on a T-Shirt? Uninformed sheeple are always impressed by an obvious Snake Oil Salesman with a T-Shirt that explains what they do in their spare time.

    And, let’s face it, absolutely anything at all could go horribly wrong with your file system, every hour, on the hour.

    Most particularly if you are using Desktop Linux, Dougie. A wise precaution, I feel.

    Not one that 98% of the population could give a toss about, however.

  63. DrLoser says:

    I find it incredulous …

    I believe you mean “incredible,” Dougie.

    And given the rest of your hogwash, yes, what you wrote was completely lacking in credibility. A win for you, I think. Very honest, for once.

  64. DrLoser says:

    Ewww!! And randomly lose files?

    To be fair, Kurks, it isn’t random.

    In fact, it’s quite predictable.

  65. DrLoser says:

    Well, consider Tianhe-2. Think your desktop client can crack a few human genomes before lunch? Oh, yes, M$ says it’s so, by using a server out there in the cloud…

    That’s a very fair point, Robert. Let us therefore consider this completely obscure, one-off, supercomputer in terms of speed delivered to the desktop, which was actually the point at issue.

    It looks like the Tianhe-2 is built on top of “processors by a monopolist company we dare not name” … which, rather unfortunately, delivers the exact same set of processors either for home use or for local server use. So, that’s pretty much the supercomputer argument out the window, innit?

    I suppose I could consider the relative merits of both types, but let’s just take Ivy Bridge and leave you to counter with the other one. Here’s a table detailing the speeds of Xeon CPUs. Let’s go for the TOP SPEED!

    Oh look, it’s 3.4GHz.

    On a six core processor.

    This drops down to 2.8GHz on a fifteen core processor, and if you knew why, Robert, you would be well on your way to understanding the difference between a server and the desktop. Sadly, you don’t have a clue.

    Latency, Robert. Latency. Latency dwarfs every other argument you can bring up about supercomputer speed. (As if the PRC is going to let anybody, let alone you, run a Thin Client off the Tianhe-2 any time soon! Mazel Tov!)

    Nope, sorry, “speed” on the desktop is no sort of issue at all here.

  66. kurkosdr says:

    “Hmmm, lets see.
    ~ $ uptime
    14:20:47 up 87 days, 11:03, 1 user, load average: 0.11, 0.19, 0.35”

    FREE security tip: Your RAM may have unpatched binaries loaded. See here for the gory details: http://tmrepository.com/fudtracker/linux-does-not-require-reboots-revisited/ (contains detailed evidence)

    Since you’ve been running your machine without a reboot for 87 days (and I doubt you are running Linux kernel 4.0 since it’s not out yet), and since there were probably some CVEs during those ~3 months, I wouldn’t feel particularly safe if I were in your boots right now.

    Of course, I don’t feel particularly safe in my boots either, since I haven’t done the Patch Tuesday Ritual as of today (I have set Windows Update to notify only).

    Which brings me to your other point:

    Two reboots?… why not five, perhaps even ten, who knows, just to be safe… heh… heh.

    I dunno, Windows Update is so bad that updating Windows is a full ritual. Here it is, for your Windows-hating pleasure:
    1) Boot system just for this purpose (so there are no other processes messing with the disk)
    2) Wait 10 minutes for the system to settle and for it to check for updates automatically; you don’t want to start a manual check for updates while an automatic one is already going on
    3) Do the Updates, hope the .Net ones don’t take too much time (usually they do)
    4) Reboot and wait for some house-keeping process that fires up after first reboot to finish
    4b) (optional) Reboot (for good measure)
    5) Clean the crud from SoftwareDistribution/Download. Run Disk Cleanup to do a Windows Update clean-up and clean more crud.
    6) Reboot otherwise Windows Update cleanup won’t happen
    7) Defrag. Play with your tablet or phone or something.

    Finished.

    There, we have managed to update Windows without leaving 100MB of crud behind (important if you have an old PC or if you have Windows on a small partition) and without impacting performance. If you are not an OCD freak like me, you can skip the second reboot. If you are on an SSD you can skip the defrag (obviously) but make sure you cleaned up the crud.

    But you see, I am happy to swallow all this, because I need Windows. It makes my hardware work as it should, and it allows me to run new apps on an old Windows version, and old apps on new versions. Instead, Linux has problems like this: http://www.omgubuntu.co.uk/2015/03/new-features-vlc-2-2-ubuntu-ppa#comment-1882701538

    LMAO… use EXT4 a REAL filesystem, then you don’t have to waste time with that BS.
    Ewww!! And randomly lose files?

  67. dougman says:

    Re: OR you can go with Solid State disks and get on with work. My system has dual 512GB SSDs. Boot time, when I do it, is under 20 seconds.

    I find it incredulous that you think one needs dual SSDs to have 20-second boot times. My boot time is far less than that running a single 10K VelociRaptor on Linux, as I am not loading a bunch of bloated BS.

    Wait, you run your system files from an external drive? Why???

    See the thing with Linux, I rsync my system drive once an hour to a spare drive and dd it once a week to my NAS as an image. I can remove the system drive and boot off the spare if I wanted.

    With M$ validation nonsense this becomes a problem; I dare you to try that without hitting problems such as the “Windows Not Genuine” Error 57324. The reason is that the hard drive serial number is one of the things Windows monitors to detect piracy. You need to contact M$ by phone and get them to help you re-activate it. So in essence you deactivate Windows, then call M$, speak with some Indian running through a bunch of codes, then reboot a few times just to be sure.

    LOL…crazy!

  68. DrLoser says:

    **Yawn** Attach External 3TB USB 3.0 disk. Done.

    Well, to be fair, you’d have to spend $50 for a terabyte disk from a reputable manufacturer. Which is quite a lot of money, when you consider that Dougie has probably spent:

    1. $5 gas
    2. $2.99 “Po’Boy” sandwich
    3. $1.50 “Sorta-Cola Big Boy Gulp,” discounted from $2.50 as a package with the Po-Boy
    4. $1.00 entrance fee to the local refuse collection area
    5. $100 for a tetanus jab afterwards — those rusty nails get everywhere!

    … in order to dumpster-dive his latest Mega Beast.

    When it comes to 100GB of C: drive space on a dumpster-dived bit of hardware, these days, you’d be surprised by the unacceptable extra expenses.

    Personally, I blame Microsoft.

  69. DrLoser says:

    How lovely to see Dougie jump ahead of oiaohm in provable knowledge of the relative merits of file systems. How he did so, whilst lacking any sort of relevant education (or indeed skill), is possibly a question for the inevitable HBO biopic. Which will be boffo, obviously:

    defrag-fest??

    Oops, failed at the first gate. Just ran a defrag on my feeble, elderly Windows 7 machine. Don’t know why I bothered: the weekly scheduled defrag keeps it at a constant 0%. However, just as a test: I chose to watch an educational video of Richard Stallman eating his own foot at the same time as running a manual defrag.

    Well, that’s ten minutes of my life I want back. But not because of the time taken to defrag an NTFS file system.

    It isn’t a problem, Dougie. (The defrag, not the toe-fungus.)

    LMAO… use EXT4 a REAL filesystem, then you don’t have to waste time with that BS.

    I hesitate to challenge your expertise, Dougie, but have you considered either BTRFS or (preferably) ZFS? Them thar are real file systems!

    Maybe, just maybe, that biopic will have to wait.

  70. oldfart says:

    “as the OS grows and grows over time, your C:\Windows and C:\Windows\winsxs folders would eat everything, then you run out of space.”

    **Yawn** Attach External 3TB USB 3.0 disk. Done.

  71. oldfart says:

    “LMAO… use EXT4 a REAL filesystem, then you don’t have to waste time with that BS.”

    OR you can go with Solid State disks and get on with work. My system has dual 512GB SSDs. Boot time, when I do it, is under 20 seconds.

  72. oldfart says:

    “Well, consider Tianhe-2.”

    Considering what one can get with stand-alone GPUs these days, it’s only a matter of time before one could do the equivalent of cracking a human genome at home, Robert Pogson. And the great joy of being able to do it yourself is that one does not have to wait one’s turn to get work done when the resource is dedicated to you.

  73. dougman says:

    defrag-fest??

    LMAO… use EXT4 a REAL filesystem, then you don’t have to waste time with that BS.

  74. dougman says:

    Two reboots?… why not five, perhaps even ten, who knows, just to be safe… heh… heh.

    Hmmm, lets see.
    ~ $ uptime
    14:20:47 up 87 days, 11:03, 1 user, load average: 0.11, 0.19, 0.35

    *shrug*

    With Linux, reboots are unnecessary, especially now that the 4.0 kernel will allow no-reboot patching.

    OS’s are getting big? Oh yes… Win-Dohs is a bloated pig!

    W8 Hard disk space: 16 GB (32-bit) or 20 GB (64-bit)
    W7 Hard disk space: 16 GB (32-bit) or 20 GB (64-bit)

    Bear in mind these are the install requirements; you would be hard-pressed to actively use Win-Dohs with these bare requirements, as the OS grows and grows over time, your C:\Windows and C:\Windows\winsxs folders would eat everything, then you run out of space.

    Here is a technical breakdown: http://blogs.technet.com/b/askpfeplat/archive/2014/05/13/how-to-clean-up-the-winsxs-directory-and-free-up-disk-space-on-windows-server-2008-r2-with-new-update.aspx

    Now with Linux, the “/” partition I have is 10GB, with a usage of 7GB leaving me 3GB to spare. Compare that with Windows!

    ~ $ sudo tune2fs -l /dev/sda1 | grep 'Filesystem created:'
    Filesystem created: Mon Jun 10 12:10:28 2014
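
    The same numbers can be pulled on any GNU/Linux box with stock tools; a sketch (the 10 GB / 7 GB split and the /dev/sda1 device name are the commenter’s — yours will differ):

```shell
# Size and usage of the root partition, human-readable.
df -h /
# Filesystem creation date of an ext4 volume (needs root; device name varies):
# tune2fs -l /dev/sda1 | grep 'Filesystem created:'
```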

  75. kurkosdr says:

    trying to do what should have been done with apps (bundled or not) = trying to do what should have been done by apps (bundled or not)

    (OSes try to do stuff that should have been handled one level above)

  76. kurkosdr says:

    ““The RDP Security Bulletin MS15-030 has a patch, KB 3036493, which can cause multiple reboots.”

    Spare the second reboot and spoil the Windows installation…

    I always do a second reboot just in case, so I guess I am ahead of the curve. It’s not much lost time; most of the lost time is the defrag-fest that follows the Windows Update (damn 5400rpm laptop drives; this is why I cry when your options for new laptops are the same slow 5400rpm hard drive or a tiny SSD instead of a proper hybrid).

    BTW, anyone else think OSes are getting too big? I mean all of them. Even Android, which started as an app launcher; look what it has become. Modern OSes are the CISC processors of yesteryear (I mean the VAX kind), trying to do what should have been done with apps (bundled or not).

  77. DrLoser wrote, “You can buy an i7 3.4GHz desktop tower with 16GB of RAM for £600, Robert. You don’t even have to try hard. (I didn’t.)
    No server on Earth is going to be faster than that in any appreciable way. And, given the number of cores and the amount of RAM, no server on Earth is going to beat the performance experienced locally.”

    Well, consider Tianhe-2. Think your desktop client can crack a few human genomes before lunch? Oh, yes, M$ says it’s so, by using a server out there in the cloud…

  78. oiaohm says:

    “Multi-point server”? OK, what in hell was my brain thinking for a second:
    multi-seat server.

    Windows multi-point servers are particularly horrible at this. Multi-seat server tech is still fairly new at splitting a GPU between many users.

  79. oiaohm says:

    Robert Pogson
    1km implies ~3.3μs which is wonderful for file-storage but for a remote desktop much higher latency is tolerable.
    You got the limit time right, ~3.3μs. Robert Pogson, please also note the solution I am talking about here is not using thin clients but direct-wired keyboard/mouse/screen to a multi-seat server.

    http://www.aliexpress.com/store/product/LINK-MI-LM-THF107D-Fiber-Optic-DVI-Extender-1km-Over-Single-Fiber-Optic-Cable-Supports-KVM/1164082_2040789515.html

    1 km is about as far as you can pull off a DVI/HDMI screen sync; beyond that, the ~4μs timing budget is busted.

    If you are going more than 1km you have to start using thin clients. If you don’t use expensive fiber optic, about the max you can go using powered extenders over copper is 60 to 300 meters (yes, brand and quality are a major factor; a powered extender means a power point is required at the client).
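As a back-of-envelope check on those distances (my illustrative numbers, not oiaohm’s: I assume a signal velocity factor of ~0.67, i.e. ~2×10^8 m/s, which is typical for fiber and copper cable):

```python
# One-way propagation delay over cable, purely illustrative.
# Assumes signal travels at ~2/3 the speed of light (~2e8 m/s).
C_CABLE = 2.0e8  # m/s, typical velocity factor of ~0.67

def propagation_delay_us(distance_m: float) -> float:
    """One-way propagation delay in microseconds."""
    return distance_m / C_CABLE * 1e6

for d in (30, 300, 1000):
    print(f"{d:>5} m -> {propagation_delay_us(d):.2f} us one way")
```

At 30 m the delay is a fraction of a microsecond; even at 1 km it is only about 5μs, which is the neighbourhood of the ~3–4μs sync limits discussed above.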

    This is why schools and other places should, software limitations aside, be able to deploy 12 computer stations off one multi-seat server. The performance difference to end users should be zero or better. On cost, the hardware to do multi-seat can in fact work out cheaper, since there are no thin clients to pay for. The power bill works out lower as well.

    This performance fact is something those who say we must have desktops keep on forgetting. The idea that the screen and keyboard belong on a 1 to 3 metre bit of cable is nowhere near the functional limits. Even a 30 meter range of cable around a single computer covers a hell of a lot of area.

    Yes, at 30 meters you are absolutely not pushing any limits. A 30 meter HDMI/DVI cable is expensive, as it includes an extender powered from the HDMI cable. At 30 meters you can use USB-to-Ethernet cable extenders that support Power over Ethernet, so no power point is required at each client location.

    None of this is thin clients. These are clients wired directly to the server.

  80. dougman wrote, “Install Patch…Reboot twice…Oh darn it broke my box…Uninstall patch..Reboot twice…
    It’s insane that people put up with such nonsense.”

    Yep. It’s the Stockholm syndrome, folks agreeing with their abductors. Show them GNU/Linux running smoothly on identical hardware and mouths drop open in the light of the new reality. These re-re-reboots are due entirely to the complexity of that other OS. It’s totally unnecessary yet M$ has managed to burden much of the world’s IT with it.

  81. oiaohm wrote, “if you cable length exceeds about a 1Km your latency will be too high”.

    1km implies ~3.3μs, which is wonderful for file-storage, but for a remote desktop much higher latency is tolerable. Humans only start to notice at about 0.1s, a couple of characters’ delay for a good typist. Point-click-gawk usage tolerates much more. I’ve used a GUI to Europe via NX with very little annoyance.

    This is one of the reasons I recommend thin clients. They are very flexible and, by keeping massive data on the server where it belongs, can handle just about everything but video quite well. In a large organization with ~1K applications and data stored on servers anyway, it’s just silly to use thick clients for everything and have each client seek all over its hard drive to load some application which could already be in RAM on a server. That other OS often preloads applications to avoid this latency, but if you have a lot of applications that multiplies boot-times. That really annoys users, to the point that they sue their employers. Time is money/annoyance/loss of productivity.
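To put numbers on the tolerable-latency point (a sketch of my own, assuming ~2×10^8 m/s signal speed in fiber and the ~0.1s human perception threshold mentioned above):

```python
# How far away could a remote desktop be before propagation delay alone
# becomes humanly noticeable? Illustrative assumptions:
C_CABLE = 2.0e8    # m/s, signal speed in fiber (~2/3 c)
THRESHOLD_S = 0.1  # rough point where a good typist notices lag

# A keystroke must reach the server and the screen update must come back,
# so the round trip gets the whole threshold budget.
max_one_way_m = C_CABLE * THRESHOLD_S / 2
print(f"Propagation alone allows ~{max_one_way_m / 1000:.0f} km one way")
```

Propagation alone permits a server thousands of kilometres away; in practice it is server load, queuing, and network hops that eat the budget, which is why a well-run LAN terminal server feels instantaneous.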

  82. oiaohm says:

    kurkosdr, the cloud is a remote server solution.

    Local server solutions provide some serious advantages over a desktop as long as you can run multi-seat directly off it. Steam, with its forwarding of output over the network, is really following the local-ish server solution.

    There is a general rule: if your cable length exceeds about 1km, your latency will be too high.

    http://blog.chromium.org/2013/01/native-client-support-on-arm.html

    NaCl is on ARM, by the way, kurkosdr. Google’s final goal for NaCl is PNaCl, which is a CPU-neutral bytecode solution. Firefox is following the OdinMonkey path at this stage.

  83. dougman says:

    If you manage Windows, this article will make you cry.

    http://www.infoworld.com/article/2895022/security/problems-reported-with-microsoft-patch-kb-3002657-and-a-warning-on-kb-3046049.html

    The best snippet: “The RDP Security Bulletin MS15-030 has a patch, KB 3036493, which can cause multiple reboots. The KB article warns, “If you uninstall this security update, you may have to restart the computer two times.” I’m hearing — but have not yet confirmed — that you may have to reboot twice on installation, too.”

    LMAO…….Install Patch…Reboot twice…Oh darn it broke my box…Uninstall patch..Reboot twice…

    It’s insane that people put up with such nonsense.

  84. Adam Queen says:

    Robert Pogson great post. This made my day 🙂

  85. oiaohm says:

    DrLoser
    No server on Earth is going to be faster than that in any appreciable way. And, given the number of cores and the amount of RAM, no server on Earth is going to beat the performance experienced locally.
    You need to be very careful.

    https://wiki.ubuntu.com/Multiseat

    How far away is the server? Remember the server does not need to be connected by Ethernet. Multiseat is just the old direct-physical-connection terminal server.

    Multiseat setups on average show performance gains because caches are kept hot. We have got to the point where, for most application usage, we have more than enough RAM. A direct video/keyboard/mouse connection to a server can run up to 80 meters.

    “Experienced locally” is correct, DrLoser. You have just ignored multi-seat servers. Remember there are video card combinations allowing 12 monitors per computer; add 12 keyboards and mice and you have enough for a row along a wall at a school.

    Windows pretty much gets in the way of using multi-seat. There is also something interesting: at a 10G network connection you are fast enough. We are at the point with wifi that, under 8 metres, you can wirelessly connect a screen, keyboard and mouse using 60GHz wifi and be as fast as wired in.

  86. kurkosdr says:

    Shut up, DrLoser. Unless you can prove that the millisecond latency of a network connection is in any way faster than the nanosecond latency it takes to compute an instruction in local CPU+RAM, you are not fit to participate in the discussion.

    There is the joke of the cloud. If it’s computationally-light, it can be done on local CPU+RAM with much less latency. If it’s computationally-heavy, it’s uneconomical to host it on a server without charging the user a lot (while the CPU+RAM he has already paid for stays mostly idle).

    —–

    “The cloud” (oh buzzwords, how much I hate thee), as a content-storage/delivery service and as a file vault is real.

    The cloud as a remote-execution service never happened. Even “cloud apps” like Microsoft Office Online and Google Docs are in fact local applications trapped inside a browser.

    Which brings me back to the point: Google is building an API inside Chrome under your nose, which doesn’t work on Firefox, which pisses on your favorite HTML5+JS dream and which is x86-only (no NaCl apps on “ARMed” computers) and you guys haven’t even noticed it. NaCl is a lot like Silverlight, only worse because it only works in one browser.

    Essentially, Pog’s dream is to either wait for millisecond delays for every instruction to be sent to a remote server, or to use Desktop Linux as a bed for launching Chrome. Freedom!

  87. oldfart says:

    “No server on Earth is going to be faster than that in any appreciable way. And, given the number of cores and the amount of RAM, no server on Earth is going to beat the performance experienced locally.”

    But my dear Doctor, we both know that Robert Pogson only intends to deliver HIS notion of the desktop software that people need. The needs and requirements of the sheeple/slaves will no more come into play than the needs of his former students did.

    After all Robert knows best.

  88. DrLoser says:

    Because servers are faster…

    Stop right there, Robert. They are not.

  89. DrLoser says:

    Sorry, you’re wrong. Servers win on all counts simply because they are not limited to sharing space with a human.

    And a table full (for no obvious reason, and with no scale attached) of “plus ones.”

    Servers are not faster than desktops, Robert. Not in any meaningful sense whatsoever. Being “faster” is not even the primary or secondary purpose of a server. To argue otherwise is to act like a dinosaur, I’m sorry to say.

    You can buy an i7 3.4GHz desktop tower with 16GB of RAM for £600, Robert. You don’t even have to try hard. (I didn’t.)

    No server on Earth is going to be faster than that in any appreciable way. And, given the number of cores and the amount of RAM, no server on Earth is going to beat the performance experienced locally.

    Does the word “latency” mean anything at all to you, Robert? Apparently not.

  90. DrLoser wrote, “Because servers are faster…
    Stop right there, Robert. They are not.”

    Let’s see:

    Feature    Server    Legacy desktop
    RAM          +1
    Storage      +1
    Network      +1
    CPU          +1
    Total        +4            0

    Sorry, you’re wrong. Servers win on all counts simply because they are not limited to sharing space with a human. Servers can be made larger, noisier, heavier, and hotter, and so deliver more IT power to a user at any given time. It is possible to create a feeble server, or to overload a server, and nullify these advantages, but they are real.

    The extreme case is the guy doing HPC. He might get a whole warehouse full of CPU/RAM/storage all working for him at his cool and quiet little desktop perched on the back of his monitor, perhaps in his living room or bedroom. There is a wide range in between, but typically IT organizations invest a lot more in decent servers than is wise in individual desktops. It is much more efficient, faster and cheaper to have a few servers working hard than a mess of desktop clients idling. When the heavy lifting needs to get done, a server is far superior for just about everything.

    The last proper server I used in schools had a snappy Xeon processor, a little RAM, gigabit/s networking and sweet SCSI drives. It kicked ass, being far snappier than any of our desktops, including the new ones. It was about 6 years old at the time… Imagine a new state-of-the-art server in a school… At one time SUN was offering schools sweet deals on Opteron servers. If only I’d had a budget…

  91. DrLoser says:

    Because servers are faster…

    Stop right there, Robert. They are not.

  92. DrLoser says:

    There are a few applications still handled the old way for heavy data/computation at the local level but even some of that is done on servers these days.

    You’re beginning to sound like oiaohm at his most deluded and disconnected from the real world, Robert.

    What are those “heavy data/computation” applications that can only be done by Windows, Robert? The Sheeple deserve to know the boundaries of their pen!

    And what happened to your continual claim that they can equally well be done with the Linux desktop?

    Funny how that assertion never cropped up whilst you were branding oldfart as a slave for using music composition software — I imagine this qualifies as “heavy data/computation,” best done locally — on a Windows desktop rather than on a Linux desktop?

    Still, as I always say … A Foolish Consistency …

  93. kurkosdr wrote, “Why wait for the server to process my data, construct a web page and send it down the internet, when a local app can do it much faster?”

    Because servers are faster, folks have gigabit/s connections these days, and servers cost less per user especially if you include maintaining thousands of installations instead of just one or a small cluster.

    Do the maths. Suppose the user wants to load some application and update a paragraph of text. That application is probably already floating around in RAM on a server somewhere, so there’s no need to seek all over creation to start the process. The data might still be in cache if the topic is hot. That server might have 16 cores all waiting to do the user’s bidding while the user might have an old dual- or quad-core system. That server might have many GB of RAM and faster storage devices.

    How long does it take to transfer a MB of web-page at 1,000 Mbit/s? About 0.01s. The user doesn’t even notice that. Render time in the browser? The blink of an eye. The user may notice but doesn’t even care. Now, many users don’t have gigabit/s Internet connections, but gigabit/s is pretty cheap on the LAN, and I expect a good number of organizations use it these days. It’s more about how one sets things up than the particular architecture.

    A school I set up 8 years ago ran gigabit/s. My file/auth server cost $1800 and serviced 700 accounts in the blink of an eye. The terminal servers averaged 24 simultaneous users and cost only $1200 each. That’s way cheaper than putting a hair-drying PC on every desk, and it performed better for everything but video, which was secondary to our global operation. We used multiseat thick clients where we needed video to work well.
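The transfer arithmetic can be sketched as follows (illustrative only; the 1 MB page and gigabit rate are the figures from the paragraph above):

```python
# Serialization time for a page of a given size at a given link rate.
# Propagation delay and protocol overhead are ignored; this is the
# back-of-envelope version of "do the maths".
def transfer_time_s(size_bytes: float, rate_bits_per_s: float) -> float:
    """Seconds to push size_bytes onto a link at rate_bits_per_s."""
    return size_bytes * 8 / rate_bits_per_s

MB = 1_000_000  # bytes
for name, rate in [("100 Mbit/s", 100e6), ("1 Gbit/s", 1e9)]:
    print(f"{name}: {transfer_time_s(MB, rate) * 1000:.0f} ms")
```

At gigabit speed the 1 MB page takes about 8 ms, i.e. the ~0.01s quoted above, well below anything a user would notice.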

  94. DrLoser says:

    That’s subsistence.

    Or, alternatively:

    “Today’s move is an important accelerant to this trend,” noted Aaron Levie, CEO of Box, in a blog post announcing the integration of Box into Office. “Microsoft’s productivity technologies are used by a billion people globally, and in nearly every enterprise — its influence on the industry cannot be understated.”

    Or:

    By freeing customers to use Microsoft tools on other platforms and devices, though, Microsoft will continue to be a dominant force — even on rival platforms like Android and iOS.

    I have a five-year-old niece who picks up an interesting book and reads further into it than you can manage, Robert.

  95. kurkosdr says:

    Oh no, not the cloud. This thing has been around since forever; it never took off. Why wait for the server to process my data, construct a web page and send it down the internet, when a local app can do it much faster?

    Only fake-cloud might have a chance to work together with traditional apps, aka Google’s trick of downloading the app but trapping it inside the browser.

    In other words, any success that Desktop Linux might have on the desktop is because Google decided to give Desktop Linux APIs the middle finger and develop their own internal API inside Chrome, which is tied to Chrome, and which means that all your future will be hanging on how good Chrome’s Linux support will be.

    But even with that supposed level playing field (Chrome apps), Desktop Linux still has the pulseaudio problems and graphics stack problems as a disadvantage.

    And btw, lots of stuff like storing photos, downloading torrents and writing CDs/DVDs happens with local apps. Also most people are not fooled by Google Docs and use a local app.

    I stand by my original statement. The biggest competitor to MS Windows is MS Windows, and Linux is a mouse in the arena.

  96. kurkosdr, assuming lock-in is forever, wrote, “What is Desktop Linux’s plan for luring all those risk-averse, loaded with Windows software, Win7 users out of Windows?”

    Wake up! The world is moving to web applications and the cloud. There’s no lock-in there. M$’s share is something like 10% in the cloud and a bit more on the web. The big organizations that used to have an OS on every desktop and a raft of non-Free licences on every desktop have moved to thin clients and the like. They don’t need M$ on their desktops, nor their mobile phones, nor their tablets. That’s the new reality.

    What is prying folks away from the Wintel monopoly is that it’s too expensive and too complex compared to perfectly viable alternatives. There are a few applications still handled the old way for heavy data/computation at the local level, but even some of that is done on servers these days. If you really need horsepower you want it on a server, so that heat and noise are not in your workspace. Businesses are averse to change, but they see every upgrade as an opportunity to wiggle out of Wintel’s grasp. Consumers have largely done that by buying smart thingies and ceasing to buy desktop PCs. Businesses are putting almost all “green field” applications on servers, whether real, virtual or in the clouds. Businesses are cooperating in the FLOSS community by sharing new software they develop instead of being divided and conquered by Wintel and “partners”. It’s all about efficiency with them. It’s much less expensive to develop a small share of some application in concert with other businesses than to pay extortionate licensing fees to M$, Oracle, Adobe etc.

  97. kurkosdr says:

    Lol, minutes after I made the post that the biggest competitor to MS Windows is MS Windows.

    Now, to the point. What is Desktop Linux’s plan for luring all those risk-averse, loaded with Windows software, Win7 users out of Windows?

    The UI change you say? With classic shell start menu, you can have your Win7-style start menu.

    See, Windows users are like people living under a dictatorship which caters for their needs at a decent level. Sure, the dictator in charge might be a douche, and some of the reforms may be questionable, but not many were willing to leave their way of life, language and most of their property to go somewhere else.

    Which brings me to the point: Desktop Linux companies never understood the value of easing “conversion” in mature (locked-in) markets. Why do they not put more money into Wine? And don’t give the “it’s impossible” excuse. 100% compatibility might be impossible, but they could make a list of “guaranteed compatible” games and apps if someone gave them the money needed. That of course assumes they get PulseAudio and the graphics stack to work && not break drivers.

    Similar thing for LibreOffice: why do they not pump money into OOXML compatibility? Sure, full OOXML compat might be impossible (at least for the “transitional” variant), but they could easily have a reporting feature to iron out conversion bugs.

    Only the ffmpeg guys get the value of conversion in mature markets. They have very good WMV compat and, voila! most people were converted from WMP to VLC and MPC-HC, and they even use open formats (mp4, mkv).
