Keeping an Eye on the Enemy

My enemies are the purveyors of non-free software who try to lock the world into doing things their way and paying for each iteration. M$ is chief among them, but many of their “partners” are cut from the same cloth. Apple does charge less for software but it’s still lock-in one way or another. That lock-in and emphasis on keeping the cost of IT high is a terrible waste of resources, especially when the enemy is restricting what I can do with hardware that I own.

My enemies are your enemies if you value the good things in IT: finding, creating, modifying, storing and presenting information quickly and at minimum cost.

Here is what the enemy is up to:

  1. M$ plans to provide the same user experience on smart thingies as on the desktop/notebook PC, with Skype and cloud lock-in.
  2. Apple plans to provide the same user experience on smart thingies as on the desktop/notebook, with cloud lock-in.

I take my enemies at their word. It’s consistent with what we know from other sources and they both have huge networks of partners who depend on things being the way they are.

The FLOSS community is way ahead with major distros like Debian GNU/Linux having been available for smart thingies and stupid thingies for years. Android/Linux now seems to have the inside track on merging experiences with desktop and smart thingy but GNU/Linux is there as well. A lot of the cloud already runs GNU/Linux. So do virtual servers.

So, while FLOSS grows like Topsy, our enemies are playing catch-up but tripping over themselves and their bloat and inefficiency in the process. All the things they have ever done to lock in end users to their way of doing things on the desktop now get in the way of merging small mobile systems and the larger desktops. While they try to convince themselves and consumers that the world should change, FLOSS will take over the world.

M$, in particular, is in trouble with XP clinging to life and preventing “7” from taking hold while “8” is still vapour-ware and crippled. Apple is in trouble with its share of desktops around 5% and growing at half the rate of Linux while Android/Linux now dominates smart phones and is doing well on tablets.

I expect Apple will try to get OEMs to produce non-Apple PCs with OS X to try to catch up, and I expect M$ will crash and burn with “8” only selling in a few disastrous trial runs by OEMs. GNU/Linux will pick up the pieces as the Wintel monopoly collapses. Attempts by Ubuntu/Linux to take over the world will fail as other distros are much more flexible. The old guard of IT relies on inflexibility and that is their downfall. You cannot prevent the world from using its hardware any way it likes if you want to survive in the long run. Wintel will learn that and evolve, as will Apple, but monopoly will be gone from IT. Everything will become a commodity, both hardware and software. “Platforms” will disappear from consumers’ and businesses’ viewpoints. Service/performance will be king.

The often-heard argument that Apple is a hardware company may miss the fact that last year Apple produced more smart thingies than all the Macs ever produced, and the world produced more smart thingies than Apple… This means Apple is rapidly losing share even in its most successful year ever. Similarly, now that smart thingies are bought, used and seen as personal computers by the world, M$ is losing share even in its most successful year ever. M$’s client division actually shrank in units shipped during the Christmas season. Whether selling hardware or software for money is the business plan, monopoly is in big trouble.

End-users want to access their apps anywhere, any way, at any time. That plays into web applications and thin clients, which are not demanding of the client hardware. This eliminates hardware lock-in and walled gardens as frameworks for business in IT. Applications that are accessible on one platform will be accessible on all or they will die. That will cripple lock-in by M$ on software and by Apple on hardware. While the enemies are figuring all this out, FLOSS will thrive even more than it does today because FLOSS has known this for decades and is able to deliver the goods now, when people want them.

Linus has the last word on Linux kernel development:
“as long as I want to get my kids through college and not live under a bridge, I will keep doing kernel work.” That attitude spells the death of monopoly. FLOSS just will not die and keeps growing. The world needs software and can make its own. No monopoly can do what the world can do. No monopoly is desirable or affordable. As much as IT has grown since infancy, it has many times more growth ahead and monopoly will only slow it down.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

45 Responses to Keeping an Eye on the Enemy

  1. Ray says:

    True, but most families wouldn’t think that it’s a power supply problem, and would take it to the computer repair service, who would say that the motherboard’s fried, and they have to get a new computer…

  2. oiaohm says:

    oldman
    “The fact that windows lacks some esoteric function is at best a factoid to be considered and then dealt with.”

    Oldman, it does not change the fact that Windows as an OS still needs a lot of work to provide the frameworks an OS should be providing.

    Esoteric? Not so much. Linux cgroups are about 3 to 4 years behind what Solaris and AIX offer. BSD is at about the same level as Linux.

    It’s a bit hard to call it esoteric when the only OS that lacks much of this functionality is Windows. Even OS X from Apple, one of the worst POSIX-based OSes I know for security, has more management controls to wrap around applications.

    Having to hack around issues does not make good security.

    Linux is now at the point of welding the containment systems permanently into the OS core.

    http://lwn.net/Articles/443241/

    In the 3.3 Linux kernel this means memory cgroups can never be turned off; from this point on, the memory management system is cgroups for ever more.

    Linux is not standing still, oldman. The frameworks to detect these issues are evolving and becoming more and more always-on features.

    Yet on the Windows side, where is the cgroups/zones/jails equivalent? We have not seen hide nor hair of it. Doing it properly requires internal alteration of the OS.

    The issue here, oldman, is that at some point the security of the OS has to come into consideration. Windows is sliding backwards on these metrics. The most thoroughly security-audited application is worthless if the OS it runs on is insecure; breach the OS and the application is breached with it.

  3. Ray wrote, “OEMs tend to buy cheap, crappy power supplies that tend to break easily”.

    Some OEMs, yes. Still an ATX PSU can be bought for ~$50 and it takes a few minutes and a Phillips screwdriver to change it. For serious work, you can plunk for a redundant PSU or lash up a stack of ATX PSUs and diodes and avoid most such failures. Periodic cleaning helps, too. I taught these things to all my Grade 10 students.

  4. oldman says:

    “You need to detect early. Windows lacks the framework to detect early, so you detect late and get nothing good out of your support contracts but pain.”

    Nonetheless, if an application has passed security review successfully, I get to support it. If that application requires windows to run, I get to run windows, as would you, Mr. Microsoft VAR, in my position.

    The fact that windows lacks some esoteric function is at best a factoid to be considered and then dealt with.

  5. oiaohm says:

    oldman
    “In point of fact there is no point in doing such a thing. Such a geek exercise will not change the reality that if an application that is required for a business function runs under windows, it will be run – period.

    Memory leaks and misbehaved programs become the problem of the vendor who supports them – we have service contracts for that purpose.”
    There is a big point: it can make the difference between a complete security breach and merely a failed application.

    Attackers depend on programs having defects, and those defects can show up as misbehaviour, like a process starting other processes when it should not be.

    Same with leaking memory: it can be a heap attack. It is part of your defences to detect these defects and stop them.

    “we have service contracts for that purpose.”

    Service contracts help you not one bit after you have lost your data.

    When an application hits its cgroup limits you can be on the back of whoever made the application before too much data is lost.

    Apparently you have never been broken into and lost key data yet, oldman.

    You will learn the lesson one day that by the time you need the support contract with its payment for damages, the party that gave it to you is most likely out of business due to the massive amount of breaching.

    You need to detect early. Windows lacks the framework to detect early, so you detect late and get nothing good out of your support contracts but pain.

  6. Ray says:

    Pog: I’d say under 5 years, since OEMs tend to buy cheap, crappy power supplies that tend to break easily, like what happened to my 2 desktops…

  7. RealIT, denying reality, wrote, “the 3 year desktop refresh cycle”.

    I doubt there ever was a 3-year refresh cycle; I only read about it here. Businesses here hold onto PCs almost 8 years, as indicated by the PCs Computers for Schools recycles. The youngest PC in my home is probably 3 years old. No one in their right mind would scrap a 3-year-old PC. They recycle them some way and someone buys a 3-year-old PC and uses it five years more.

    Do the maths.

    Forrester claims there were 1 billion PCs on Earth in 2008 and that it will take 7 years to reach 2 billion PCs. 3 years later we see 360 million PCs being shipped.
    n = n0(1+r)^t
    2 = (1+r)^7
    1+r = e^(ln(2)/7) = 1.104
    r = 0.104
    1(1.104)^3 – 1(1.104)^2 = 1345 million – 1219 million = 126 million increase.
    Of the 360 million manufactured PCs, 234 million were replacement PCs, 234/1345 = 17.4% of existing PCs, suggesting an average lifetime of 1/0.174 = 5.7 years, almost double the “3 years” touted. If businesses were keeping PCs only 3 years, consumers would have to keep them much longer.
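    A minimal sketch of the same arithmetic in Python, assuming only Forrester’s figures quoted above (1 billion PCs in 2008, doubling over 7 years, 360 million units shipped 3 years later); it lands within rounding of the numbers above:

        import math

        # Annual growth rate implied by a doubling over 7 years: about 10.4%.
        r = math.exp(math.log(2) / 7) - 1

        installed_y3 = 1000 * (1 + r) ** 3      # installed base after 3 years, ~1346 million
        installed_y2 = 1000 * (1 + r) ** 2      # installed base after 2 years, ~1219 million
        growth = installed_y3 - installed_y2    # net new PCs in the third year, ~127 million

        replacements = 360 - growth             # shipments that merely replaced old PCs
        lifetime = installed_y3 / replacements  # implied average service life, ~5.8 years

        print(round(growth), round(replacements), round(lifetime, 1))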

    So, the premise of the above comment is false. There’s no sign of GNU/Linux declining or slowing in any way and there are many signs that other OS is in serious decline.

  8. ch says:

    “Nope. XP, as released used to run out of resources and crash regularly. You could buy applications that would warn you the thing would soon crash.”

    Sorry, but the version with those “resource” problems was Win3.x. I remember all too well :-(

    These “memory defragers” you are pointing to are pure snake oil, just don’t use them, and you’ll be fine. (If you really used such stuff when you were running XP then I understand why you had such troubles.)

  9. oldman says:

    “oldman, start comparing Windows and Linux on the means to control how much trouble a badly behaved program can do. You will find this is a department where MS Windows needs a lot of work.”

    In point of fact there is no point in doing such a thing. Such a geek exercise will not change the reality that if an application that is required for a business function runs under windows, it will be run – period.

    Memory leaks and misbehaved programs become the problem of the vendor who supports them – we have service contracts for that purpose.

  10. RealIT says:

    My predictions: XP will be around another 2 years for consumers. Windows 7 will become the de facto standard in enterprises as they approach the 3 year desktop refresh cycle. Windows 8 will move largely into the server field along with their now-synchronised product release cycles. Office will remain dominant for the foreseeable future in all its iterations. Windows 8 for desktops/tablets will see a spike during launch due to tech heads and early adopters but will level out to where 7 is, and 7 will replace XP in installation numbers.

    Linux installs will drop during 8’s launch but will climb back to 1% after 3 years and no more.

    In the mobile space I won’t comment, since that argument is pointless trollbait.

    Source: History. It has a way of repeating itself.

  11. oldman says:

    “oldman, start comparing Windows and Linux on the means to control how much trouble a badly behaved program can do. You will find this is a department where MS Windows needs a lot of work.”

    In point of fact, sir, I have no intention of performing such a comparison as it is pointless. There are applications that run on windows that we are mandated to support and run. They are not going anywhere soon.

    Pog once asked about the distribution of windows and linux in our shop. It is roughly 340+ Linux-based applications vs. 300+ windows-based applications. So, Mr. Microsoft VAR, we run the very mix that you once stated was your current best ideal of computing.

  12. oiaohm says:

    Normally the thing that gets a leaking application under Linux is the OOM killer, which Windows does not have.

    oldman, I guess you are up on cgroups: how you can partition memory, controlling what the OOM killer takes out first and setting upper limits on application usage.

    Systemd enables the much nicer solution to memory issues under Linux, so you can give a better order of termination.

    Normally, for a Linux box to be killed by an application, there needs to be a faulty driver of some form. The biggest faulty driver is /dev/mem, which Linux has been forced to keep so that X11 worked. This is why I am looking forward to KMS so much: it kills off /dev/mem and X11 running as root.

    No matter what we do, badly written applications will happen. Selective termination of applications, as the OOM killer does, targets one form of badly behaved application.

    cgroup limits on forking control another.

    There are a lot of controls in Linux to get on top of badly behaved programs.

    oldman, start comparing Windows and Linux on the means to control how much trouble a badly behaved program can do. You will find this is a department where MS Windows needs a lot of work.
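    As a minimal sketch of this kind of containment (capping a leaky program with the cgroup-v1 memory controller), assuming the controller is mounted at /sys/fs/cgroup/memory, that this runs as root, and that the group name and 256 MB cap are purely illustrative:

        import os

        CGROUP = "/sys/fs/cgroup/memory/leaky-app"   # illustrative group name
        LIMIT_BYTES = 256 * 1024 * 1024              # illustrative 256 MB cap

        # Create the group and cap its memory; a leak now hits this ceiling
        # instead of dragging the whole machine down.
        os.makedirs(CGROUP, exist_ok=True)
        with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
            f.write(str(LIMIT_BYTES))

        # Put the current process (and anything it spawns) into the group;
        # the OOM killer then acts inside this cgroup first.
        with open(os.path.join(CGROUP, "tasks"), "w") as f:
            f.write(str(os.getpid()))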

  13. oldman says:

    “What if the applications do nothing??? ”

    Then they are badly written and should be dumped if possible. But memory-leaking programs are an OS-independent issue in my area.

    “Why should a faulty user-mode application kill a modern OS?”

    Why indeed, but as I have said I have seen faulty applications kill linux boxes as well.

  14. oldman, here are some links from the day…

    AbpMon

    “Memory Defragmenter also prevents Windows crashes since Windows crashes mainly occur if there is no free memory (RAM). “

    see Memory Defragmenter 1.1

    XP Memory Management

    “Windows XP introduces the CreateMemoryResourceNotification function which can notify user mode processes of high or low memory availability so applications can allocate more memory or free up memory as necessary.”

    So, XP knew it was in danger of crashing and left it up to user mode applications to do something about that… Can you see the problem here, oldman? What if the applications do nothing??? Memory leaks are a big problem in software and XP crashed when faced with memory leaks. Why should a faulty user-mode application kill a modern OS?
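    Here, roughly, is what XP expected every well-behaved application to do for itself. A minimal sketch using Python’s ctypes on Windows; only the two kernel32 calls come from the documentation quoted above, while the five-second poll and the printed response are made up for illustration:

        import ctypes
        import time

        LOW_MEMORY_RESOURCE_NOTIFICATION = 0   # MEMORY_RESOURCE_NOTIFICATION_TYPE

        kernel32 = ctypes.windll.kernel32
        handle = kernel32.CreateMemoryResourceNotification(LOW_MEMORY_RESOURCE_NOTIFICATION)

        state = ctypes.c_int(0)
        while True:
            # Ask whether the low-memory condition is currently signalled.
            if not kernel32.QueryMemoryResourceNotification(handle, ctypes.byref(state)):
                break                          # the query itself failed; give up
            if state.value:
                print("Low memory signalled; a careful app would free caches here")
            time.sleep(5)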

  15. oldman says:

    “Nope. XP, as released used to run out of resources and crash regularly.”

    When, Pog? I pushed my copies of XP pretty hard and I never saw this.

    “You could buy applications that would warn you the thing would soon crash”

    What applications? citations?

  16. Phenom wrote, ” the only way for a lost pointer or memory leak to crash NT is to have it in a driver”.

    Nope. XP, as released used to run out of resources and crash regularly. You could buy applications that would warn you the thing would soon crash.

  17. oiaohm says:

    Phenom
    “I am yet to see an OS resistant to faulty memory”
    You must only handle small-fry toy hardware.
    http://www.mjmwired.net/kernel/Documentation/vm/hwpoison.txt

    Yes, it’s a feature of the memory controller to detect defective RAM. It’s up to the OS to lock the bad RAM out and decide what to do with the application that had the bad block. Having bad RAM is then not a major issue, although the beeping will annoy the crap out of you.

    Of course there are limits to what it can handle.

    “lost pointer or memory leak to crash NT is to have it in a driver.”

    This is not 100% true. Interaction with a driver, yes; that is the valid but deadly path. There is a flaw in NT buffer handling. You request that a driver allocate a buffer and provide it to userspace. On Linux, ownership of that memory is linked to the application that asked for the buffer to be created.

    On NT, ownership of that read-only buffer memory belongs to the driver, on the idea that the driver might share it with many applications. Nice for security, not so nice when the application holding a handle to that memory leaves the building and forgets to free it. Luckily, on Windows, direct talking to drivers does not happen that often from untested programs… except that some games use drivers for their digital-rights-management crap.

    Of course, drivers by NT design are meant to check whether they have already created a read-only buffer and, if it still exists, share it.

    Unfortunately a lot of driver coders are idiots, so they create one per request and depend on the disconnect-from-driver message to get rid of it.

    So if your application is stupid and spins around requesting buffer after buffer, the poor kernel ends up out of RAM. Yes, while doing that it can be throwing away the pointer each time. The fun part is that it does not show up as the application consuming 4 GB of RAM.

    There is a reason you want your driver source code audited: so you don’t have land mines like this.

  18. Phenom says:

    Not to mention lost pointers and memory leaks

    Pogs, now you sound like a true Ohio. Not only do you accuse software of crashing on faulty hardware (I am yet to see an OS resistant to faulty memory), but you also come up with complete bogus claims about lost pointers.

    It might be a complete surprise for you, Pogson, but the only way for a lost pointer or memory leak to crash NT is to have it in a driver. That translates to Kernel panic in your beloved OS and Unix in general.

  19. Conzo says:

    I’m starting to suspect that you DO have the source code of all Windows OSes – feel free to share!

  20. oiaohm says:

    Robert Pogson
    oiaohm wrote, “X86 really you are pushing a dead design up hill.”

    I point out why this is valid.

    Robert Pogson
    “You will see Atoms at 10-14nm widely used in smart thingies because it will use only a few milliwatts even at 64bits and quad-core. Intel is at 1-2W today at 22nm. Two steps of Moore’s Law will bring them to 10nm or so and probably 500 mW. Of course ARM will be at lower power but after a day of battery-life, does it really matter? ARM may be able to run from a solar panelled case which may well trade off against price/weight.”

    You have overlooked something critical: an atom, the thing we cannot divide. What size is it? Between 0.1 and 0.5 nm. Silicon is closer to 0.5; 0.1 is hydrogen, which is not that useful.

    The smallest transistor in theory is about 1.5 nm. But it’s not workable: you need insulation and some protection from cosmic radiation.

    An operationally safe-ish size is at least 3 times bigger than the theory: 4.5 nm. Add in some tolerance for production errors and that basically doubles again. At 4.5 nm, human body heat can start flipping bits, so you would want to keep the chip very well cooled.

    11 nm is the hard deck for logic-based processors.

    16 nm ARM chips will arrive in 2013; you can already get prototype samples from Toshiba. IBM and ARM are hoping to hit 14 nm in 2013.

    Hitting the 11 nm hard deck comes around 2015. Past there, further power savings most likely will not come. Also, at 10 nm the chip starts interfering with itself.
    http://en.wikipedia.org/wiki/Semiconductor_device_fabrication

    The odds of seeing an Atom at 11 nm are very low.

    Notice the slowdown here: 2 years to move 5 nm. Moving any further after 11 nm is going to be rare.

    We are 2-3 years from the 11 nm hard deck.

    If we are able to get past that hard deck, that might give us 1 or 2 more cycles, down to 5 nm.

    Going under 10 nm with current CPU designs, even with current ARM, is not going to happen. Going under 11 nm is going to be insane with current CPU designs. The reason: at 1.5-10 nm any radiation, even chip-generated, is going to be a problem. Gates will be flipping all by themselves under 10 nm.

    A bit of cosmic radiation can flip 6 nm gates 100 percent of the time. So yes, at 6 nm you add a random number generator, which you cannot control, to every logic-gate operation. At 11 nm, background radiation also starts flipping a decent percentage of gates. Basically, from 11 nm down it just gets worse and worse until a logic-based chip is not an option, because you end up needing to run operations many times to detect errors. So the smaller transistors are not really saving you anything.

    6 nm is the absolute hard deck, no question.

    But 6 nm is only for probability-based processors.
    http://www.technologyreview.com/computing/26055/
    Yes, they do exist. Linux and Windows are both designed for logic-based processors, so they are not compatible with probability-based ones.

    ARM is already looking at multiple paths layered over the top of each other using FPGA tech.

    Basically, there are not many more transistor shrinks to come, and in a very short time. If we can get down to 6 nm, 2017 will be it; if not, the end will be 2015 for current silicon-based tech.

    I personally suspect the end is 2015 for logic-based processors getting more transistors onto a bit of silicon. IBM and others are looking to take chips more 3D, with more layers in the silicon. This is about the only way, after hitting 11 nm, to increase transistor counts for logic-based processors in roughly the same space.

  21. oiaohm says:

    startx still exists because you don’t leave X11 running on servers all the time; startx is a diagnostic mode. All the way back to 1985 you could run X11 as a service or start it with startx.

    ted
    “Vista slowwweeed down basic file operations, just for the heck of it.

    Which was a bug. Which was fixed.”
    I would not claim this was fixed, just made less likely. It’s the indexing service going nuts in Vista and 7 that was the fault causing basic file operations to slow to a crawl. It still happens, just less frequently.

  22. Ted wrote, “something inherently wrong – bad hardware, bad drivers, file-system corruption, etc.”

    Not to mention lost pointers and memory leaks…

    Good-bye Ted. You are wasting my time here and adding nothing to the conversation.

  23. Ted says:

    Ted, denying reality, wrote,

    Please stop. Do you have any idea how foolish this inane practice makes you look?

    How would you react if people prefixed replies to you with “Pogson, avoiding the direct questions, ignoring requests for evidence, fudging the maths in his favour, inserting non-sequiturs, going off on tangents and skirting the issue with nonsense, wrote;”?

    Why do you think there was a SP1, 2 and 3?

    Conversely, I could ask why is there a version of Ubuntu every six months? The same reasons – improvements and bug-fixes.

    There were plenty of bugs in XP.

    I’ve never said there wasn’t. You’ve mis-quoted and selectively quoted me to try and score cheap points before now, and now you try to score points from something I did not say?

    In the first year or so it did crash daily on systems I used.

    And it did not on systems I used and administered.
    Maybe I was doing something right?

    If we assume that crashing is “normally distributed”, that 5% might be translated to mean more than 5% crash once a day and that was in 2003 after a SP and lots of bug-fixing.

    You assume incorrectly. Crashing would normally be concentrated on machines where there is something inherently wrong – bad hardware, bad drivers, file-system corruption, etc. Or on a machine where a user or scheduled process causes a crash. And to clarify, “crash” in the context of Windows Error Reporting does not mean BSOD. WER also reports program and service errors that do not result in a BSOD.

    So, in 2001, daily crashing was the norm.

    A completely unwarranted assumption.

    See Windows Metafile Vulnerability

    Irrelevant.

    No, X does not run on top of a terminal. It’s a service, its own process.

    Thank you for the correction. I do remember a “startx” command being used from a terminal prompt in RedHat in late 2000, though.

  24. Ted, denying reality, wrote,
    “XP crashed daily.

    No it didn’t. If it did for you, you had bad hardware or You Were Doing It Wrong. “

    Why do you think there was a SP1, 2 and 3? There were plenty of bugs in XP. In the first year or so it did crash daily on systems I used. A good OS is supposed to manage its resources so that it never crashes. GNU/Linux has always been better than that other OS for me.

    Here’s how XP as issued could crash back in 2001…

    2003 calling: “Mr. Gates acknowledged today that the company’s error reporting service indicated that 5 percent of all Windows-based computers now crash more than twice each day.”

    If we assume that crashing is “normally distributed”, that 5% might be translated to mean more than 5% crash once a day and that was in 2003 after a SP and lots of bug-fixing. So, in 2001, daily crashing was the norm.

    No, X does not run on top of a terminal. It’s a service, its own process.

    See Windows Metafile Vulnerability
    “According to Secunia, “The vulnerability is caused due to an error in the handling of Windows Metafile files (‘.wmf’) containing specially crafted SETABORTPROC ‘Escape’ records. Such records allow arbitrary user-defined function to be executed when the rendering of a WMF file fails.” According to the Windows 3.1 SDK docs, the SETABORTPROC escape was obsoleted and replaced by the function of the same name in Windows 3.1, long before the WMF vulnerability was discovered. However the obsoleted escape code was retained for compatibility with 16 bit programs written for (or at least backwards compatible with) Windows 3.0. This change happened at approximately the same time as Microsoft was creating the 32 bit reimplementation of GDI for Windows NT, and it is likely that the vulnerability occurred during this effort.”

  25. Ted says:

    XP crashed daily.

    No it didn’t. If it did for you, you had bad hardware or You Were Doing It Wrong. I’ve personally seen XP with uptimes in the months. (Not networked before you parrot about updates.) There are records of XP with uptimes of nearly two years.

    Vista slowwweeed down basic file operations, just for the heck of it.

    Which was a bug. Which was fixed.

    “7″ still gets malware that plagued 3.1.

    [Citation needed]

    I don’t understand the “Runs on top of DOS” rhetoric from the Penguinistas; it’s not been true for a decade. By the way, doesn’t X run on top of a terminal? I do know RedHat still needed “startx” to start the GUI some time after Windows 2000 was out.

  26. ch, since you cannot follow the breadcrumbs, I will make a list:

    1. In the beginning were various operating systems and monitors and stand-alone programmes. The world of IT was without form and void… Hardware included mainframes, minicomputers and controllers.
    2. In 1969 came UNIX at AT&T Bell Labs, initially on PDP7.
    3. In the early 1970s came D.E.C. RT-11 on PDP11 minicomputers, which did much the same things as UNIX, with a similar CLI and a similar world-view.
    4. A few years later, with microprocessors, folks needed similar OSes, and various DOS OSes were developed by people who had used UNIX, RT-11 or PDP11s, or all of these things. Naturally, DOS resembled UNIX in some ways. In those days the DEC PDP11 was affordable for science labs, computer-science schools, businesses and the like, and they were common tools, like hammers and saws for carpenters. I know. I was there, at the University of Manitoba, doing physics and working with mainframes and mini-computers and controllers, alongside similar people doing similar things to what I was doing: collecting and analyzing data or controlling stuff. In the 1980s I wrote a control system for a cyclotron lab in Saudi Arabia using such stuff.
  27. ch says:

    “ch, you revise history a lot in that comment.”

    No, I don’t, so would you kindly retract that statement ?

    “These were DOS-like OS on PDP-11 machines.”

    Your claim was that DOS was somehow Unix-like, and now you claim that it “borrowed” from a completely different OS?

    Take a look at this:
    http://www.patersontech.com/dos/Byte/InsideDos.htm

    Quote: “The primary design requirement of MS-DOS was CP/M-80 translation compatibility, meaning that, if an 8080 or Z80 program for CP/M were translated for the 8086 according to Intel’s published rules, that program would execute properly under MS-DOS.”

    Oh, and this:
    http://blogs.msdn.com/b/oldnewthing/archive/2004/03/16/90448.aspx

    “DOS could be found underneath Lose 3.1,”

    Correct.

    “Lose ’95 and Lose ’98.”

    No, not “underneath” anymore. For those, DOS was just the boot loader:
    http://blogs.msdn.com/b/oldnewthing/archive/2007/12/24/6849530.aspx

  28. ch, you revise history a lot in that comment. In the 1970s, I used an OS called DEC RT-11 and later, RSX-11. These were DOS-like OS on PDP-11 machines.
    “The Keyboard Monitor (KMON) interpreted commands issued by the user and would invoke various utilities with Command String Interpreter (CSI) forms of the commands. RT-11 command language had many features (such as commands and device names) that can be found later in DOS line of operating systems which heavily borrowed from RT-11.”

    DOS could be found underneath Lose 3.1, Lose ’95 and Lose ’98.

  29. ch says:

    “DOS was a poor man’s clone of UNIX”

    No, it started life as a clone of CP/M. (We have already been over this point, haven’t we ?) With 2.x, it got a hierarchical filesystem, and that part was clearly “influenced” by Unix, but the rest of it was just too different.

    “There were many DOS-like OS around that time”

    No, there weren’t. The only technical difference between MS DOS and PC DOS was the included Basic interpreter, otherwise PC DOS was just MS DOS with an IBM logo on it. (Only the very latest PC DOS versions offered some exclusive stuff and actually competed with MS DOS, but then came Win95 and made it all moot.) The only real alternative for a while was DR DOS (later Novell DOS).

    “and GNU/Linux when it came along could easily have substituted for any of them without a GUI.”

    Once again, no. Linux came along in 1993 (and was a PITA to install). By then, almost every DOS user on the planet had become a Windows user, so we wouldn’t do without a GUI any longer. Oh, and WordPerfect, 1-2-3 and dBase didn’t run on Linux.

    “did not require much in the way of memory management.”

    Sorry if I was imprecise. The 8088 lacked a Memory Management Unit (MMU). Without it, keeping system memory safe from user processes etc. is rather impractical.

    “Keeping DOS around until about 2000″

    Only that starting with Win95 it was nothing more than a boot loader (unless you used some DOS-mode driver or TSR that couldn’t be ditched). Win95 was designed for PCs that were at best connected to AOL or Compuserve, but definitively not the Internet, and that had not enough memory for WinNT (or any available Unix, for that matter). It served that part rather well, and if you wanted more stability, there was WinNT for you. The “problem” of course was that in 1995 the Internet “happened” and soon brought a whole new threat environment that the Win9x-line was simply not designed to handle.

  30. oiaohm wrote, “X86 really you are pushing a dead design up hill.”

    For anything the consumer needs doing, x86 will likely never die. The only reasons it’s so archaic today are the cost and power consumption. The death of Wintel will dramatically lower the cost and Moore’s Law will get x86 where it needs to be in a few years. Large deployments of CPUs for HPC and servers will likely find x86 unworkable in a few years, but the consumer, only needing a few cores in one chip, likely will find x86 continuing to be useful. You will see Atoms at 10-14nm widely used in smart thingies because it will use only a few milliwatts even at 64bits and quad-core. Intel is at 1-2W today at 22nm. Two steps of Moore’s Law will bring them to 10nm or so and probably 500 mW. Of course ARM will be at lower power but after a day of battery-life, does it really matter? ARM may be able to run from a solar panelled case which may well trade off against price/weight.
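    A rough back-of-the-envelope check of that guess, assuming (as a rule of thumb only, not a law) that each step shrinks linear feature size by about 1/sqrt(2) and that active power scales roughly with die area:

        import math

        feature_nm = 22.0     # Intel's current node, per the paragraph above
        power_w = 1.5         # assumed mid-point of the quoted 1-2 W figure

        for step in (1, 2):
            feature_nm /= math.sqrt(2)    # ~0.7x linear shrink per step
            power_w /= 2                  # area, and crudely power, halves
            print("after step %d: ~%.0f nm, ~%.0f mW" % (step, feature_nm, power_w * 1000))

        # Prints roughly 16 nm / 750 mW, then 11 nm / 375 mW: the same ballpark
        # as the "10nm or so and probably 500 mW" guess above.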

  31. oiaohm says:

    Clarence Moon, that is the problem: those metrics are where you are going wrong.

    What archs do you need for the future desktop? Most likely ARM and MIPS. X86 really you are pushing a dead design up hill. There is only so far that can be pushed.

    Now, if you want to be MIPS- and ARM-compatible, you need the embedded and mobile-phone markets.

    Basically, Microsoft might still be doing OK on the desktop, but they are losing critical battles in the embedded and phone space. That is going to come back and bite them.

  32. Clarence Moon says:

    Well, it is what people see that counts most, Mr. Pogson. The interesting thing about their losing share, if your sources can be trusted to be showing a real decline, is that the decline in share has been significantly smaller than the overall growth of the market, so that Microsoft overall has grown 150% during that same period of time.

    I think that signals success for Microsoft certainly even as it may show progress for Linux. Would you consider that to be a win-win situation?

  33. Clarence Moon, apologist for M$, wrote, “a constant story of product improvement”

    Let’s see. 3.1 crashed on context changes. ’95 crashed whenever it wanted to. ’98 crashed in minutes of use if memory ran out. XP crashed daily. Vista slowwweeed down basic file operations, just for the heck of it. “7” still gets malware that plagued 3.1.

    Yep. That’s a lot of improvement. The big problem that none of those fixed was that M$ is run by unethical salesmen and con-artists.

    Still losing share since about 2003, though.

  34. Clarence Moon says:

    The consumers I know are confident the logo means their computer will slow down through use, collect malware and die young.

    Not so much of that concern down here in the lower 48, Mr. Pogson. I guess the reason that the market does not behave as you predict is due to that different sort of overall experience.

    What people seem to notice instead is that the evolution of Windows from Version 3.0 through to Windows 7 today has been a constant story of product improvement. People value that sort of trait in a key supplier, I think, and will stick with them as long as progress is being made.

  35. Clarence Moon wrote, “consumers are confident that the MS logo on the machine means that it will perform as they expect.”

    The consumers I know are confident the logo means their computer will slow down through use, collect malware and die young.

    There is lots of data to suggest people are browsing the web with their smartphones and using their desktop/notebook PCs less. On stats.wikimedia.org you will find iPhone=5.47%, Android=2.87% and BlackBerry=0.71%, so those three, which are mostly used on smartphones, total 9.05%, and there is still lots of room for growth in established and emerging markets. People love small cheap computers. If only the bandwidth were free…

  36. Clarence Moon says:

    Computer users who want nothing beyond what comes with the machine may very well opt for Linux at a lower price. Why don’t you go into the business of making them and see?

    Meanwhile others who do use additional software shy away from the idea of getting stuck with something that may not be compatible. Perhaps it is only a matter of education, but that, too, is rather expensive. Microsoft has already paid the costs of consumer education and consumers are confident that the MS logo on the machine means that it will perform as they expect.

    If you are going to match or exceed those expectations, you, too will have to pay the cost of consumer education. If you do it on a shoestring, as you suggest, by starting with rural schools that have to take what is offered, it will take quite a while for FLOSS OS to catch up to the commercial versions.

    Maybe a universal acceptance of Android will help. I cannot say for sure since my own opinion is that consumers treat phones, tablets, and PCs as three pretty much separate and independent objects, each with its own use and expectations of what it can do and how it does it.

  37. Clarence Moon wrote, “customers do want to be able to use their previously acquired software on future purchases”

    Not if the software comes with the PC. Almost every PC comes with an OS, a browser and some player these days and many users use nothing more than that.

    Clarence Moon wrote, “At the going rate of about $100 for a terabyte of disk storage, a gigabyte of bloat only costs a dime, though, and the argument becomes rather inconsequential. “

    Bloat costs a lot more than that: errors, bugs and vulnerabilities per thousand lines of code become a huge problem with that other OS. Debian GNU/Linux has fewer of those in its whole repository than M$ has in IE and the OS.

    Bloat also slows execution, because more has to be done to get the results the user demands, and more bloat means more seeks to access any software.

  38. Clarence Moon says:

    That is all true, Mr. Pogson, but it is also true that customers do want to be able to use their previously acquired software on future purchases. So there is a trade-off that has to be made. Microsoft chooses, as do many others, to maintain backward compatibility in spite of the downside.

    Perhaps Linux has an operational advantage there although I do not know for sure just what they do to make or break compatibility with past versions.

    “Bloat” is a term used quite often, I have noticed, by those who disparage Windows. It seems to mean that the feature under discussion, whatever it might be, uses “excessive” system resources in terms of memory and CPU cycles to achieve “unnecessary” results. At the going rate of about $100 for a terabyte of disk storage, a gigabyte of bloat only costs a dime, though, and the argument becomes rather inconsequential. As I type this, I looked into my Windows Task Manager and see the CPU Usage graph nailed to zero, so that doesn’t seem to be costing me either. My RAM usage seems glued to 1.5GB, too, and it says that I have 3GB available. (I’m home today and using my 32 bit workstation).

  39. Clarence Moon wrote, “Microsoft has moved forward with vastly improved products since the 3.x days and has managed to provide runtime environments for the newer releases of Windows that accommodate running old applications.”

    Keeping DOS around until about 2000 brought untold misery to end-users.

    Keeping runtime environments for obsolete stuff is a recipe for disaster when it comes to malware and bloat. That other OS is a huge target for malware not only for the number of installations but the number of holes in each installation.

  40. ch wrote, “You couldn’t run a real Unix on an 8088 or even 80286″.

    That’s true, but DOS was a poor man’s clone of UNIX, which for many purposes did what UNIX did. There were many DOS-like OS around that time and GNU/Linux when it came along could easily have substituted for any of them without a GUI.

    Much UNIX ran on PDP11s and the like that had just a few KB of memory and did not require much in the way of memory management.

  41. ch says:

    “All the while the world could have been independent of Wintel by using some UNIX OS.”

    Sorry, but no. You couldn’t run a real Unix on an 8088 or even 80286 (no memory management), and Xenix therefore didn’t really cut it. So in the ’80s and early ’90s, Unix required more expensive HW than DOS and Windows. With 386 CPUs, you could have used SCO Unix, but it was way more expensive than even WinNT and had few desktop apps. Linux only became somewhat viable around 2000; back in ’95 it was definitely not ready for average users. (Does anyone remember when Star Office became available for Linux?)

    @Clarence:

    “I understand that old 16 bit apps actually will run under the 32 bit versions of Windows 7″

    I’ve run the Win3.x-version of the game “Colonization” and some other 16bit stuff on the 32bit-version of the Win8 preview (in a VM). Since I have moved my machine to Win7 64bit I have created a VM with Win95 for the old stuff – and man, does it boot fast ;-)

  42. Clarence Moon says:

    It still is but M$ has never done that otherwise all their end users would be using Lose 3.1

    I don’t see the logic in that, Mr. Pogson. Microsoft has moved forward with vastly improved products since the 3.x days and has managed to provide runtime environments for the newer releases of Windows that accommodate running old applications. I don’t have anything handy from that era anymore to try, but I understand that old 16 bit apps actually will run under the 32 bit versions of Windows 7 and that you can use Virtual PC to run a Win 3.1 environment for the 64 bit versions in a pinch.

    You seem to have a very jaded view of business in general, but I do not see any evidence of any actual actions that could be considered malicious.

    As to the vague problem you relate regarding drivers for older devices, isn’t that the task of the device supplier? If I were selling $14K printers, I would be supplying drivers for anything that the customer might ever want to hook up to them. Who is the manufacturer who is so lax?

    Even then, the easiest thing to do is keep an old PC around as a print server for them if all else fails. Be creative.

  43. Ivan says:

    Wait, so Google is your “Friend” in this paranoid rant, even though their business practices are as bad as Microsoft’s in the ’90s? It seems hypocritical to give Google a pass just because they use Linux.

  44. Clarence Moon wrote, “Taking care of one’s customers and protecting their investments in one’s products was something to be praised.”

    It still is but M$ has never done that otherwise all their end users would be using Lose 3.1. M$ took care of OEMs by giving them enough of the pie to live on, say, 2 to 10% margins, while M$ grew fat on 80% margins. M$ thus locked in OEMs to being dependent on M$ and the end users were dependent on the OEMs to function. All the while the world could have been independent of Wintel by using some UNIX OS. Early on there were no options because IBM led all its users to the slaughterhouse and then M$ bullied the OEMs once it had a monopoly.

    I know people who bought bunches of $14K printing systems who are stuck on XP and cannot use “7” because there are no drivers for “7”. That’s protecting customers, right? If M$ wanted to protect customers they would have used open standards for printing protocols but, no, the lock-in is deeper if you get consumers using something they cannot get from anyone else because it is proprietary. Then, all M$ has to do to get more money to roll in is to end support for the old system. That has worked until now when XP refuses to die and users of XP can move to GNU/Linux more easily than they can to “7”.

    Consumers’ investments in Wintel keep disappearing because Wintel needs stuff sent to the scrapyard in order to milk consumers for more “investments”. A licence for that other OS is not an investment. It’s a waste. Purchase of hardware that only works with that other OS is not an investment but a waste.

  45. Clarence Moon says:

    …who try to lock the world into doing things their way and paying for each iteration…

    I have always thought that it was odd for the Microsoft and other proprietary software opponents to harp on this issue. Essentially it is saying that people who use these products to their advantage are abused by the suppliers who continue to supply and extend their value by making things backward compatible for extended periods of time.

    In a kinder age, that sort of conduct was considered ideal. Taking care of one’s customers and protecting their investments in one’s products was something to be praised. Other, less benevolent corporations were often accused of deliberately making new, desirable products incompatible with old ones, forcing customers to invest heavily to obtain new product benefits. “Planned Obsolescence” was the term applied to that practice.

    Now, the FLOSS proponents have turned that philosophy around and essentially claim that those who prolong previous standards are acting out of malice, preventing competitors from having an opportunity to sell something different due to the continued availability of what the user already has come to rely on. By enhancing these products, vendors such as Microsoft eliminate any leverage their competitors might gain from having newer, more desirable function exclusivity.

    “LOCK IN!!!!” they cry, suggesting that people will surely someday be left in the lurch when Microsoft abandons the practice of backward compatibility. But that is just what they insist that Microsoft do.

    As I said, that seems rather odd to me.
