Robert Pogson

One man, closing all the windows.

Munich has Migrated the 9000th PC to GNU/Linux

  • Dec 19 / 2011
technology


This is my correction of Google’s translation of a Munich IT Blog post by Kirsten Böge, dated 14.12.2011:

The LiMux project is “on schedule” and “over quota”

The LiMux project

On 12.12.2011 the 9,000th PC workstation, in the Civil Engineering Department, was migrated to the new LiMux client. The LiMux project is thus proceeding faster than expected: 8,500 PC workstations had been planned by the end of 2011. Likewise, substantially all MS Office suites have been uninstalled, with a few authorized exceptions. The municipal government’s change-over from MS Office to OpenOffice.org 3.2.1 will take a bit longer, because in some cases the dependence on specialized procedures is still too great. But the more we succeed in running an equivalent open source application to replace or virtualize such a procedure, the sooner the last MS Office suites can be replaced by OpenOffice.org.

In 2012, the remaining 3,000 planned PCs will be migrated to the LiMux client and the migration completed. In addition, we shall continue to replace proprietary business applications with open source solutions.

Thanks to Google Translate and Dict. Even with that help, I had to guess in a couple of places. Corrections are welcome.

The meaning is clear: the end is in sight. It has been a long haul, but Munich will finally have a GNU/Linux system working for it instead of Munich working for M$. While there has been much cost and pain in the process, the future is forever and the benefits of switching to GNU/Linux, open standards and a more efficient organization will continue to roll in. If there is one lesson to learn from the process in Munich, it is that the sooner migration is started the better; otherwise you are just digging a deeper hole. While that other OS can form a basis for IT, it is an unstable one designed to bring profit to M$ above all else. With GNU/Linux, FLOSS and open standards, an organization has much more control over its destiny. Almost every “feature” that M$ created served to lock in Munich more strongly. They recognized that and took action.

Many of the delays were totally unnecessary and were triggered by friends of M$. Those people are not your friends. We see that in some of the commentators here: they come to distract, abuse and delay, not to contribute. In addition to normal due diligence back in 2003-2004, there was patent FUD, SCOG v. World (funded in part by M$), and public criticism based on nothing. The result was that the plan for migration was revised multiple times and a year-long testing period was added. This added costs with little or no improvement in performance. Some of the extra time was used to replace macros and forms, some to reorganize the IT structure, but the software on the clients would have run well from the beginning. Efficiency delayed is efficiency denied. A better plan would have phased in the changes: first change the applications to FLOSS wherever possible, then change the OS where the FLOSS applications permit (using virtualized applications for the rest), and then reorganize the whole IT system, which would have been easier with FLOSS and open standards throughout. Instead, requiring all the ducks to line up before taking the first step made everything difficult.

My recommendation is that such projects should be divided into reasonably sized chunks and done as soon as possible; otherwise the benefits are reduced. In particular, one can put the applications for some subset of machines on servers and replace those machines with thin clients much faster than by migrating them as thick clients. That gives immediate savings in power and cost of equipment and reduces the amount of software change by an order of magnitude.
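To put rough numbers on that, here is a small Python sketch. The wattages, electricity price and seat counts are assumptions for illustration only, not figures from this article; substitute your own.

```python
# Rough sketch of the power savings from converting one "chunk" of PCs to thin clients.
# All numbers below are assumptions for illustration, not measured figures.

def annual_power_cost(watts, hours_per_day, days_per_year, price_per_kwh=0.15):
    """Electricity cost of one device per year, in dollars."""
    return watts / 1000.0 * hours_per_day * days_per_year * price_per_kwh

seats = 100                               # size of the chunk being migrated
thick_watts, thin_watts = 150, 20         # assumed draw of a desktop vs. a thin client
server_watts, seats_per_server = 400, 50  # assumed terminal server shared by many seats

thick_cost = seats * annual_power_cost(thick_watts, 8, 250)
thin_cost = (seats * annual_power_cost(thin_watts, 8, 250)
             + seats / seats_per_server * annual_power_cost(server_watts, 24, 365))

print(f"thick clients: ${thick_cost:.0f}/year, thin clients + servers: ${thin_cost:.0f}/year")
```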

One thing that Munich did well was to resist M$’s salesmen. Those salesmen are the best and brightest M$ has to offer. Munich did not just look at the short-term costs but also at the long-term benefits. M$ will, in such cases, cut prices to zero if necessary, but the price of slavery is too high. The fact that they will cut prices to retain users is proof that their prices are too high and maintained by their near-monopoly. A monopoly can be a good thing if it is well regulated publicly, but M$ is an evil monopoly designed for no other purpose than to maximize the cost of IT, that is, revenue for M$. It is irrational to cling to M$’s operating systems or office suites or anything else as a platform for IT. Even if it works for you at some point in time, it will work to M$’s benefit and against your best interests. Munich was pressured by M$ to buy XP/2003 for no benefit whatsoever to Munich when Munich was reasonably satisfied with NT 4. M$ made NT 4 impossible for Munich to support, pulling the chair out from under Munich. M$ also made it difficult to migrate from NT 4 by providing lots of hooks in the software that were specific to M$. Those were layers of lock-in that Munich escaped.

Another thing is clear: if Munich, with all the lock-in that it had, was able to do it, anyone can. Smaller organizations usually have less lock-in. Individuals often have no lock-in whatsoever because they mostly use the browser and a few generic applications, often FLOSS already (Firefox, OpenOffice.org). Don’t worry about the initial cost of migration or seemingly insurmountable problems. Those difficulties pale in comparison to trudging on the Wintel treadmill forever.

Consider the following table; a short calculation reproducing it follows below. Imagine how steep the rise in the cost of using M$ would be if the organization had a growing number of PCs. The most benefit from changing to FLOSS is obtained by changing as soon as possible. Essentially, it costs twice as much to own a PC running that other OS as one running Debian GNU/Linux. This does not include the cost of maintenance, lock-in, malware and an office suite, all of which are less with Debian GNU/Linux.

Cost of IT with and without TOS (5-year refresh cycle for TOS, 8-year for Debian GNU/Linux)

                      M$ ($)   Debian ($)   Cumulative M$-Debian ($)
Year  1  Acquisition     500          400                        100
Year  5  Refresh         500                                     600
Year  8  Refresh                      400                        200
Year 10  Refresh         500                                     700
Year 15  Refresh         500                                    1200
Year 16  Refresh                      400                        800
Year 20  Refresh         500                                    1300
Year 24  Refresh                      400                        900
Year 25  Refresh         500                                    1400
Year 30  Refresh         500                                    1900
Year 32  Refresh                      400                       1500
Year 35  Refresh         500                                    2000
Year 40  Refresh         500          400                       2100
Totals                  4500         2400
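For anyone who wants to check the arithmetic, here is a short Python sketch that reproduces the table: one purchase in year 1, then a refresh every 5 years at $500 for that other OS and every 8 years at $400 for Debian GNU/Linux, over a 40-year horizon.

```python
# Reproduces the cumulative costs in the table above (per single PC seat).
def purchase_years(cycle_years, horizon=40):
    # Acquisition in year 1, then a refresh at every multiple of the cycle.
    return [1] + list(range(cycle_years, horizon + 1, cycle_years))

def total_cost(unit_price, cycle_years, horizon=40):
    return unit_price * len(purchase_years(cycle_years, horizon))

ms_total = total_cost(500, 5)       # $4500, as in the M$ column
debian_total = total_cost(400, 8)   # $2400, as in the Debian column
print(ms_total, debian_total, ms_total - debian_total)  # 4500 2400 2100
```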

69 Comments

  1. oiaohm

    Robert Pogson you are correct and incorrect.

    Windows 3.1 did not do networking, so your thought about it is correct.
    Windows 3.11, Windows for Workgroups, did do networking. DOS over a network also worked very well back then, and still does for anyone who wants to boot DOS by PXE. Since Windows 3.11 ran on DOS for all disk operations, it was no problem running it over the network. A scary little fact: Windows for Workgroups would network peer-to-peer across over 100 computers.

    Windows 95 was very crippled in networking; it was the first with the 10-machine limit. In Windows 7 they have expanded that to a theoretical 20, but they are still limiting the OS massively.

    Yes, in 1995 we took a very different turn, and I still suspect for the worse. Windows 3.11 was able to run as a thick client without issues, where processing was done client side and disk storage was all server side. If Windows 95 had kept going down the path of network boot, virus control would have been way simpler.

    Yes, I did miss the last upgrade to the 1994 machines, now that I have checked the logs: firmware replacement on the network cards to bring them up to modern standards, so they do current-day PXE. That makes them nice to handle. It was done just before their end of life as general thick-client workstations and before they started life as thin-client workstations, only just, mind you, in 2003. Before PXE they had a Novell form of network boot, NwDsk-related evil. Yes, the central Windows 3.11 server was Novell.

    Yes I have a long history working with machines.

  2. Robert Pogson

    oiaohm wrote, “network boot from the get-go because they were running Windows 3.11 from a central server in their prior life”

    Network booting did not become popular around here until about 2000 when Intel standardized PXE. I thought Lose 3.1 knew nothing of networks. In those days I used a telephone modem…

  3. oiaohm

    Robert Pogson, most of my really old machines are ex-CAD workstations: 1994-vintage 486 DX4 100 MHz boxes with 128 MB of RAM, upgraded from 10BASE to 100BASE networking in their first 10 years of operational life. So, other than the CPU, they are overall decent for light thin-client work.

    The nice bit is that they were all network boot from the get-go because they were running Windows 3.11 from a central server in their prior life. Max life is 20 years and those will hit it. Their role is stores computer; the interface is very light.

    Of course, the machine you were talking about was only 10BASE; yes, that would have been high on my crush list, since it had failed to be upgraded to a decent network speed in its first 10 years, unless I had a text-based terminal role for it.

    I am more likely to keep computers now reaching 10 years old than ones like the 1994 486 DX machines. Messing around with a floppy to make a machine boot is seriously too time-costly.

    Basically, I have strict rules on how much time I put into the old machines. A machine like the one you described, Robert Pogson, which was not already configured for network boot and did not have a network card that could be made to network boot, would have said hello to the crusher as soon as I found it.

    The Pentium II in 1998 was the first Intel chip to cross the 400 MHz range. Pentium-class computers were the first to bring in on-board network cards where removing the CMOS battery makes them network boot. Basically, that is machines up to 12 years old all ready to be thin clients, pretty much as-is for most of them.

    Most of those old Pentium IIs were fitted with more than 128 MB from new. Yes, they have USB 1.1: not fast USB, but still USB, good enough to interface with most USB devices required.

    This is the point where I noticed an increase in the number of machines that were not getting crushed. The Pentium II is where I became clearly aware that 20-year-old machines could become more common. Yes, a 2008 wake-up call.

    This is the pattern I have seen:
    1990-1994 hardware needed a minimum of 64 MB at 10 years old to avoid the crusher. At the point of conversion, 2000-2004 (a long time ago now), 128 MB was not seen as a requirement for thin clients.
    Considering most of these machines barely shipped with 16 MB of RAM when new, the crush rate was almost 100 percent.

    From 1994 to 1998, hardware required a minimum of 128 MB installed, converted in 2004-2008.
    The crush rate slowed here, but survival was still the exception to the rule: you are still looking at 70 percent or more of the total machines crushed.

    1998-2000 hardware has a minimum of 256 MB, with a preference for 512 MB, to avoid the crusher when 2008-2010 old.
    This changes to only the dead machines and those with awkward BIOS versions being crushed. This is the line in the sand that oldman and others have missed. At this point, recycling to thin clients is fairly straightforward and cost-effective.

    Older than 1998, if the machine is not already configured for network boot for some reason, it is most likely not worth working on.

    From 2000 on, it is 512 MB to avoid the crusher, and I suspect that should be pushed to 1 GB.
    Basically, from here on it only gets better.

    I only have these numbers because we have been running a recycle-to-thin-clients program here for a long time.

    PCs have evolved faster than what humans need as an interface. The interface requirement was crossed with 1998 hardware.

    Yes, the stuff crossing 20 years is normally not worth anything other than crushing, so a lot of the 64 MB machines are drifting into scrap.

    I would not be working on a 15-year-old machine. The last hardware change is at 10 years old. If it is not up to its role past that point, it is crushed or changes role; I do not even open the box. Reliability will be an issue in that 15-year-old machine you worked on, Robert Pogson.

    The do-not-touch rule past 10 years old is purely about reliability: it is too easy to crack something and make the machine unstable, and there is no point having an unstable machine. Basically, a 10-year-plus computer is treated like an appliance here: either it is up to a job or it is crushed.

    Some of the 64 MB machines have some odd uses; some places only require text-based interfaces.

    I have rules I am working to. They have shown a difference.

    Kozmcrae, you are forgetting how old the 15-pin VGA connector is: 1987. So yes, a lot of computers that are 20 years old can drive a modern-day LCD. Heck, a lot of resolutions on LCD screens today are lower than the 25-inch CRT screens used in CAD work from 1992.

    1990 brought XGA (1024×768), quickly followed by other sizes. By about 1994 we had basically hit maximum screen resolution.

    This is the thing: screens have basically stood still in resolution for 20+ years. The only major changes have been the move to widescreen and increased GPU processing.

    Also, due to the fast rate of change from 1994 on, most cards have programmable resolutions, because they could not predict what resolution screen they might be connected to.

    Seriously, look at the back of most LCD screens: 15-pin VGA is still there. So changing over the CRTs does not require touching the rest of the computer, at least for most of the machines Munich has, assuming they were updated after 1994.

  4. Kozmcrae

    CRTs cost $15 USD to recycle where I live. Some low-lifes toss them in the woods. I don’t know about LCDs. I suppose they’re loaded with nasty stuff too.

    For a municipality like Munich there must be a crossing point where the cost of recycling a CRT meets the savings a new LCD would bring. Either way with GNU/Linux and open source they are free to make that decision based on hardware alone and not on license fees and forced upgrades.

  5. Clarence Moon

    All of this borders on the useless, I think. It is fine to have a hobby, of course, and bringing antique computers back to life is as interesting a pastime as anything. But that is all that it is.

    In my home I have, as I count them, 10 individual computers, all of which are, I believe, as functional as they ever were. Several are in daily use, namely my wife’s Dell all-in-one, my desktop workstation, and my “good” laptop.

    One (an HP Slimline Media PC) is still hooked up to the family room TV, where I once used it for Netflix, but then I got a Blu-ray DVD player with WiFi that I use instead. Another, a netbook, is in its bag and hasn’t been used except on rare occasions for nearly a year, since I got the new Dell laptop. Four more are sitting in a storage closet, unused for years except for one Compaq that I loaded with Ubuntu Linux last spring to see what the fuss was all about. These are the predecessors of my “home office” machines or my wife’s. One last machine is a Dell Latitude notebook with XP that works just fine but that I just don’t have any more use for.

    Your post about using an old computer as a “door stop” got me wondering about the whole idea. I don’t toss the old computers out because they still work. I would give them away, but no one wants them. Charities around here only take newer computers with LCD monitors. I’m a little reluctant to just scrap them anyway, since data on the hard drives relates to years of personal finances and email. I read tales of how malicious folk have used these things against citizens like myself.

    Now I have added a Kindle Fire and an e-Ink Nook Touch, not to mention my cell phones.

    Where will it all lead?

  6. Robert Pogson

    The oldest PC on which I have installed GNU/Linux was 15 years old. Last year, I had two dead machines of that age. They were the last relics of a previous roll-out of IT from the Dark Ages. I managed to get one machine working by swapping parts. Installation was a dog because the video card was primitive and there was no CD drive nor USB ports and not much RAM. I had to work a bit to get a floppy to boot a network installation.

    Based on performance, the box was past end of life. Based on price it was too. It took many hours to get less performance than a newer donation. Even as a thin client it sucked because of tiny RAM and 10 megabits/s NIC. The CPU was an order of magnitude too slow as well. Still, it could have found some role in IT: a doorstop, perhaps, or a DHCP server. Reliability would have been an issue as a DHCP server. No ordinary GNU/Linux distro would install easily on it.

    So, I doubt any PC more than 10 years old could be said to be viable. There are some features that are just too important for most uses: RAM, networking, and USB for instance. It’s even hard to get IDE drives these days. As a thin client, if it PXE boots and has more than 64MB RAM I could go a few years more than 10.

    One should consider what features an older machine lacks before deciding on obsolescence. Speed of CPU is not that critical if it’s more than 400MHz. 100 Mbit/s networking should be a minimum requirement. Even LTSP likes more than 64MB since version 5. 1024×768 is still useful and that’s almost 20 years old. Humans don’t evolve as fast as PCs, so PCs can be useful a long time. Newer machines with SSDs and no fans could well be useful for 20 years.
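    Here is a minimal Python sketch encoding those rules of thumb; the thresholds are the ones just listed, and the class and function names are only illustrative, not any real tooling.

    ```python
    from dataclasses import dataclass

    @dataclass
    class OldBox:
        ram_mb: int      # installed RAM
        nic_mbps: int    # network speed
        cpu_mhz: int     # CPU clock
        pxe_boot: bool   # can it network boot?

    def viable_thin_client(box: OldBox) -> bool:
        """Rule of thumb from the paragraph above: PXE boot, more than 64 MB RAM,
        100 Mbit/s networking and a CPU of at least 400 MHz."""
        return (box.pxe_boot and box.ram_mb > 64
                and box.nic_mbps >= 100 and box.cpu_mhz >= 400)

    # The 15-year-old relic described above fails; a typical 10-year-old box passes.
    print(viable_thin_client(OldBox(32, 10, 200, False)))    # False
    print(viable_thin_client(OldBox(256, 100, 450, True)))   # True
    ```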

  7. oiaohm

    oldman

    “That is in the end your opinion sir, nothing more.”

    Not at all. The true useful life of an item in an assessment is a testable thing. Yes, useful life is based on the ideas of science: it must be testable and must not be an opinion, or your useful-life idea is worthless. Basically, you should be able to demonstrate that the item is past the end of its useful life for the role it is being asked to perform.

    If a useful-life assessment is just an opinion, the person you are talking to is an idiot. There are metrics you create to test whether something is still useful to your current business.

    Of course, different usage has different metrics. What I was pointing out was that a broad range of metrics, depending on where the computer goes, determines what its useful life is.

    This is why an item that is junk to one person, because it is past its useful life, is a viable item to another. Different workloads, different requirements, different points at which the item turns to junk.

    I live in the real world, oldman; I accept that metrics change based on requirements.

    The problem is that what you are stating is an opinion, oldman. Can you demonstrate that the hardware you would junk cannot perform a useful task for someone? No, you cannot. That is proof that what you have is an opinion, not a proper assessment that something is junk.

  8. Kozmcrae

    “My point was not that one does what one has to in a bad situation, it was that one should not have to make do at all.”

    However “one” comes to use GNU/Linux is not as important as the fact that they are leaving Microsoft. The London Stock Exchange chose Linux because it delivered the goods where Microsoft failed utterly. It wasn’t a “hand out”.

    It would not make sense to purchase new computers to put GNU/Linux on if the old ones would do just fine. Put the money to better use. That, in the end sir, makes perfect sense.

  9. oldman

    “The problem here is that what you call junk has a useful life.”

    That is in the end your opinion sir, nothing more.

  10. oiaohm

    Even dead junk computers can have a value: computer hardware diagnostic classes.

    Again, it is about the right educational usage. Junk is not junk until it doesn’t work any more.

  11. oiaohm

    oldman
    “My point was not that one does what one has to in a bad situation, it was that one should not have to make do at all. Your students deserved properly provisioned and funded IT, not handouts. They were not given it, and only received it by accident because of your unique combination of skills.

    That is education denied.”

    If I were to give you the exact budget to provide students with properly provisioned and funded IT, with no handouts, you would most likely fail.

    Waste not, want not, oldman. Exact budgets don’t happen.

    A lot of schools are education denied because they acquire PCs and then don’t use them for their full life. I was at a rural school. When Pentium machines came in, the old XTs did not go to the tip.

    They might have green and horrid screens, but they were more than good enough for teaching typing. Yes, those old XTs are still used for that today. Sperry XT computers.

    Microbee computers from the high school ended up in the primary school to teach basic programming. Everything was run until it died.

    Better usage of the hardware, more creative usage of the hardware, more value in teaching achieved.

    oldman:
    “Why not donate computers that they can actually use instead of junk that locks them into FOSS mediocrity.

    Sending junk to people who deserve better is not being helpful.”

    The problem here is that what you call junk has a useful life.

    It is a multiplier that lets a few good computers do more work and be more productive.

    If they still have their CD-ROM drives in them, nothing stops a live CD being inserted into the machines either.

    Schools have many computer roles to fill, and not all of them have to be done by the latest and greatest.

    Just as you talk about the applications, oldman, I talk about the jobs. If an old machine does the jobs required as well as a new machine, you might as well use the old one. This applies a lot in schools.

  12. oiaohm

    Clarence Moon
    “Meet JeffM = Telic = Oiaohm = Peter Dolding.”
    I will correct something here.

    I have posted in COLA, but it was as Oiaohm; to be correct, as Peter Dolding from an oiaohm account in the last 10 years. There are only 3 posts, mostly correcting people on who they thought I was.

    All posts in COLA prior to 2000 have always been as Peter Dolding, all the way back to 1994 when I first got my own internet access; before that my stuff only appeared on local BBS servers. I just don’t post often in COLA. My last nym change was from Peter Dolding to oiaohm.

    JeffM and Telic for some reason have English styles like mine, but they are not me. If you read them enough you will notice neither makes dyslexia-based mistakes. Yes, dyslexia is something I cannot remove.

    I have used oiaohm for over 10 years everywhere. My handle before oiaohm was not unique; it was always “dark” something. oiaohm is a joke: Ok I Am Over Here Mate = oiaohm. To be correct, it is more of an Australian radio prank: “Ok, I am over here, mate. Where are you?” Yes, you can do that as a looping prank to annoy the hell out of anyone asking where you are instead of using triangulation. The handle dates back to when I used to play MUDs online; I would go from MUD to MUD and not be able to log in, so I made something unique.

    All my MUD and online gaming handles are acronyms. Yes, you will find me from time to time playing Xonotic as oiaohm.

    To be even funnier, JeffM is a USA citizen and I am Australian. I don’t know who Telic is at all; I have never seen any posts by him. The very IRC channel they mention I hang out in is not my only one. JeffM and I have had a dispute in there, of all things. It might be shocking, but I believe in the idea of the GPL and JeffM believes in the idea of BSD; we don’t get along at all.

    If you had read the posts in my blog you would find that I am not 100 percent pro-FOSS.

    Basically, what you just found was another case of MS trolls bundling what I have done with other people who are not as competent, to attempt to undermine me.

    Basically, I know more about the anti-Microsoft people than most. Yes, they like to call me nuts because I am logical. I am hard to defeat, yes.

  13. oldman

    “Working PCs are not junk. ”

    Beggars can not be choosers, eh Pog?

    My point was not that one does what one has to in a bad situation, it was that one should not have to make do at all. Your students deserved properly provisioned and funded IT, not handouts. They were not given it, and only received it by accident because of your unique combination of skills.

    That is education denied.

  14. Robert Pogson

    oldman wrote, “Sending junk to people who deserve better is not being helpful.”

    Working PCs are not junk. Many schools still have a student:PC ratio higher than 3:1, and any PC is very helpful. Putting them to work as thin clients is amazing to most students because they get superior performance on old hardware compared to a brand new PC (typically with a single hard drive). Responsiveness to clicks is helpful. Last year, students were far better off using 8-year-old thin clients than a new PC or nothing; nothing is what they had before I arrived. Education delayed is education denied.

  15. Clarence Moon

    I became curious as to what “oiaohm” might mean, that is, is it a word used in Australian slang? I didn’t find any such thing, but I got tons of hits for Mr. Oiaohm on Google in regard to his posting elsewhere. Apparently he is rather prolific and has gotten attention far and wide.

    He has/had his own blog, and posts in a number of venues such as something called “boycott Novell” and “tech rights”. Some synopses are found at:

    http://us.generation-nt.com/answer/meet-jeffm-telic-oiaohm-peter-dolding-help-204554431.html

    http://www.itwire.com/opinion-and-analysis/fuzzy-logic/22130-linux-haters-redux-dead-long-live-oiaohm

    I did not realize that he was so famous.

  16. oldman

    “Sending them down the road to a school to be used as thin clients for another 10 years is being helpful.”

    Why not donate computers that they can actually use instead of junk that locks them into FOSS mediocrity.

    Sending junk to people who deserve better is not being helpful.

  17. oiaohm

    Clarence Moon, this is where you are wrong.

    The one thing that may see the end of the old machines before 20 years is in fact power efficiency.

    When the day comes that you can buy a new ARM thin client for the price of the metal in the old machine, then life will shorten very quickly for the old systems: lower power cost at no cost to the budget.

    I am not past buying new, Clarence Moon. Note that none of my machines go to landfill. They do get crushed to recover their metal value.

    oldman:
    “Meh, it’s still crap after 8 years no matter what you say sir.”
    Do you run thin clients in any areas? I have some areas that are known hazards to computers, so you don’t put good computers worth money in there. Yes, places where they can catch fire due to the fragments of material that stack up in and around them, or get crushed by a truck slightly off course.

    Yes, I have some very nasty locations to place computers. You don’t place full PCs at those locations; it is not common sense.

    If you are not running a thin-client network, I would agree with the idea of planning replacement at 8 years, so the machines can be transported to be used in a thin-client network run by someone else. At 8 years the PCBs still have enough health in them to move.

    My life frames are a guide, oldman. For example, there is no point giving schools machines that are 10+ years old, because they will most likely have more duds than working units, so it is a gift that wastes their time. It is nicer to give something that will survive the transport and live.

    Of course, nothing says the machines you have at 8 years could not be converted to thin clients for schools or other locations, oldman. Even better, you don’t have to pay for disposal.

    This is reduce, reuse and recycle. Particularly reuse.

    20 years is the max life you can expect, but max life is not achieved in all usage cases. Knowing that the machine spends its last 10 years as a thin client enables it to be got rid of the right way. The batteries on the motherboard do have to be removed at 10 years so they don’t leak.

    There are lessons from those of us who do use stuff for its full length, oldman, on what is worth giving as a gift, how it should be given, and how far you should consider sending it. Loading machines on a ship and sending them to an overseas country? Forget it; you are doing no one any good.

    Sending them down the road to a school to be used as thin clients for another 10 years is being helpful, since it could give their students seats at a computer they would not otherwise have had.

    oldman, if you do have a thin-client network, I would be wondering why you are wasting perfectly good thin clients.

  18. Clarence Moon

    I have to laugh at the extremes that these discussions go to in terms of philosophy and policy. If one says that the newest Windows microwave is convenient and power efficient, Mr. Oiaohm goes into a discussion of how it is more economical to build a fire by rubbing two sticks together to facilitate heating one’s jolly jumbuck. He will throw in some anecdote about a failure 15 years ago in their toaster oven as well, along with a tirade against the ecological waste inherent in creating heat with resistive wire.

    Meanwhile, the new microwaves sell like hotcakes.

  19. oldman

    “This is the difference: I know the hardware and I have designed the plan around its weaknesses, so I make the most out of the hardware.”

    Meh, it’s still crap after 8 years no matter what you say sir.

    End of story.

  20. oiaohm

    oldman
    “But the simple reality is that mainstream computing equipment is, by most people’s estimate, junk long before your 20-year life. This is especially true if the equipment was originally purchased on the cheap instead of being future-proofed.”

    Even some of the cheap stuff from 20 years ago is still going strong. Heck, I have 30+ year old Microbees for people to play with that still go strong; they have run every day for 30 years, started life in a school and now live in a museum.

    Time weeds out the weak, oldman. It doesn’t matter if it was the cheapest or the most expensive machine of the time. Some of the most expensive from 1995 you will not find in my networks: they are dead. The reason is that they were overheating parts due to the higher clock speeds they were running at.

    More often than not, the future-proofed machines are the cheaper computers that are not pushing their chipsets to the limits, so they are future-proof by virtue of being cheap. Basically, the more penny-pinching the place is, the more likely the machine will run for 20 years.

    oldman, I build specialized ruggedized custom-built computers. Yes, most of what you use in them is cheap parts. If it is new, expensive and fast, it is most likely going to be dead before it gets to 6 years old.

    To get 20-year-old machines you don’t have to do anything special in the acquisition process other than being cheap. It is about letting the machines run out their natural life.

    A Linux thin-client setup, if you are running thin clients, is not that much extra.

    Basically, what those of us out rural do is nothing special in acquisition. “Future-proofed” purchasing is not done.

    If you buy a big, powerful, top-of-the-line computer on the idea that it will be future-proofed, that is the kind of machine that will not normally reach the 20-year mark. It will be lucky to reach the 10-year mark still running.

    Yes, shocking, right? The ideal places to run machines out to 20 years are places that buy cheap computers.

    This is wrong, oldman:
    “But the simple reality is that mainstream computing equipment is, by most people’s estimate, junk long before your 20-year life.”
    Note what I said about PCBs. The early end of life normally comes from moving computers large distances when they can no longer tolerate it.

    An estimate of about 11 to 12 years is right with the movement that happens due to turnover ideas that are wrong.

    The estimate you are using is general. It is not taking into account the facts that alter the life of computers. This is the difference: I know the hardware and I have designed the plan around its weaknesses, so I make the most out of the hardware.

    Clarence Moon
    “But in most of the world, it is not efficient to spend so much time trying to resurrect old equipment or keep it running beyond its economical life.”

    I am not spending any more time on the machines than is covered by the value of the hardware I remove at each stage. It takes just as long to configure a 10-year-old machine to be a thin client as it does to configure a new thin client, and in some cases less with the 10-year-old machine.

    Hard-drive removal and destruction has to happen at the end of the machine’s life as a PC anyhow, for data protection. It is a minor extra step in the destruction process. It is getting simpler because when they put network cards on the motherboard, they put network boot in there too; even better, it is the default state on most.

    How to turn a current-day 10-year-old computer with an on-board network card (which almost all have; the few percent that don’t, I crush) into a thin client: remove the hard drive, remove the CMOS battery. Done; no other setting is required on the thin client. Of course, you have the thin client’s time set from the server anyhow. And if it doesn’t work that way, crush it.

    The older ones, requiring firmware added to the network card, were trickier. Recycling into thin clients has become a very cheap process. Yes, Linux thin-client setups have generic images that cope with most hardware straight up.

    Again, I am brutal: if it doesn’t work with the standard thin-client image, it is crushed.

    Really, what is the difference between ripping out the hard drive and ripping out the battery and hard drive at the same time? You have to remove the battery before you feed the machine into the crusher anyway; the acid in batteries doesn’t do the crusher’s teeth much good.

    The only extra thing is plugging it back in after stripping to see if it works. In fact, the stripping process can be done at the machine’s desk, avoiding the unplugging.

    Hard drives go through a finer-grade shredding process.

    The metals and materials recovered from the hard drives and the batteries pay for this operation. Any thin-client candidate that does not make it is pure profit.

    Clarence Moon, does a computer lose all its value because it stops running? No, it does not.

    This is the simple point: I am not doing anything that is not profitable, either profitable because of what I will be paid for the metal or profitable due to the extended life of the machine.

    Really, will a computer that is 10 years old lose any more sale value? The answer is no; the most profitable way to sell it is for its raw metal. So every extra year it runs is not costing the company anything on the final sale price of the machine; it is just a delay before it meets the crusher. Yes, each one of the thin clients being crushed is profit.

    Yes, we are running a very closed loop.

    Note the image I use to run the 20-year-old machines is the same image I use to provide backup thin-client services on new machines. There is no extra labour at all keeping the old machines alive from a software point of view.

    On the hardware side, it is all paid for by the hardware itself.

    Toledo, Ohio could apply what we are doing without issues. Yes, modernish hardware (just coming up to, or just past, 10 years old) makes cheap what used to be hard. The custom network-boot firmware for network cards is what used to make the process unprofitable: too much mucking around unless you were stuck. Yes, some of the pre-2000 hardware is a pain. Heck, some of that hardware doesn’t know how to boot from a CD-ROM drive, let alone understand the idea of network boot.

    Yes, the statement that time moves on is true.

    The time of computers that are hard to recycle into thin clients is basically over. This does change the whole economical-life factor.

    Yes, we had reasons to try harder to make old hardware work. Yes, some of what we used to do was unprofitable but forced by location. That has changed. The problem is the city slickers have missed the hardware change. A motherboard with an embedded network card that can network boot from the get-go without issues equals a fast-to-deploy thin client. That is an insanely cheap conversion process, in fact a profitable one due to metal reclaim.

    Yes, by the end of their time as thin clients the computers are perfectly ready to go into the crusher to end their life when they show an issue.

    I have a life cycle the computers are following.

  21. oldman

    “The object of the game is education not entertainment. Video is great for entertainment but marginal for education. It’s a matter of bandwidth alright, the bandwidth of the visual cortex. Information overload is not educational.”

    I know of quite a few high-powered educators who work at my place of employment who would disagree with you, and their views will carry more weight. But even if I concede the point, it does not make the limitations of the modernized dumb terminal that is the thin client go away.

  22. Robert Pogson

    Schools have other resources for movies: televisions and projectors. Some schools block YouTube and other sites even with thick clients. The object of the game is education not entertainment. Video is great for entertainment but marginal for education. It’s a matter of bandwidth alright, the bandwidth of the visual cortex. Information overload is not educational.

  23. oldman

    “I have done that hundreds of times and it never ceases to amaze users that they get better performance from the same old hardware than new thick clients with a single hard drive.”

    Did you show them how well it streams movies Pog?

  24. Robert Pogson

    A ten year old thin client can still show the pix and send the clicks to a brand new state-of-the-art GNU/Linux terminal server. I have done that hundreds of times and it never ceases to amaze users that they get better performance from the same old hardware than new thick clients with a single hard drive.

  25. oldman

    “There is no advantage to polluting Earth just to enrich M$ and Intel.”

    Once again, you keep ignoring the reality of computing, which is that it does not stand still, that software eventually becomes obsolete, and that its replacements generally take more resources. Eventually even the most future-proofed computer becomes incapable of keeping up.

    The fact that you personally, Pog, will be perfectly happy using the same software with the same level of function and features forever really has nothing to do with the rest of the world’s computing practice.

    Time moves on Pog, get over it!

  26. oldman

    “Like having a 20-year-old machine somewhere should not be 100 percent strange.”

    If we are talking about the specialized ruggedized custom-built computers that ran the US space shuttles for close to 30 years, then I might accept this. But the simple reality is that mainstream computing equipment is, by most people’s estimate, junk long before your 20-year life. This is especially true if the equipment was originally purchased on the cheap instead of being future-proofed.

  27. Robert Pogson

    Clarence Moon wrote, “its economical life”.

    A few years ago, that meant 3 years after purchase. People were replacing perfectly good idling CPUs with newer perfectly good idling CPUs for no increase in performance, except for faster hard drives. They could have changed hard drives in 20 minutes or so. Also, they were replacing single drive boxes with single drive boxes, when they would get huge performance increases by converting the old box to be a thin client and using the faster drives on the server. I have done that many times. The CPU in the old box is not the problem. The hard drive can be easily replaced, yet people were changing the whole box every 3 years. Now they do it every 6 years or they convert the old boxes to thin clients. Prevailing wisdom drifts.

    The argument is made that hard drives, fans and PSUs age, but one can use parts with 100K-hour MTBF, so that’s another lie. A year of 24×7 operation is less than 10K hours.
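    A quick back-of-envelope check of that arithmetic:

    ```python
    # A part rated at 100,000 hours MTBF versus continuous 24x7 operation.
    hours_per_year = 365 * 24        # 8,760 hours, i.e. "less than 10K hours"
    mtbf_hours = 100_000
    print(hours_per_year)                          # 8760
    print(round(mtbf_hours / hours_per_year, 1))   # ~11.4 years between failures, on average
    ```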

    “its economical life” was a lie invented to stimulate Wintel, nothing more.

    Even if using older equipment had an economic cost it also delays recycling, a plus for the environment, and encourages local workers, a plus for the economy. There is no advantage to polluting Earth just to enrich M$ and Intel.

  28. Clarence Moon

    “What works in Australia may well work in other parts”

    If you replicate the conditions that affect the determination of “works”, that may be true. But in most of the world, it is not efficient to spend so much time trying to resurrect old equipment or keep it running beyond its economical life. If you have a lot of time to waste and/or your time is essentially free, then you can make a case for fussing with things and installing Linux to get the last gasp of use out of an old machine.

    But if you have a successful business to run, that is a poor use of your time and it is not the way that most of the world operates. Experiences gained in rural Australia, even if they are true and there is nothing to show that is definitely the case, are not applicable to downtown Toledo, Ohio, and it is wrong to try to force them to fit.

  29. oiaohm

    oldman, really, how does what I do compromise productivity?

    Every time someone says FOSS on a PC, you say it has to compromise productivity. This is not the case.

    It lowers cost. Nothing says that the machines turned into Linux thin clients cannot be serving applications from Windows Terminal Services or the equivalent.

    Basically, all I have done is produce less waste by using the lifespan the hardware is capable of and using what those machines are still able to do to modern-day standards.

    The most important lesson you can take from those of us who work in what you call the edges of civilization is that we have a low wastage rate. The lower wastage rate we achieve is even more profitable when you have better supply lines. In fact, it is less risky where you have better supply lines, because if something dies it is faster to get a replacement anyhow, oldman.

    oldman, even if you don’t care about the environmental damage, you should care about the damage to the company’s bottom line.

    An item having two lives, one as a full PC and one as a thin client, does help the budget.

    If you take up some of the things we do, you can lower how much it costs per seat. FOSS is really in my systems to allow me to run computers to death.

    oldman, we have learned lessons you have not had to. The edges of civilization force you to learn what is usable and what is not usable hardware, even how to sort that hardware by how it can be used today, not how it was used when it was made.

    The one problem: you cannot normally outsource running the way we do. Places like Dell want to sell more machines, not have people keep them in use for 20 years.

    The last 10 years of a machine’s life really do not need a support contract. At the first issue in that time frame you destroy and replace it. The machines are past the point of repair.

    Mistakes have happened where places have tried running Linux on 10+ year old hardware and stupidly paid people to repair failing 10+ year old machines.

    Run the model I am running and you will waste less and make the budget go further. Of course, you don’t have to place thin clients where they don’t fit. Of course, it will normally mean some FOSS in the mix.

    In some cases we have used it as PR as well: just before the machines get too old to travel, set them up at a school or the like, with a take-back-the-dead plan.

    Yes, I have had the issue of having too many thin clients for the number of locations that need them.

    The max life of the machines is ~20 years. Not every one is going to make it.

    I accept that there is always a percentage of waste that is not avoidable, oldman.

    The stuff that those of us on the edges of civilization, as you call it, do should be part of normal IT operations, OK, maybe at lower rates. Like having a 20-year-old machine somewhere should not be 100 percent strange.

    As long as a computer is doing its job without affecting productivity, it should be left alone. That is something that is normal to me and most likely to Robert.

    I don’t know about you on this, oldman.

    I see a lot of people on the treadmill doing stupid things like replacing all their thin clients every 5 years. That is like, what the hell are you thinking? Wasteful is wasteful.

    Really think about your actions, oldman: are there any that are stupid because you had the wrong idea of how long computers last?

  30. oldman

    “Basically, oldman, you are an environmental vandal; you most likely just never knew it.”

    You’re right, sir. I think green is mostly bushwah in a world where the have-nots insist that the haves go green while they go ahead and pollute even worse than the haves.

    I also have zero respect for those who use green to advance their personal agendas. There are quite a few things that can be done towards conservation before one has to start compromising productivity.

  31. oiaohm

    Clarence Moon
    “By the same token, it would seem that you know nothing of how things are done in the more populated parts of the world whose purchases of IT drive the world’s markets.”
    and oldman
    “But in an environment where supply IS good and one has budget and staffing in place, the case you make is far less compelling. In fact most system architects would consider it a corner case that does not need to be considered.”

    Both of you have missed something about why I am crushing the computers: they are high-grade ore, higher grade than what you will normally mine out of the ground. Even our city operations function the same way, so supply is not the issue.

    The issue you guys are missing is that I am operating with the true lifespan of the hardware. Let me lay out what happens in the wasteful western world. It is totally not good.

    Companies use computers that could be functional for 20 years for only 3 to 5, at the outside 10, years and then dump them on the second-hand market. From there they end up in landfill or are sent to some third-world country to become a blot on the landscape.

    The problem is that a 5-to-10-year-old machine is really not going to take long-distance travel too well; the PCBs are too fragile at that age. Yes, the “we will send our old computers to a third-world country to help them out” really becomes “let’s send them a stack of broken junk that will not work”, which becomes a blot on their landscape.

    It would have been better to crush them, sell the metal from them, and send new “humane readers”; each computer contains enough value in metal to make a few. At least those would work when they got to the third-world country. Less functional, but working.

    I don’t even transport old machines between locations. They are either at the site or they are not. If a site is being closed down, all the old machines past 12 years just get crushed; it is not worth transporting them, since a good percentage will break in transport. When I say not worth transporting, I really do mean 15 km down the road is too far. At 10-12 years, the maximum is about 100 km of transport; at 5-10 years, a maximum of about 200-300 km if you don’t want the risk of failure to increase majorly. Yes, that transport tolerance really does drop very quickly.

    Do you think for one minute a mining company would be doing what I do unless it worked and was profitable?

    Yes, they have the money to replace the machines every 3-5 years, but there is no valid reason to, other than more disruption to operations and cutting into profit without any good reason.

    Wasteful system architects, who are the majority, don’t do the right thing and so are creating environmental damage as well. Yes, you could call those people you are talking about, oldman and Clarence Moon, environmental vandals due to the waste they are generating. Most don’t do proper disposal of the dead machines.

    The simple fact of the matter is most system architects are not trained in what the true lifespan of hardware is and how to treat it the right way so that old hardware is not an issue to operations.

    Also, you see this: offices that move their desks around all the time have higher computer failure rates, and most system architects don’t have an explanation for the difference. The real age of the machine is key. When you get a computer it can already be a few years old, so a machine you think is 8 years old can in fact be over 12 in some of its boards, and so fractures when moved.

    A lot of computer failures are not random but are in fact abuse of the computer for the age of the parts in it.

    In fact, the old hardware gives you more seats to work from for less money. In mining operations, where you have to operate clean, the less waste produced from a site, the simpler it is to manage as well.

    I run as environmentally clean a computer operation as you can. This is also great for security: all hard drives get crushed, so no data is recoverable. Selling computers onto the second-hand market is not possible in most cases if they don’t contain hard drives. Of course, people like me are going to buy nothing from the second-hand market, because we don’t see second-hand computers as worth transporting due to the fact that they are too fragile.

    Yes, running computers for 20 years on sites means that some sites may, over their full operational lifespan, get only 2 to 3 new computers on top of the 20 new ones when the site started. Now, if this were done the normal way, you are looking at 40+ machines that would have had to be shipped in and shipped out. Add the fuel for transport and the cost of production… Yep, one huge environmental footprint avoided by using computers for their proper lifespan and treating them correctly so they have the best chance of a full and long life.

    Clarence Moon, you also need to learn that just because something is done in populated parts by environmental vandals living there does not make it right. It is very expensive in power and resources to make computers; then wasting them by not running them for their full life is straight-up stupid.

    A lot of people who call themselves system architects are idiots, not trained about the materials in the machines they are planning for. Ask normal building architects a basic property of wood or steel and they know it. Ask most computer system architects the basic properties of the PCB that all the devices they are using depend on, and they don’t have a clue. Yes, PCBs get brittle with age at a predictable rate.

    Basically, system architects are mostly trained in how to manage MS software but not in how to manage the hardware they are running on for the good health of that hardware.

    This is one of my big differences. I have been trained far more completely.

    Basically, oldman, you are an environmental vandal; you most likely just never knew it.

    There is worse: where are the computers you use made, oldman, and how far have they been transported just to get them to you? Every extra one you require has a major environmental bill.

    It is a bit like buying a diesel car in Australia instead of one of the new electrics: over the life of the machine, the diesel car was the more environmentally clean selection.

    If your company wants to call itself green, it will have to go down the same kind of path I have. The correct options are very limited.

    There are many paths for those who don’t care about the damage they do and the cost to the bottom line they cause.

  32. Robert Pogson

    Clarence Moon wrote, “Testify as an expert on rural Australia practices, even. Just do not think that such prediction applies anywhere else.”

    The world is made of regions and there is no basis to believe that Bill Gates’ vision of one M$ licence per hard drive applies anywhere. He had to make exclusive deals to accomplish a global monopoly. It will be short-lived, just as Hitler’s or Genghis Khan’s was. One cannot push markets uphill indefinitely.

    Brazil, Russia, India and China have already rejected M$’s monopoly in whole or in part and are 40% of humanity. In my own country, Canada, GNU/Linux is widely used in education in Quebec, British Columbia and Saskatchewan, and in scattered use everywhere. Where I have taught it was mostly welcomed. The federal government is giving FLOSS a serious look.

    What works in Australia may well work in other parts.

  33. Clarence Moon

    Well, Mr. Oiaohm, I will admit to not having much of an idea as to what they do in rural Australia and your odd approach to things may very well be applicable to that region. By the same token, it would seem that you know nothing of how things are done in the more populated parts of the world whose purchases of IT drive the world’s markets.

    With that stipulation, go ahead and present any and all wild theories as to what the future may bring. Testify as an expert on rural Australia practices, even. Just do not think that such prediction applies anywhere else.

  34. Robert Pogson

    Clarence Moon wrote, “Linux is not frozen in a time warp, either, and whatever existed in 2003 has long been superceded by what exists in 2011. Certainly something in Linux 2011 requires hardware changes from what existed in 2003. Who would want it otherwise?”

    Certainly that is true but the same IT framework that worked in 2003 will work now, like Debian GNU/Linux. I can take a release from that time and run it today if I wished. It will run on modern hardware better than on the older hardware. All I should have to change is the kernel and perhaps a few drivers. With that other OS everything has to change, hardware, software, and apps. Even if the libraries become incompatible, I can run the old ones in a chroot or virtual machine.

    Munich could have done its migration much sooner if it had used thin clients as the target rather than as a stop-gap measure. Then fewer systems would need to be replaced. GNU/Linux gives a lot more choices than that other OS and is more flexible. The foolish mistake was to make the system ever more dependent on M$ from the beginning. At the time it seemed like the only choice or the best choice but later on it became obvious that other choices were better and could have been made sooner. GNU/Linux would have met their needs around 1995 or so when their IT system was no doubt much simpler and it would have been much less costly to migrate. I migrated in 2000 and would definitely have been further ahead to have done it sooner. In 2000 GNU/Linux was far superior to Lose ’9x and possibly NT which they were using. Clearly they did not consider the cost of doing a future migration nor the infinite cost of staying on the Wintel treadmill in those early days. Imagine how much more difficult it would be if they were now locked into “7″ and all M$’s cloudy baggage.

    Fortunately much of the world is only at XP and can easily migrate to GNU/Linux using existing or newer hardware.

  35. oldman

    “Clarence Moon, I operate in rural Australia. This is key. I have to be able to build what I require from what I have at hand at times. This might include 20+ year old hardware that still works, and it might be my only option without having the person there wait for the next repair cycle, which might be a few months off.”

    So in the end what you, like Pog, have laid out a case for is how to do IT on the edges of civilization. As I have already observed to Pog in his case, it is hard to argue with what works when one has to get the job done.

    But in an environment where supply IS good and one has budget and staffing in place, the case you make is far less compelling. In fact most system architects would consider it a corner case that does not need to be considered.

    The simple reality is that most institutions have neither the staffing, the culture nor frankly the inclination to pursue your “use it until it drops” philosophy, especially to the extreme that you suggest.

  36. oldman

    “Nonsense.”

    Sorry Pog.

    It is very clear to anyone reading up on Munich from the beginning that the ideological fix for a Linux conversion was in. And the fact that the project continued with its original team essentially in place, even after it became apparent that the implementers were complete incompetents who totally misjudged the reality of the environment they were converting, demonstrates the presence of this fix.

    “So, the migration has already saved them a bundle and it will keep on saving them a bundle. That’s a success by any measure and they have not skipped a beat.”

    How much of a success Munich has been remains to be seen. By taking their IT out of the desktop computing mainstream and putting it into what amounts to a technological ghetto, Munich has now limited its options. How much this limiting will affect them also remains to be seen.

  37. oiaohm

    Clarence Moon, I operate in rural Australia. This is key. I have to be able to build what I require from what I have at hand at times. This might include 20+ year old hardware that still works, and it might be my only option without having the person there wait for the next repair cycle, which might be a few months off.

    Yet I might be doing a high-end server deployment the next day.

    So yes, I might have just done a “high-end Linux server installation, large network” with 20-year-old clients connected in places. Why? The reasons are:
    Number 1: they still work.
    Number 2: they are more cost-effective, since I don’t have to ship items to replace items that are already there. So I could have increased the number of seats, at least until the old machine dies.
    Number 3: the users are used to that machine being there, so they don’t suffer from new-machine fear of breaking it.

    Clarence Moon
    “On the other, you talk of how your equipment is cobbled together from junk box parts and will in your estimation last for decades, working as well as any brand new machine.”
    This is the true fact of the matter. The amount of processing power most office machines require was passed 2000-2004. Thin-client processing power was passed in 1989-1995 so anything made after 1995 is good enough for thin-clients for sure and anything after 2004 is good enough for all general desktop in PC class machines.

    You are also presuming like oldman. Do I repair thin-clients Clarence Moon. This is key. A thin-client plays up its next stop is the crusher to be smashed into powder or warranty if still under get sent for replacement. new/old get the same treatment basically. I don’t repair them not worth time. The Comptuters last 10 years of life is a non repairable run. Hard-drive stripped and network booting normally. Basically run to death show issue destroyed no mercy.

    Yes they do run as good as brand new thin clients in fact can be better since some of those old machines are more powerful than some of the new thin clients. Yes buy new thin clients and truly deploy worse that what had been ripped out of a office needing full PC. Basically what I am doing is correct re-deployment of assets with usable value.

    No I never do this “cobbled together from junk box parts”. The machine either got to 10 year old+ with min repair or it did not. I don’t waste time mixing and matching parts. I don’t use a junk box. Machine is either was working in its last location perfectly or crushed.

    Yes anyone working on 10+ year old machines other than stripping hard-drives is a idiot. Reason the PCB have dried out too much to be handled without cracking so developing defects simply.

    I am expecting up to a 50 percent loss by 10 years due to crusher getting machines that had issues. Brutal is key. You only want the best left. Any machine that has been worked on a lot at 10 years sees the crusher automatically due to damage from the repair process PCB cracks from inserting ram and so on. Yes I only want the ones that have run perfectly for at least 10 years as thin clients.

    Most are highly dependable without harddrive most of the defective have been weeded out by 10 year point. When they hit 20 years old then the next round of defects start turning up. So then they normally all get crushed and disposed of for gold and metals after 10 years as thin clients due to showing defects.

    Basically Clarence Moon you have no clue what the real operational live spans are. If you did you would find my statements not strange at all.

    Mind you any computer going into the crusher is cash from the metals reclaimed.

  38. Robert Pogson

    oldman wrote, “I didn’t know that meaningful Linux applications were around in 1991”

    The GNU tools were around. Many are still in use. Linux used them heavily to get started. oldman, you know GNU was used with UNIX and there were tons of applications that migrated easily to Linux.

    e.g. ftp: The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971, even before TCP and IP existed. It was later replaced by RFC 765 (June 1980) and RFC 959 (October 1985), the current specification.

    The current ftp package in Debian contains this copyright notice:
    “This package was split from netstd by Herbert Xu on
    Fri, 18 Jun 1999 21:41:52 +1000.

    netstd was created by Peter Tobias on
    Wed, 20 Jul 1994 17:23:21 +0200.

    It was downloaded from ftp://ftp.uk.linux.org/pub/linux/Networking/netkit/.

    Copyright:

    Copyright (c) 1985, 1989, 1990 The Regents of the University of California.

    The license can be found in /usr/share/common-licenses/BSD.”

  39. Robert Pogson

    oldman wrote, “Munich happened and continued because of the socialist political orientation of its government. “

    Nonsense. Munich happened because an organization kept growing its use of IT, slapping on one solution after another offered by OEMs, ISVs and M$. It was only when they reflected on where they had come from and where they were going that change happened. The mistake they made was installing non-free software in the first place. They helped the jailer lock themselves in, and then they organized a jail break.

    The migration has not cost more than anticipated. They have changed the migration to do more. Over the years they have added 1000 PCs to their system and now they are rationalizing the structure of IT more generally than just changing the OS and apps. This is the crux of the matter. It costs a lot of money to stay on the Wintel treadmill but it’s a one-time expenditure to get off. Every organization that I have heard of spends less on IT after migrating to GNU/Linux. How much have they spent on M$’s licences in the last few years? Much less. How much will they spend in the future? Much less. That’s a lot of “less” compared to the migration costs. Certainly I could have migrated them faster but their system would have broken in the process. The most work went into keeping the transition smooth. That was their choice and it comes with a greater cost but it’s certainly less than migrating to XP plus migrating to “7″, two cycles on the Wintel treadmill. The numbers compared were between the XP step and GNU/Linux. Migrating to GNU/Linux did not cost twice as much as migrating to XP, just a little more. So, the migration has already saved them a bundle and it will keep on saving them a bundle. That’s a success by any measure and they have not skipped a beat.

  40. Robert Pogson

    The lifetime of a PC depends on lots of factors. Modern electronics is mostly silicon-based, essentially a mineral in intricate forms, and it fails only if overheated or cracked. The connected components (capacitors, PSU, fans, hard drives, etc.) are usually the limiting factor. Fans and hard drives can easily be changed as part of normal maintenance or refurbishment.

    The performance of any PC made in the last decade is quite likely good enough for use as a thin client. Just compare it with a modern low-end thin client on RAM, clock speed and motherboard/network bandwidth. oiaohm is correct that if 1024×768, or whatever resolution is in use, is adequate, there will be some useful application for the hardware.

    I have used PCs in schools as thick clients that had woeful memory bandwidth. They sucked, but they were still useful as thin clients. A modern CPU that idles when used in a thick client will certainly be fine in a thin client. All a thin client has to do is show the pix and receive the clicks. Where a thin client has heavier loads, like encrypting the streams, running local apps or showing video, performance will be weaker, but of course there are tons of situations where that is no problem. For instance, screens that mostly display text and static images were available from 386 machines and certainly could be handled by a thin client with a P4-ish CPU. The CPU is hardly the limiting factor; the display is. 1024×768 came into popularity in the early 1990s and is only now beginning to fade. That’s nearly two decades.

  41. Robert Pogson

    @Clarence Moon

    I cannot speak for oiaohm, but I can tell you my career in IT spans more than four decades and used all kinds of IT: analogue and TTL circuits lashed up ad hoc; micro, mini and mainframe computers; paper and magnetic tape; and current-loop, transmission-line and wireless networks. The only consistent feature of all that experience is that we did the best we could with the resources available. The same goes for my recent experience with modern PCs and schools. GNU/Linux is much more flexible for the purpose because of the EULA, M$’s dirty tricks, M$’s bloat/slow-downs/re-re-reboots, the price and the wonderful characteristics of FLOSS.

  42. oiaohm

    oldman
    “You can make all the excuses you want, but you can’t get around the fact that the Munich implementation was a mess that has taken way too long and which HAS cost more than anticipated.”
    Stop telling lies, oldman. The cost of the Munich migration has not exceeded the first allotted budget at all.

    Yes, it is over time, but it is under budget. The right money was allocated to do the conversion from Windows to Linux. The right amount of time was allocated too, but how that time had to run was misjudged. The money to pay for the number of hours required was allocated correctly from the start.

    You can fairly complain that the overrun in time was bad planning, or an error in planning, which it was.

    Lack of document migration blocks the migration to Linux from proceeding. This is why, in 2010, when the document migration had completed, the OS migration sped up.

    The speed the Linux migration is going at now is the speed it was meant to start going at in 2006-2007, to complete in 2009-2010. That would have worked if the real-world fact that the documents had to be migrated first had not come into the problem. So the OS migration was blocked by the lack of document migration.

    Eight years to spend a budget that was allocated for five at most. Nothing has reached its maximum expected limit.

    Over time and under budget always means something that was in theory possible to do side by side turned out not to be in the real world. So the budget to do the work is held until the work can be done, and the money to do it still exists.

    The important lesson of Munich and others: don’t attempt a Linux migration at the same time as migrating documents. That is a sure way either to screw everything up or to run badly over time waiting for the documents to be ready so you can migrate the OS.

    All trail-blazer cases have lessons.

    Really, oldman, you should be skilled enough at reading project reports to have spotted the only way the money spent could add up. In 2010 only half the budget had been spent when they ran out of time to spend it. The budget newly issued is less than half the original. So it is under budget. People misreading that are saying it is over, oldman.

  43. Clarence Moon

    Mr. Oiaohm, your stories suffer from a lack of consistency. On the one hand you prate of your vast experience and deep involvement with high end Linux server installations, large networks, and even insider VAR relationships with Microsoft. On the other, you talk of how your equipment is cobbled together from junk box parts and will in your estimation last for decades, working as well as any brand new machine.

    The very fact that you do not seem to sense the gross inconsistency of the two is evidence that you have totally fabricated the former. As the Oldman has pointed out, such time frames do not even apply to the period that Linux has been in existence and capable of performing such operations, so one can only wonder about the latter as well.

  44. oiaohm

    Of course your usual presumption error has sneaked up on you, oldman. You have presumed I was referring to machines starting out with Linux. Nowhere do I say they start with Linux. So why do I need to change it? My statement is correct.

    In the Linux world the computers have a 20+ year operational life; even if some of that time is spent running Windows or some other OS, it is still a 20-year operational life. And a 20-year operational life means 20 years of usage out of the computers.

    Sorry, oldman, you need to work on your reading: read what is written and don’t add presumptions.

    Note that my current machine is 7 years old, and for desktop usage it is still good for another 3 to 6 years before having another 10 as a thin client. That is a maximum of 23 years I have on this machine, if something doesn’t blow up.

    20 years is a low-ball figure. With Linux environments to keep them going, the reason for a machine to end its usage is hardware death in most cases. A small percentage outlive Linux driver support, like i386 computers. Hard drives are not even an issue with network boot.

    Basically, Linux-world tech allows you to get close to the maximum life out of the hardware you acquire, oldman. That is a factor people overlook. There are huge cost savings in not having to throw away machines before they die or start acting defective.

  45. oldman

    When push comes to shove, Munich happened and continued because of the socialist political orientation of its government. You can make all the excuses you want, but you can’t get around the fact that the Munich implementation was a mess that has taken way too long and which HAS cost more than anticipated. Perhaps those entities who are as ideologically dedicated to the notion of converting to FOSS on Linux as the Munich government is will learn from their mistakes and proceed forward. But any institution that doesn’t have an ideological axe to grind will look long and hard at what has happened there, and most likely look elsewhere.

  46. oldman

    “Really, Clarence Moon, who would not want 20 years of usage out of their computers? Linux-world computers last as long as most cars.”

    Really? I didn’t know that meaningful Linux applications were around in 1991.

    Would you like to modify that statement?

  47. oiaohm

    Clarence Moon
    “Certainly something in Linux 2011 requires hardware changes from what existed in 2003. Who would want it otherwise?”
    This is in fact false. You have just applied a Windows cycle to Linux. The Linux hardware replacement cycle is very different.

    I am sitting right now using a computer built from parts made in 2002-2004. The main reason I am looking at replacing it is the price of new RAM; it is now at the point where it is almost as cheap to buy a new computer as to buy the RAM. It still runs Linux perfectly. In fact the KDE 4.6 desktop is running on it at a perfect pace, and that is not some kind of light desktop. It has an old Nvidia video card from 2004, a GeForce 6800; that is in fact the newest part, apart from the hard drives, which are still quite old. It runs current-day Debian and Ubuntu without issues, and I could run a stack of different Linux distributions.

    Of course, if you have chosen a window manager that does not really use OpenGL, you can go older than the GeForce 6 series.

    My video card only does OpenGL 2.1, and there are a lot of older cards that do that. It is not something super powerful by today’s standards, but it is decent enough.

    So my machine is still perfectly decent as a business workstation: a 7-year-old machine, counting from its youngest part, that I cannot replace with the file server. It will still be decent for another 3-6 years at least, so a 13+ year life cycle as a decent business workstation running the software directly. After that it could spend another 10+ years as a thin client. Basically, run it until it explodes; it is going to be taken out by part failure or power savings, not by the end of its functional life.

    Written-off support starts at the 486 in most Linux distributions. First-generation Pentiums are only now moving up to be written off from general distribution support, sometime in 2014 maybe.

    Yes, Pentium computers from 1993 can be turned into thin clients running current-day Debian.

    Debian and Slackware in fact still support installing on a 486, though not on a 386. http://www.debian.org/releases/stable/i386/ch02s01.html.en#id583669

    It would be possible to find a computer built in 1989 that will run the most current Debian in existence. OK, it is only good enough to be a thin client, but it is still usable for something. 20-year-old hardware is still usable in a Linux network as long as it is placed in a role compatible with what it can still do, and yes, running the latest software.

    Linux hardware support life is about 10 to 20 years. This is why Linux is more environmentally friendly, compared to the 3-5 years the market generally tries to push.

    So there is no reason Munich could not be running the same hardware they had NT4 on before the Linux migration, for the simple reason that the hardware is not outside Linux support and so can be used for something. On Linux scales that hardware still had over 10 years on the clock before it had to be disposed of, and only 8 years have passed.

    Longer hardware rotation time frames apply while keeping the OS current. The RAM and CPU requirements of Linux have not changed much over 10 years.

    The truly shocking thing about Linux is that the most common reason for hardware having to be replaced is that it died, not that it was no longer supported by the OS.

    10-to-15-year hardware rotations are possible with Linux without major problems, other than the fact that past a certain point you will not be able to get spares at a worthwhile price. A replace-on-failure model can be used.

    Re-purchasing slows down with Linux.

    There would be no requirement to dispose of the old machines, just to move them into lighter processing roles like thin clients or a web interface somewhere. The 12,000 starting machines at Munich could still be in full use today, and they have added 3,000 new ones in that time frame. This would be your general expected Linux rotation rate: every 10 years you aim to replace half, basically. That normally works out about right for the number of machines that have died.

    A complete replacement cycle of 20 years: double the maximum recommended MS cycle.

    Robert Pogson is 100 percent right about the foolish spending companies do with Windows, not getting the full life span out of their hardware.

    Really, Clarence Moon, who would not want 20 years of usage out of their computers? Linux-world computers last as long as most cars.

    So yes, Munich is ahead in many ways. A slower hardware replacement rate is another cost saving.
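
    That rotation arithmetic is easy to sanity-check. A minimal sketch, taking the figures above as given (roughly 50 percent of a cohort lost per decade, a fleet on the order of Munich's 12,000 machines, everything retired by 20 years); these are the commenter's illustrative numbers, not anything from a Munich report:

        # Minimal sketch of the "replace half every 10 years" fleet model.
        # Assumed inputs (from the comment above, not from Munich's documents):
        #   - roughly 50% of a cohort is lost to failure/the crusher per decade
        #   - anything that reaches 20 years is retired regardless
        fleet = 12000                # machines in service at the start
        attrition_per_decade = 0.5

        replaced_by_year_10 = int(fleet * attrition_per_decade)   # ~6000 new machines needed
        demoted_to_thin_clients = fleet - replaced_by_year_10     # ~6000 survivors re-deployed
        retired_by_year_20 = demoted_to_thin_clients              # the rest go in the second decade

        print(replaced_by_year_10, demoted_to_thin_clients, retired_by_year_20)

    On those assumptions the fleet turns over completely in about 20 years, with roughly half the machines replaced per decade, which is where the "double the MS cycle" figure comes from.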

  48. Clarence Moon

    Why would they be changing “foolishly”, Mr. Pogson? It seems to me that the Windows OS comes with the hardware and continues to work the same as it did on the first day it was used. Time marches on and tastes change and technology improves, of course, and to avail oneself of the changes requires a re-purchase of whatever it is that you want changed.

    Linux is not frozen in a time warp, either, and whatever existed in 2003 has long been superseded by what exists in 2011. Certainly something in Linux 2011 requires hardware changes from what existed in 2003. Who would want it otherwise?

  49. Robert Pogson

    In 2003-2004 if Munich had gone to XP/2003 they would now be going to “7″/2008 and spending money foolishly twice. They are far better off having spent some money foolishly once and got a system with which they can live.

  50. oiaohm

    Clarence Moon, really, Munich has some brains.

    The “localized distribution” is not really localized.

    Rule 1: don’t make your own distribution from nothing. It will kill you.

    There was another migration attempt, started at the same time as Munich, that made the stupid mistake of trying to build its own distribution from nothing; of course they blew the budget in the first year. To run a distribution from nothing you are talking a few billion a year in developer time required, at a minimum. That is not so bad with Debian and other large distributions, where the bill is split between everyone.

    What Munich has been using are standard Ubuntu and Debian distributions, customised to use their own internal mirror server and with their own preferred settings deployed out of the box.

    http://linuxcoe.sourceforge.net/ Basically, this tool is what you could use to make a Munich-style deployment.

    If you are supporting Windows images there is in fact more work than supporting the solution Munich uses, because local mirror tools already exist for Debian, Ubuntu and the rest to make these; it is how the distribution mirror servers themselves work. The mirroring system is solid and well tested.

    If you look at linuxcoe, you can place a template of your customisation in it and stamp out a new disk with all the latest updates embedded at any time, with very little effort beyond asking it to produce a new disk. Fast, simple slipstreaming of disks, something Windows is really bad at; MS does not really have a good slipstreaming system. It is a cost saving, because newly deployed machines don’t spend ages downloading updates; they are current because the install disk is under a week old.

    An ageing install disk costing time after every reinstall is commonly forgotten about.

    When it really comes down to maintenance costs, taking care of a Munich-style distribution is easy, far less work than the Windows solutions.

    Clarence Moon, there are a lot of distributions out there that are just customisations of other distributions. The maintenance staff they require, since they are just a customisation of settings plus a private mirror, is one person, who can even be working for someone else at least part time.
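
    For a concrete illustration of that “internal mirror plus custom settings” idea (a sketch only, not Munich's actual configuration; the mirror hostname and release below are assumptions): the per-client piece can be as small as an APT sources list that points at the in-house mirror, which a template tool such as linuxcoe then bakes into every install image it stamps out.

        # Minimal sketch: generate an APT sources.list pointing clients at an
        # internal Debian mirror instead of the public ones. The hostname and
        # release are illustrative assumptions, not Munich's real values.
        MIRROR = "http://mirror.intern.example/debian"
        RELEASE = "squeeze"   # the Debian stable release of the era

        lines = [
            "deb {} {} main contrib".format(MIRROR, RELEASE),
            "deb {} {}-updates main contrib".format(MIRROR, RELEASE),
        ]

        with open("sources.list", "w") as f:
            f.write("\n".join(lines) + "\n")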

    As you say, the licence money is worth 3 to 4 people; that is what you are paying for in the MS solution. For the same money you can employ a web developer or two and maybe an application developer.

    This is why MS is just so far out on costs. The distribution maintainer could instead be working as a coder, solving issues you are actually suffering from.

    Even better, the distribution maintainer you have on staff can work with upstream directly, so his packaging work is reviewed by independent third parties.

    12,000 is the target they have to hit: 80 percent conversion. They don’t have to stop at 80 percent, so the question is how far past that they go.

    Administrative staff work normally ends up being less with Linux too. Again, this is partly Linux itself: no activation crud, and most drivers are built in and compatible with each other. That means with Linux you can straight up duplicate drives, or even just unplug a drive and plug in a spare to fix a software failure on a machine.

    Windows is a lovely beast. Activation can fail; not fun. Drivers are evil: install two drivers for two different devices and the drivers can fight, ruining your day. The reason this happens is the lack of central driver development.

    Yes, you hear “we hate Linux central driver development”, but remember that it cures a problem.

    You also missed anti-virus in your costs, Clarence Moon. As a government or a company you cannot use free anti-virus software on Windows that does real-time scanning. On Linux, yes, there is free anti-virus that will do real-time scanning.

    Anti-virus is basically another 50 dollars per seat per year. Yes, the anti-virus is more expensive than the OS when you get into volume licensing, so you really do want to stop paying for that as well. That takes you from 3-4 staff to 6-8 staff on 12,000 machines.

    What we can take away from this: if you have more than about 2,000 machines, Linux is most likely a viable option purely on licensing cost savings, and so should be investigated.

    You can do a hell of a lot when you have 6-8 more staff.
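
    Those staffing numbers are easy to check against the per-seat figures quoted in this thread. A quick back-of-envelope sketch, using Clarence Moon's $40-$50 per seat per year for Windows plus MS Office and the roughly $50 per seat per year for anti-virus above; the fully loaded staff cost is an assumption implied by “3-4 people per $500,000”, not a figure either commenter gave:

        # Back-of-envelope check of the "3-4 staff becomes 6-8 staff" claim.
        # Inputs are the figures quoted in this comment thread, not audited numbers.
        seats = 12000
        windows_office_per_seat = 45      # midpoint of the $40-$50 per seat per year figure
        antivirus_per_seat = 50           # rough per seat per year figure used above

        license_savings = seats * windows_office_per_seat     # ~$540,000 per year
        antivirus_savings = seats * antivirus_per_seat        # ~$600,000 per year
        total_savings = license_savings + antivirus_savings   # ~$1,140,000 per year

        staff_cost = 150000               # assumed fully loaded cost per person, for illustration
        print(total_savings, round(total_savings / staff_cost, 1))   # ~7.6 staff equivalents

    On those inputs the combined licence and anti-virus savings come out to roughly six to eight salaries, which is the range claimed above; change the per-seat or per-person assumptions and the headcount scales with them.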

  51. Kozmcrae

    “Are the bureaucrats less bureaucratic?”

    Was that the primary goal of the changeover? Or even secondary for that matter?

    Take my advice Ivan, just pretend that Munich never happened.

  52. Clarence Moon

    If all the accounts are to be believed, then Munich paid more for their in-house Linux solution than they would have had to pay for the Microsoft solution that was ultimately offered by Ballmer after his legendary effort to save the account in 2003.

    What is the upside to having paid more and taken so much time? It appears to be a belief that the future expenses of upgrades will be avoided and will eventually put Munich in the black on a total-cost-over-time basis. It is possible to make an analysis of that potential, I think, based on the 12,000-workstation figure that seems to be the final count for Munich’s conversions a year or two from now.

    Site volume license contracts for PCs at this sort of level for Windows and MS Office seem to end up in the $40 to $50 range per seat per year, so the total license savings for Munich would be on the order of $500,000 per year.

    It is more difficult to calculate the ongoing costs of Linux support for a localized distribution such as the one used in Munich. If it were the United States, the licensing money theoretically saved, $500,000, would justify a staff of 3-4 people dedicated to whatever tasks are needed to keep the Linux version in the same state of support as Windows. That is over and above the day-to-day administrative staff, which can be thought of as equal for Linux or Windows, though some argument can be made that one is less costly than the other.

  53. oiaohm

    Ivan, again, surveys of the people in that area say that Munich is providing better services than a lot of the other governments around it.

    Munich’s self-service web offerings in that area are some of the best, so people now have more options for applying for things while dealing with as few bureaucrats as possible.

    There is also a lower rate of applications being rejected because a bureaucrat could not open the document that was sent.

    So the job of providing services to citizens has improved because of the project.

    Sorry, Ivan, there is not one metric about the Munich migration that says failure. A lot of them say others should do the same, and soon.

    Of course, setting up to spend less on IT will leave more to spend on providing government services. So if you want more services from your government, you want MS out.

    Ivan, you are getting desperate trying to find something.

  54. Ivan

    “the simple point is they are pulling the solution off without any downsides.”

    Unless you are paying the taxes. It says a lot about the state of Munich’s government when money is wasted on migrating operating systems rather than on doing their damned job of providing services to their citizens.

    Now that they’ve got 9,000 workstations migrated, are the clerks more efficient? Are permit applications handled more quickly? Are the bureaucrats less bureaucratic?

    Of course not.

  55. oiaohm

    Dr Loser, the simple point is they are pulling the solution off without any downsides.

    Lower cost, more uptime, software updated every 12 months. The system is faster to redeploy in case of disaster.

    There are many advantages; one of the shocking ones is happier users. Most people just want the computer to work and don’t care what the OS is, as long as it works.

    They’re doing it. That in itself is an up, Dr Loser; think how many people said they would have to return to Windows to be productive and cost-effective.

    The downs are mostly short term. Software that has to be migrated is a down, and there is a possible ongoing training expense.

    Very minor downs, really. The key thing here is that the cost is lower and the end result works. None of the trial machines had to be reverted, and none of the converted machines have had to be reverted.

    Really, how is it a rip-off, Dr Loser, if the spend is still less than an MS deployment over the same time frame? Remember, to stay current they would have had to do two upgrade cycles in 8 years, and the project has run for 8. People miss that at about the 5-year mark MS wants you to pay again for updates. So even with MS discounts, paying for ongoing access to XP now would put the budget cost over the cost of the whole conversion project.

    By the next cycle in 5 years they are ahead. Even now they are technically in profit.

    Microsoft is basically the rip-off, Dr Loser.

  56. Dr Loser

    OK, Koz: then what are the “downs”?

    Frankly, “they’re doing it” doesn’t count for much of an up, unless the “it” is a mind-altering drug.

    Not that I’m suggesting anything of the sort, of course.

  57. Kozmcrae

    “An interesting metaphor. I fail to see the “ups” in this.”

    They’re doing it.

  58. oiaohm

    Dr Loser, the “Debian might disappear” argument is killed by what Munich has already done.

    Munich has proved you can migrate between GNU/Linux distributions with minimal effort or disruption. If Debian disappears they could migrate to Slackware or anything else.

    So the complete GNU/Linux world would have to disappear, Dr Loser, before they end up in trouble. Stop listening to MS trolls and start reading the Munich documents. They have already migrated from Debian to Ubuntu and back again.

    There is also a reason why it is so simple for them, Dr Loser.

    So, Dr Loser, you are a TM Repository-posting moron. Most people I find posting to TM Repository are in fact MS-paid PR staff; are you, Dr Loser? You complain about me being insulting, yet at times you use TM Repository as a weapon, turning yourself into a complete laughing stock, Dr Loser.

    The Munich Linux Watch author has basically run for it, because it has become very clear that the final report is going to make him look like a complete idiot. Read his first posts: he had the idea that the project would fail completely. He has not updated the milestone page; it shows 3,000 Linux machines in June 2010 and we are over 9,000 now. This matches exactly the speed the plan said the migration would take; only the start of the process was delayed. If he updated his milestone page it would become deadly clear that something which had been delaying the migration has disappeared. What was delaying the migration? MS Office to OpenOffice. Exactly when that part of the project was basically complete, the Linux migration accelerated to the expected rate.

    What is written in the first LiMux plan is that the Linux migration would take 2-3 years to convert the required 80 percent of machines once it moved into active deployment of Linux. Yep, it is exactly on timetable to do that, even though the 2-3 years was for only 11,000 machines to migrate, not 12,000. When was the change to active migration? The middle of 2010. The Linux migration stage is running like clockwork.

    It was written that it would take 1 year to deal with the forms and documents. Someone badly underestimated the size of that problem for an MS Office to OpenOffice conversion.

    Really, if you read the full plan you will understand the virtualisation and what it is for. Mostly it is not for MS Office, but for custom applications that have been built up over the years.

    “At that point, the remaining proprietary solutions will be moved over to open platform solutions.”
    You cannot read your own translation, Dr Loser. This is not using Wine long term; this is doing away with the solutions that are not open platform. I can tell you that some of those proprietary solutions are running on Wine, because I have done support with one of the LiMux people in the winehq channel on freenode. They are not MS Office, because they have not been asking about that.

    Most Linux custom-distribution build tools for companies are like this one from HP: http://linuxcoe.sourceforge.net/ It supports most distributions. This is where fragmentation is a shield: end users can in fact migrate very quickly between distributions with minimal issues.

    “Lock-in to Debian” is a figment of your imagination, Dr Loser. One of the public documents covers how long it would take to migrate to a non-deb-based distribution: they could do it in one cycle. Someone in the government had already asked them about lock-in to Debian, so they documented how little they are locked in. Heck, the paper is a good read; I wish I had not misplaced the link. It even covers what would happen if they had to migrate to FreeBSD or Solaris. POSIX is mostly POSIX.

    Clarence Moon
    “So they are giving up on migrating the last 3000 workstations and they will remain with Windows?”

    Project LiMux only has an 80 percent conversion requirement, Clarence Moon. So the remaining 3,000 was always intended to be a different project, and anything past 12,000 is better than expected. The reason for 80 percent is that it was argued that 100 percent was impossible. As the project has gone on, 80 percent looks like an underestimate.

    The thinking is that they will hit the project requirements in 2012 with a year to run. I am really interested to see how far they run past the 80 percent, or whether they will be completely stuck-up Germans and start a new project because the objectives have been achieved.

    Clarence Moon, if you know the project requirements, the 3,000 makes perfect sense.

    This statement that Dr Loser says he translated,
    “At that point, the remaining proprietary solutions will be moved over to open platform solutions.”
    says clearly that when they get to the last 20 percent they are not stopping, Clarence Moon. It will be more a question of how long it takes.

  59. Robert Pogson

    I’ve worked in places where every purchase order requisition had to have three quotations and they took the lowest.

    Do you really think that accounting does not look at the life-cycle cost of stuff? They’re paid to do that. They have to do it for taxation as well as budgeting and securities filings/prospectuses. The bean counters know what something’s book value is until disposal and then they hunt for tax credits related to the purchase and disposal. They can even tell you what it’s worth to donate stuff to schools and charities. They pay even more attention when there are hundreds or thousands of the littlest things. I once worked for a place where a memo came around that we should put all our nuts, bolts, transistors, etc. in central storage and write purchase order requisitions any time we needed something, for economy. We ignored that directive because we needed parts 24×7 to run a multi-million dollar system but the bean-counters did care enough to write the memo.

    As I wrote, there are other costs. GNU/Linux wins on all of them, including performance.

  60. Dr Loser

    @Robert:

    Oh well, let’s get down to the numbers.

    Do you really think that any sane accounting department would look forward forty years and decide that it’s imperative to buy Debian over Windows because the first one costs them $60 per year and the second one costs them $125?

    Seriously?

    Good Lord.

    That’s even without considering TCO, lock-in to Debian, the fact that Debian might disappear, etc etc. Now, I know you would like M$ to disappear. And I know you believe it will. But even given that, no sane accountant would do more than laugh uproariously at your silly little assumptions here.

  61. Dr Loser

    @Koz McRae:

    “Munich has been a roller coaster for sure.”

    An interesting metaphor. I fail to see the “ups” in this.

    It’s actually more like a rip-off of an Otis lift with the safety catch removed, plummeting from fifteen storeys.

  62. Dr Loser

    For anybody who wants to look into the eight-year history of fail that is the LiMux project, I recommend Gnu/Debian^W^W this site.

    Not necessarily as the Gospel truth. Just as a counter-balance, which isn’t actually written by the pointy-headed techs who depend on the thing for a job.

  63. Dr Loser

    Not a bad translation, Robert; probably better than mine. Feel free to chuckle at it if you want.

    What you didn’t mention is that the German is execrable officialese. (It seems to have affected your English, too, which is typically far better than “Not necessarily will they keep that other OS but for the time being.”)

    I’m generally suspicious of people who cannot express themselves clearly, whether it be the LiMux mob or Oiaohm. Or, I suppose, me when drunk.

    Anyway, we had a good old laugh at this nonsense over at TMR five days ago. Nice to see you’ve finally caught up.

    (Did you spot the magic word “virtualization,” btw? I wonder what applications they’re going to “virtualize?” Surely not M$ Office, on a M$ Server?)

    It’s doomed, Robert. Eight years and this is the best they can boast about. It’s doomed: get over it.

    Heck, play Siegfried’s Funeral March in the background if it will make you feel any better.

  64. Robert Pogson

    Not necessarily will they keep that other OS but for the time being. There will be no need at all for it if/when they find/create apps that will do what they want on GNU/Linux. It’s the same PC running that other OS or GNU/Linux. It can do the logical operations. It just needs the list of instructions, a programme. I expect that after they have completed the migration they will find a bit more energy to do that.

  65. Kozmcrae

    “So they are giving up on migrating the last 3000 workstations and they will remain with Windows?”

    There’s always hope, Clarence.

  66. Clarence Moon

    So they are giving up on migrating the last 3000 workstations and they will remain with Windows?

  67. Kozmcrae

    “Corrections are welcome.”

    I’m sure Hanson would love to supply some “corrections”.

    Munich has been a roller coaster for sure. But then it was expected to be. The Cult of Microsoft latched onto every apparent setback like it was the last gasp of a drowning man. It was not to be. Munich moved on to the next level with determination.

    The pain of switching over was not due to problems with GNU/Linux but to the talons of Microsoft gripping the bureaucratic flesh of Munich. No doubt the Cult of Microsoft’s alternate reality will continue to find problems with the Munich revolution until well after it is over.
