Munich, Revisited

“The consultants report no problems or criticism with the use of open source on servers, for development and for enterprise solutions. Here, the situation is very comparable to what is common in many other public administrations and in the private sector, they note.
 
Commenting on the interim report, Florian Roth, leader of the city’s Green Party wrote on his Facebook page that the report confirms that the use of open source is not the issue here.”
 
See Munich publishes interim report on IT performance
News of the death of GNU/Linux in Munich’s local government is exaggerated, apparently. A thorough review of the global IT system finds nothing damning to report. What it does find is that Munich is still using too many applications, even after pruning them back severely in the migration to GNU/Linux.

Perhaps this will finally cause the nattering nabobs of negativism to shut up. GNU/Linux does work for people.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

64 Responses to Munich, Revisited

  1. oiaohm says:

    Dr Loser, here is something fun: starting a migration from Windows to Red Hat is likely to have less success than Windows to Debian or Ubuntu, simply due to the lower number of packaged applications.

    http://annex.debconf.org/debconf-share/debconf15/slides/341-linux-in-the-city-of-munich-aka-limux.pdf
    Also there is a key little statement in there:
    –22 independent IT departments–
    This creates nightmares. These 22 independent departments made running Windows there a mess, and they make running Linux a mess as well. Nothing was done to fix this.

    Dr Loser, have you ever bothered reading what the Munich guys think are good ideas? 300+ patches to LibreOffice not mainlined; custom patching of Firefox and Thunderbird.
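    Carrying 300+ local patches is exactly the kind of cost a rebasable patch branch makes visible. A minimal sketch of that workflow, using a throwaway toy repository (all repo, file and tag names here are invented for illustration, not Munich’s actual tree):

```shell
# Toy illustration: keep a local patch series on a branch that is rebased onto
# each upstream release tag, instead of letting a fork diverge permanently.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=d@example.invalid
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=d@example.invalid
dir=$(mktemp -d); cd "$dir"
git init -q -b main .
echo "upstream v1" > app.c
git add app.c && git commit -qm "upstream release 1" && git tag v1

git checkout -qb local-patches v1          # local work branches off the release tag
echo "site-specific tweak" > local.c
git add local.c && git commit -qm "local: site-specific tweak"

git checkout -q main                       # upstream moves on
echo "upstream v2" >> app.c
git commit -qam "upstream release 2" && git tag v2

git rebase -q v2 local-patches             # replay the patch queue on the new release
git log --oneline -n 2                     # local patch now sits on top of v2
```

    A rebase conflict at that last step is a signal: that patch is the one worth submitting upstream, so it stops being a recurring maintenance cost.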

    Remember to compare the release cycle to
    http://www.juntadeandalucia.es/educacion/cga/mediawiki/index.php/Guadalinex_Edu
    Within a year of an upstream release, Guadalinex follows suit.

    Next: all the successful deployments never skip forward 4 LTS versions; they max out at 2 LTS versions, because the changes between versions result in a lot of work.

    Another clue to the problem is OpenOffice 3.2.1 being mentioned as an API break causing them trouble. Its release date is 10 June 2010; its official replacement, OpenOffice 3.3, came on 26 January 2011.

    The big thing about this OpenOffice fault at Munich is that it is not just a Linux mistake. It means the Windows clients had to keep an out-of-date OpenOffice around, with its security faults, and staff were not getting access to the best version of LibreOffice/OpenOffice. Q4 2013 to find out about something that broke at the start of 2011: that means no quality control is being performed against the development branch of LibreOffice/OpenOffice, where faults can be detected and reported early, while the upstream developers are still working in that section of the code base. So now Munich gets left with faults to fix because they are detecting them too late.

    Rolling something like WollMux means you should have a QA process in place.

    A lot of what is sending Munich’s internal distribution south is stupidity. The KDE patches for Firefox and Thunderbird are about making a unified look. They don’t have a unified look on Windows, so giving them one on Linux is adding patches just to cause yourself trouble. If a person in a company suggested making a custom version of Firefox/Thunderbird to integrate more neatly into Windows, they would be shot on the spot for stupidity, and that is what the Munich IT guys have done here on Linux in an attempt to reduce resistance.

    Backported Kernel, DRM, Mesa, Xorg, xorg-drivers
    These major alterations start having to be done once your distribution gets too old.

    Dist-upgrade 5.0 to 14.04 to get security updates until 19.04
    If this were not serious it should make you laugh your guts up for being so stupid. Remember, this is a 2015 presentation: 14.04 LTS was released a year earlier, yet they are talking about upgrading to a release that is already a year old. If QA takes a year, it will be two years old before end users get to use it.

    While you are still in the testing stages, if your core has got past two years old you scrap it and move to the newer LTS, when your core is Ubuntu. So when they released V5 in December 2014, it should have been based on 14.04.1 LTS if Munich were maintaining the internal release correctly, but it is based on 12.04.x.
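    On a stock Ubuntu base, that LTS-to-LTS discipline is a one-line policy for the standard upgrader. A sketch, edited against a local copy of the file so it runs anywhere (the real path is Ubuntu’s /etc/update-manager/release-upgrades):

```shell
# Sketch: restrict Ubuntu's release upgrader to LTS-to-LTS jumps, so a
# 12.04-based image is offered 14.04 and never an interim release.
# A local copy is edited here; on a real client this is the file in /etc.
printf 'Prompt=normal\n' > release-upgrades
sed -i 's/^Prompt=.*/Prompt=lts/' release-upgrades
grep '^Prompt=' release-upgrades        # → Prompt=lts
# With Prompt=lts in place, "sudo do-release-upgrade" offers the next LTS
# only after its first point release (e.g. 14.04.1), which is why an internal
# release schedule has to line up with that date.
```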

    -Ubuntu release LTS 12.04 (Q2 2012)
    -Port initial SW distribution server and minimal client (Q4 2012)
    These two should also make you smell trouble.
    The initial SW distribution and minimal client should be done in Q2–Q3 of an even year with Ubuntu, so that found faults can be reported and fixed before the first point release. Not starting at the correct time puts the complete release cycle out by over six months right at the start. The internal betas should land before the end of the LTS year.

    -LiMux development started (Q1 2013)
    What the heck? That was meant to start in Q2–Q3 2012.

    Dr Loser, the reality is that a lot of the LiMux issue is not making LiMux but the timetable being used. You have to remember that by the time you get to 13.04, most of the developers who were working on 12.04 are focused on prepping 14.04. That behaviour at Ubuntu does not change by paying them. So Munich starts making its distribution at the point the upstream developers are disappearing, which is exactly when you don’t want to be doing it.

    They note the Ubuntu Hardware Enablement Stack:
    –Supplied every ½ year until next LTS (2 years)
    Note “until next LTS”. If you want the latest hardware support without massive work, you had better complete on time. There are three enablement stacks per LTS release; if you are right on time you will get two of them, and waiting for the first point release has cost you one in most cases.

    So when Ubuntu releases its next LTS, the prior LTS no longer gets new hardware support. That is why the current v5 release at Munich does not support new hardware.
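    For what it’s worth, opting into an enablement stack on a 14.04-era client was a package-selection exercise. An administrative sketch, not run here; the lts-vivid names follow Ubuntu’s historical lts-&lt;codename&gt; convention for the 14.04 series and should be checked against current documentation:

```shell
# Sketch: pull a Hardware Enablement (HWE) kernel and X stack on 14.04 LTS
# so newer machines keep working mid-cycle. Requires root and network access;
# package names are the historical 14.04-series ones.
sudo apt-get update
sudo apt-get install --install-recommends \
    linux-generic-lts-vivid \
    xserver-xorg-lts-vivid \
    libgl1-mesa-glx-lts-vivid
# Miss the enablement window and these rolling stacks stop arriving, which is
# the "no new hardware support" corner a late internal release ends up in.
```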

    Even choosing Red Hat, there is a timetable you have to stick to when making an internal distribution to get the best support level from Red Hat.

    Detect a fault early and you will have active developers at other companies and governments to help you out; detect it late and you will be on your own.

    I guess, Dr Loser, you would not have spotted all those faults. All those failures to stay on timetable hurt whether or not you pay for a distribution, so Munich’s problem is not that they are rolling a distribution; it is that they don’t know how to do it properly.

  2. oiaohm says:

    Dr Loser, you have told me that you don’t want the Malaysia cites.

    Have you yet provided a cite that compares your random choice of some bit of Malaysia against Munich? You have not.
    I don’t need a cite that compares.

    The cite you are looking for is how Malaysia’s own internal distribution is made. It is another Debian/Ubuntu, and another one that has always been on time.

    I’m pretty sure that either Red Hat or Canonical could have done better. And, in the spirit of FLOSS, they would have “shared” and “distributed.” Ironically they would have been compelled to do so by commercial imperatives.
    Funny, you want to make me laugh. Neither Red Hat nor Canonical makes up custom distributions for clients. They provide services that train staff how to do it properly; if you don’t pay for the training, Canonical and Red Hat will let you stuff it up the same ways Munich did.

    Munich’s distribution mess is an example of why you should not let untrained staff roll your own internal distribution. When rolling an internal distribution there is a stack of things that look like tempting shortcuts but are lethal mistakes. Paying Red Hat or Canonical for distribution support does not prevent your staff from making any of that list of lethal mistakes. Having your staff trained does, because the training on rolling custom distributions provided by many parties includes a list of examples of what not to do.

    The reasoning behind supporting a small coterie of self-interested neck-beards over the ten year development of a sub-standard localized “distro” of no interest to anybody outside said small coterie still escapes me.
    Dr Loser, you cannot read, can you? Munich’s internal distribution was not sub-standard the complete time. Due to lack of training, Munich’s staff are having to learn everything the hard way. Compared to other successful migrations, Munich has been up and down in quality massively, where all the other successful migrations using the same basic internal-distribution design have constant quality. Munich’s recent internal release has lifted back up in quality again. But since Munich has not had the formal training, you do have to question whether they have stuffed up enough times yet to learn everything they should not do.

    All Munich has right now is an expensive dead end.
    Even with all the stuff-ups Munich has done, the cost has still been less than using Microsoft. That is why, when the new government took over, migrating back away from Linux was not possible.

    I’d recommend Red Hat.
    In German government you cannot use Red Hat for anything other than training; you cannot have Red Hat consultants come on premises. SUSE, being a German company, would be possible.

    The next problem is that their current system is dpkg-based and you are suggesting RPM-based, Dr Loser. The final problem is packages: Debian/Ubuntu provides more applications packaged for install than any RPM-based distribution. So an attempt to take a desktop system from Debian/Ubuntu to Red Hat ends up triggering the “I am missing X application” problem all over again. Taking an operational setup from Red Hat to Debian/Ubuntu is mostly OK, because in most cases the applications will exist on both sides; taking one from Debian/Ubuntu to Red Hat adds 3–4 years to the migration time frame due to the infighting it will cause. Yes, migrating from Debian/Ubuntu to Red Hat is almost as disruptive as migrating from Windows to Linux.

    If you want to suggest a commercial outfit to fix Munich’s problem, one that can legally supply German governments and will take over rolling the internal distribution until staff are trained:
    https://www.collabora.com/open-first/open-source-projects/debian.html
    Yes, you should suggest Collabora. That is training, on-site support, the complete nine yards. Collabora just classes Ubuntu as a sub-form of Debian, so they would be able to walk in and fix up what Munich has already. So your dead-end argument is wrong: Munich’s mess is not past the point of no return.

    Collabora is also who the French police used early on, when their internal distribution was having issues.

    The problem is that sending in Collabora would not fix all of Munich’s problems. All the rapid migrations have something in common: a true neck-beard action of not truly caring and going ahead anyhow. So what is the critical action that was never performed at Munich?

    The critical action is to choose a department that, as an organization, you can afford to have fail. Then give three to six months notice of migration, migrate their complete computer systems, and never migrate back. Because this never happened, Munich’s IT department has got bogged down in migrating documents and other things: the sub-departments have the idea they can argue back against the migration and push all the work of it onto the IT staff, instead of doing what they can to assist. It is the untrained mistake of being way too friendly. Too friendly equals excessive resistance.

    Remember, attempting to change from dpkg-based to RPM-based is going to run into excessive resistance at Munich, because department heads have the idea they can push back. Basically, because Munich did not have true neck-beards, the critical actions have not been performed, and any attempt to change anything new is going to run into excessive resistance. So whatever you plan now has to keep modifications to a minimum.

    DrLoser, there are ways to get Munich out of its mess, but none of them match anything you are suggesting.

  3. Dr Loser says:

    Me: Is there any point at all to Limux?
    Fifi: This is pure bad form, Dr Loser, attempting to start a new topic in an area that already has a topic.

    I haven’t heard the phrase “bad form” since I was at Oxbridge. How very splendid of you, Fifi.

    Awfully decent, old chap. Topping!

    If I may offer a mild remonstrance?

    The topic of “Munich Revisited” is basically Limux….

    … you pathetic ignorant dissembling little creep.

  4. Dr Loser says:

    Oh, and regarding Munich.

    The reasoning behind supporting a small coterie of self-interested neck-beards over the ten year development of a sub-standard localized “distro” of no interest to anybody outside said small coterie still escapes me.

    Ten years, multiplied by a FLOSS-oriented staff of, say 100 people.

    I’m pretty sure that either Red Hat or Canonical could have done better. And, in the spirit of FLOSS, they would have “shared” and “distributed.” Ironically they would have been compelled to do so by commercial imperatives.

    All Munich has right now is an expensive dead end.

    And you know what? Were I the consultant hired to lead them out of this self-imposed mess, I would not recommend Microsoft.

    I’d recommend Red Hat.

  5. Dr Loser says:

    Providing cites about Malaysia and other places that have been successful, and how they did it compared to what Munich did, shows that.

    Well, obviously, oiaohm.

    Have you yet provided a cite that compares your random choice of some bit of Malaysia against Munich? You have not.

    It is a Friday. This is a tradition.

    Bwahahahahahahahahahahahahahahahahahahahaha!

  6. oiaohm says:

    Dr Loser, here is the Malaysian Government: in 2010 their complete operations were 97% Linux, including desktop.

    I believe the subject is Munich, not Malaysia, Fifi. Stick to it.

    Your WallsO’Gibberish(TM) are not welcome here. In fact, they’re not welcome anywhere.

    Hmm
    Let’s start with the “example after sample.” Apparently “most” make Munich look “incompetent.”

    Your words, not mine.

    Justify them with the relevant cites, please.
    Providing cites about Malaysia and other places that have been successful, and how they did it compared to what Munich did, shows that.

    Yet, Dr Loser, you say I cannot talk about Malaysia.

    Sorry, Dr Loser, you’re an idiot. In one post you have told me I cannot give the type of posts that show Munich is incompetent, then you ask for the cites. Part of showing incompetent action in migration operations is comparing against what the successful have done. Dr Loser, you want to attack Munich without understanding what the successful have done.

    Pop quiz for anybody at all who may have followed this thread.

    Is there any point at all to Limux?
    This is pure bad form, Dr Loser, attempting to start a new topic in an area that already has a topic.

    Dr Loser, remember:
    Insofar as there are successful examples of Linux distros in government organisations, you’re looking at something like the Gendarmerie in France. You are not looking at a Muenchen equivalent. Why not? Because the Gendarmerie did not “roll their own,” Jane, you ignorant slut.
    Totally bad language, and it is not even factually correct.
    http://www.techeye.net/software/french-police-switch-to-desktop-linux
    In that cite, from the start, the Gendarmerie in France use a custom Linux distribution.

    Yes, it is based on one version of Ubuntu, but their distribution contains software that general Ubuntu does not. They do joint development on a lot of different forensics tools that are not in fact in Ubuntu main. If you have watched video presentations by the French police, they have at times built a custom Linux kernel for their internal distribution. So you would say the French police are running an Ubuntu relation, not Ubuntu: something in the same class as Google’s successful Goobuntu.

    Dr Loser, the reality is you are unable to make a factual argument against Munich because you don’t know the facts. You repeatedly and incorrectly state that they have not done stuff X way when that is exactly how they did it.

    Munich has not in fact completely rolled its own. Most of the time Munich took just one version of Debian or Ubuntu and extended it, like the French police did. Munich’s slowest update cycle ever came from when they stupidly attempted to take two versions of Ubuntu and merge them into one. So comparing Munich to the French police is comparing the same core method, except for three years of stupidity on Munich’s part attempting that merge; of course, for those three years they were not able to get a stable release out the door.

    Munich, the French police and Google are all using the same kinds of methods for making their internal distributions, and the French police’s and Google’s internal distributions are always on time and current. Neither Google nor the French police pays an outside party for distribution support.
    http://www.juntadeandalucia.es/educacion/cga/mediawiki/index.php/Guadalinex_Edu
    I know, it is not in English. Guadalinex is also not a pure built-from-nothing distribution; it is a customized distribution that is publicly released, built in the same way Munich’s internal distribution has been, for a government, and yet it works and is always on time.

    So you have two examples, Google and the French police, where you have to take their IT personnel talking in videos and news reports on how it works, and one example, Guadalinex, that is public FOSS. They show that Munich’s internal-distribution maintainers are incompetent, due to their failure to meet timely delivery; it is not that Guadalinex has a larger number of staff behind it.

    The existence of Guadalinex totally disproves the idea that customizing your own distribution has to be problematic, or that you need to pay outside companies for it.

    Ubuntu’s new snappy packages should, in time, prevent amateurs from attempting to mix two different versions of Ubuntu, or of any other distribution, and causing themselves massive amounts of pain.

    This is only one of many self-inflicted pain points the Munich IT guys have created; most have cost them a lot of migration speed.

  7. DrLoser wrote, “Your WallsO’Gibberish(TM) are not welcome here. In fact, they’re not welcome anywhere.”

    I think I’m an authority on what’s welcome here. Despite some of oiaohm’s comments being quite strange (psycho-whatever) or contrary to my own beliefs (firearms etc), he obviously has an eclectic interest in many relevant topics. e.g. I’ve written about Malaysia and FLOSS. MH-370 revealed they have a long way to go in some ways but they did improve their IT with FLOSS in a systematic way when all the naysayers could manage was “It can’t work.” So, I declare oiaohm WELCOME!, officially. It’s very interesting to see how diverse the people of former colonies of England have become.

  8. Dr Loser says:

    Pop quiz for anybody at all who may have followed this thread.

    Is there any point at all to Limux?

  9. Dr Loser says:

    Dr Loser, I can give example after sample of governments doing successful migrations to Linux. Most make Munich look incompetent: most of them did the migration inside Munich’s operational time frame, defeated exactly the same list of issues, and are over 90 percent Linux.

    A fine concept, oiaohm.

    Let’s start with the “example after sample.” Apparently “most” make Munich look “incompetent.”

    Your words, not mine.

    Justify them with the relevant cites, please.

  10. Dr Loser says:

    Dr Loser, here is the Malaysian Government: in 2010 their complete operations were 97% Linux, including desktop.

    I believe the subject is Munich, not Malaysia, Fifi. Stick to it.

    Your WallsO’Gibberish(TM) are not welcome here. In fact, they’re not welcome anywhere.

  11. oiaohm says:

    http://techie-buzz.com/foss/malaysian-government-97-open-source-software.html
    Dr Loser, here is the Malaysian Government: in 2010 their complete operations were 97% Linux, including desktop. The percentage is now approaching 99%, almost 100 percent with no Microsoft product. Please note the Malaysian Government started its migration in 2006.

    Does the Malaysian Government have a custom Linux distribution to suit its needs? Yes, it does. Does it have custom-built kernels for various reasons? Yes, it does.

    How did Malaysia choose to address the issues it had with LibreOffice/OpenOffice? It employed existing people working on the code base to fix up the office suite for it. Due to the Sun and Oracle issues, most of those alterations only became fully public with LibreOffice.

    Dr Loser, I can give example after sample of governments doing successful migrations to Linux. Most make Munich look incompetent: most of them did the migration inside Munich’s operational time frame, defeated exactly the same list of issues, and are over 90 percent Linux.

    Dr Loser, you don’t want to compare Munich against successful migrations like the Malaysian Government’s. Why? Because if you do, your nit-picking list gets quite short:

    1) Under-skilled personnel for the task at hand.
    That is pretty much where you stop, as most of Munich’s issues are under-skilled personnel seeing a problem and attempting a completely wrong solution. Most things Munich attempts are the correct idea, just the wrong method.

    Rolling their own distribution is the correct idea at this point, as there are control advantages to doing so that many other government deployments show. But then the Munich IT guys attempt to roll their own distribution by merging two versions of a distribution into one, a completely nuts idea that most skilled people would avoid. The only skilled example of this that has ever worked in production was done at Google, with a Debian/Red Hat hybrid. Even Google admits it is nuts; they only did it because they could not have downtime, and they would do everything to avoid ever doing it again. This was documented three years before Munich made the mistake.

    Mixing two versions of the same distribution is one of those things that at first seems like it might be a shortcut to the under-skilled, but is pretty much a trip straight to the gates of hell.

    Something else to consider: the rate of document errors with LibreOffice has reduced a lot. The fact that MS Office can now produce PDF also helps, because now you can mandate that all incoming documents be PDF.

    Most of the major deployments don’t choose Red Hat or Canonical because, when you do the licence costing, it is still cheaper to employ your own staff to do the job.

    There are quite a few fully successful government migrations to Linux desktops, and almost none of them uses Red Hat or Canonical.

    Mind you
    http://www.zdnet.com/article/iceland-swaps-windows-for-linux-in-open-source-push/
    Iceland is nicely on the path to being a disorganized mess that makes Munich’s style look tidy.

    The UK government is working with Collabora, so its move away from Microsoft dependence is going fairly smoothly.
    https://www.gov.uk/government/news/collabora-deal-will-provide-savings-on-open-source-office-software

    So there are multiple migrations to Linux going forward at the moment that will not be stopped because you complain about Munich. That is why most people selling these migrations don’t use Munich as an example, when you have Malaysia and other better examples to use.

    Dr Loser, with Munich you need to compare their actions with those of the other successful migrations, to sort out when they were on the correct path and when they took the trip into hell.

  12. DrLoser wrote, “I’m not even convinced that Munich has a “custom built kernel.” I can’t even imagine why they would want to do such a thing.”

    Custom built kernels have many roles in IT:

    • for greater security one can remove all unnecessary code and audit heavily what’s left,
    • for greater performance one can set defaults and turn on options that optimize performance for the task at hand,
    • for thin clients one can make a kernel that’s smaller and loads faster, and
    • for greater reliability one can turn off options or features that are unlikely to be useful in the intended role.

    So, one must have very limited imagination not to see the advantages of customization. My Beast has a custom-built kernel. I did that so I’d know how if it were ever necessary, and yes, to speed loading a bit by eliminating code I would not need like many drivers and features. I built in my drivers, except for ECC. Even that saves some time at boot since modules don’t need to load. Customization my way greatly reduces the time to build a new kernel too. There’s just less code to compile. There’s nothing particularly wrong with Debian’s kernels but they lack a few features I wanted and included stuff I won’t use simply because of my hardware environment. There are disadvantages like having to rebuild to add new hardware but that happens rarely.
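    The trimming workflow described above can be sketched with the standard Debian/upstream kernel build targets (the steps are the standard ones; which options to disable is the site-specific part, and the build itself needs a kernel source tree, compiler toolchain and root to install):

```shell
# Sketch: build a trimmed custom kernel on Debian and package it as .debs.
# Run from an unpacked kernel source tree.
make localmodconfig        # start from the running config, drop modules not loaded
make menuconfig            # hand-audit: disable unneeded drivers and features, and
                           # build the drivers you keep as built-in (=y) rather than
                           # modules, so nothing needs loading at boot
make -j"$(nproc)" bindeb-pkg   # compile and produce installable kernel .debs
sudo dpkg -i ../linux-image-*.deb
# Fewer options selected also means each subsequent rebuild compiles far less code.
```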

  13. oiaohm says:

    Funny how absolutely nobody has seen the cost-benefit of that, outside a bunch of self-interested neck-beards in Munich who have hijacked the system, Fifi.

    Oh, and by the way, I’m not even convinced that Munich has a “custom built kernel.” I can’t even imagine why they would want to do such a thing.

    The Andalusian Autonomous Government started its own distribution for the reason I stated, so I was not talking about Munich with that information; I was referring to a group that got it working properly.

    It is little things like 16-bit DMA, which is no longer included in the Red Hat, Ubuntu, SUSE and Debian default kernels. The feature is disabled because it is a security risk; sometimes you have to dial back security for hardware compatibility.

    Munich does have hardware old enough to need 16-bit DMA and other options that are deprecated on security grounds.

    Please note Windows 7 also has 16-bit DMA locked off. This is hardware too old to run current Windows, and yes, there is hardware too old to run current Linux distributions without a custom kernel.

    The Andalusian Autonomous Government’s own data on the topic says that being willing to dial back kernel security, by enabling some features disabled on security grounds, extends possible hardware operational life by about four years (at which point the hardware has normally all completely failed). That has saved the Andalusian Autonomous Government over 200,000 dollars per year in reduced hardware replacement. So the effort of custom-building a kernel can kind of pay for itself in these complex environments.

    Dr Loser, as you stated, you had never read about the Andalusian Autonomous Government, so you don’t understand a working major Linux install of comparable or larger complexity than Munich.

    Please stop calling the people at Munich neck-beards; they don’t have the experience that suggests. Munich started a custom distribution, which was the correct thing for hardware compatibility, but then made alterations way past what was required to achieve that objective, including alterations no true neck-beard would ever make.

    Hardware compatibility and security are a trade-off, because the faults are sometimes in the hardware.

    I don’t mean to be mean here, but if you read the Microsoft deployment guides you will find that automatic updates off is not truly an option.
    Dr Loser, you are not getting a cite for this, because you called me Fifi. I do have the cites for it. Working it out takes three bits of information about how Windows auto-repairs itself on a detected error. A minor hardware error can trip this off; the result is that you go from the bad situation of a minor hardware failure to the worse one of complete sections of the OS configured back to defaults. The fun part is that Microsoft did document it all.

  14. Dr Loser says:

    Once you have enough hardware variation, maintaining your own fork of a distribution, possibly with your own custom-built kernel, becomes less problematic than using an upstream distribution unmodified.

    Funny how absolutely nobody has seen the cost-benefit of that, outside a bunch of self-interested neck-beards in Munich who have hijacked the system, Fifi.

    Oh, and by the way, I’m not even convinced that Munich has a “custom built kernel.” I can’t even imagine why they would want to do such a thing.

    I imagine you have evidence with which to back your pathetic dreams up. Present said evidence, bitte schoen.

  15. Dr Loser says:

    I don’t mean to be mean here, but if you read the Microsoft deployment guides you will find that automatic updates off is not truly an option.

    A cite, please?

  16. oiaohm says:

    Robert Pogson, what you just described is why you run WSUS with Windows. With WSUS you don’t turn updates off at the client machines; you block clients from being informed of and provided with updates, at the WSUS server. Microsoft basically forces you to run a WSUS server when you want to block Windows updating; otherwise clients self-repair the automatic-updates-off option back to on at random, horrible times.

    I don’t mean to be mean here, but if you read the Microsoft deployment guides you will find that automatic updates off is not truly an option. WSUS means running Windows Server, so there is a serious lack-of-control problem with Windows clients unless you pay Microsoft for servers.

    Taking 100 percent control of the update process, and selecting when updates actually get provided to clients, is why you run a WSUS server with Windows.

    The reason for deploying WSUS in a Windows environment is the same as for rolling an internal distribution in a Linux/BSD environment: it is totally about control of the update process. So it is not exactly a special or odd task.
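    The Linux-side analogue of that WSUS choke point is an internal APT repository holding only vetted packages, with clients pointed exclusively at it. A minimal sketch using reprepro (the host name, file layout and package name are made up for illustration):

```shell
# Sketch: a tiny internal APT repository managed with reprepro; clients only
# ever see updates that have been explicitly imported into it.
mkdir -p repo/conf
cat > repo/conf/distributions <<'EOF'
Codename: internal-stable
Components: main
Architectures: amd64
Description: vetted internal packages
EOF
# Import a package only after it has passed internal QA (needs reprepro):
# reprepro -b repo includedeb internal-stable vetted-package_1.0_amd64.deb
# Clients are then pointed exclusively at this repository; on a real client
# this line would live in /etc/apt/sources.list.d/internal.list:
echo 'deb http://repo.internal/ internal-stable main' > internal.list
cat repo/conf/distributions internal.list
```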

  17. oiaohm wrote, “So arguing against Munich or anyone else rolling their own distribution for internal usage is exactly like saying companies should never run their own WSUS and never customize the Windows installation disks.”

    Not “exactly”. With FLOSS one knows exactly that one can do whatever one wants with the code and copies as long as the licence is not changed. That’s much more flexible than simply omitting or including a few files for drivers. Also, M$ can simply overwrite whatever it wants once it’s connected to the Internet. M$ can undo years of effort in a few minutes. I know at one lab we had turned off automatic updates and at 0300 M$ turned them back on and wrecked the lab so badly we had to reinstall 24 machines. We never did get the scanner to work again. Life is much simpler with GNU/Linux. The user has control of the software, not the other way around.

  18. oiaohm says:

    Dr Loser, I will answer your question here; it was already answered, by the way, if you read carefully.

    Once you have enough hardware variation, maintaining your own fork of a distribution, possibly with your own custom-built kernel, becomes less problematic than using an upstream distribution unmodified.

    The core reason for rolling your own distribution in a company or government setting is the same reason governments and companies rebuild Windows install images with an altered embedded driver list and run their own WSUS servers. Basically, rolling your own distribution is the Linux equivalent of doing both of those things as a Windows admin.

    So arguing against Munich or anyone else rolling their own distribution for internal usage is exactly like saying companies should never run their own WSUS and never customize the Windows installation disks.

    So rolling your own distribution for major FOSS deployments is normal behaviour, just as setting up WSUS and customizing the installation disk is for Windows.

    Munich's issue is how they customized.
    1) Attempting to roll your own distribution from two different branches of a distribution at once, as Munich did, is open to major hazards.
    2) Using under-skilled developers to make custom software.

    Dr Loser, if you had done your research and found the working examples, you would never have made a fuss over them rolling their own distribution. You could fairly have made a fuss over them foolishly crossing two distributions into one, which you should not do, or over a QA setup too poor for the complexity they are dealing with. There are things to complain about Munich for.

  19. DrLoser wrote, “2% up in half an hour is meaningless. Trying to attach meaning to it is dangerous.”

    Nonsense. There are a lot of 2%s that I’ve accumulated. My banker called me up to arrange an appointment to change a couple of my accounts. She understands that on a big enough base they amount to a lot of money. Thousands of people are trying to buy my assets by raising their bids. They are real people with real money. I ended the day up over 3% and that’s not been unusual.

  20. Dr Loser says:

    I’m happy today. My investment portfolio is up 2% in the first half hour of trading… I’m tired already making so much money before lunch.

    I’m beginning to get seriously worried, Robert. You do understand the concept of variance, don’t you? You do understand the time-scales involved? You have heard of the Dunning Kruger effect?

    2% up in half an hour is meaningless. Trying to attach meaning to it is dangerous.

    Seriously. Talk to TLW. TLW will offer you a degree of sanity here.

  21. oiaohm says:

    Robert Pogson, when you are talking about 750,000 machines, you are talking about a few thousand different supply orders.

    Well, folks with that many computers probably have inventory controls so they could do their testing on one or a few of each kind.
    Shock horror: that does not work at their scale.
    1) IT staff spending time testing a few thousand configurations costs too much.
    2) Vendor-repaired machines under warranty may have picked up different parts, so the few thousand purchase batches that make up the supply of computers expand in number.
    3) The most evil problem is hardware that fails in ways that are not 100 percent fatal, but enough to make an unimportant driver throw a kernel panic.

    The only serious problem of that kind I ever had was Nvidia NICs at Easterville. That’s a good reason to avoid Nvidia. I found a work-around by reconfiguring the driver without tweaking any source code.

    This is basically it: 90%+ of the time you don't have to do anything custom. It's the less-than-10-percent problem cases, a mixture of hardware configurations and hardware failures, that cause trouble. For example, a desktop computer with a failed internal part that no user task touches is not grounds for scrapping it.

    The issue with hardware failure is that you can have two machine configurations that look absolutely identical on the inventory list; the difference is that one has a bad batch of parts and the other does not. The bad part may be perfectly workable around simply by blacklisting that driver. Of course, by the time this happens the machines are normally out of OEM support contract coverage.

    The Andalusian Autonomous Government's machine numbers are growing at such a rate because their system is waste not, want not.

    If you want to run a waste-not-want-not solution, it really does pay to copy the Andalusian Autonomous Government's test-image system.

    Really, having a test image run and report to a central inventory system of some form that machine X passed certification for revision X of the new distribution is not hard. You could say running the test image makes sure you really do know the inventory of hardware you are working with.

    Robert Pogson, you could say you were partly lucky not to have a bad batch of something in the mix.

    At Munich, most of their 15,000+ machines were acquired in lots of 20, so 700+ different configurations. With 3-4 different configurations, like you had, Robert, you can get away without a test image. But once you cross 25 different configurations, test-image deployment becomes critical if you are not to run around attempting to make the deployed OS image work on machines that fail for some unknown reason after an update.

    A 4-minute cost to have the user tick off a test image versus a machine not working for 24+ hours for the staff member: it is not hard maths to work out which one is the cost saving. Four minutes per user to run a test image is worth it to avoid a percentage of machines not working and disrupting productivity for far longer. If the test image fails, the user reboots the machine and it boots up as normal.
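    The actual CGA tooling isn't published here, but the "user ticks off a test image" idea can be sketched in a few lines of shell. Everything below is an assumption for illustration: the two yes/no questions, the report format, and the inventory endpoint in the trailing comment are all hypothetical.

```shell
# Hypothetical test-image certification check: the user answers two y/n
# questions (read from stdin so the flow can also be scripted), and the
# function prints one report line that a central inventory could collect.
certify() {
    read -r sound_ok       # "y" if the user heard the test sound
    read -r display_ok     # "y" if the user saw the test image
    # Machine model from DMI, falling back to "unknown" off-Linux:
    model=$(cat /sys/class/dmi/id/product_name 2>/dev/null || echo unknown)
    if [ "$sound_ok" = y ] && [ "$display_ok" = y ]; then
        status=pass
    else
        status=fail
    fi
    printf 'model=%s status=%s\n' "${model:-unknown}" "$status"
    # In production this line would be posted to the inventory server, e.g.:
    # curl -s -d "model=$model&status=$status" http://inventory.example/certify
}
```

    A pass marks the machine as certified for the new image; a fail puts it on the "needs closer inspection" list the comment describes.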

    Please note the Andalusian Autonomous Government commonly buys in batches of 1,000. So even though Munich has a smaller number of machines, the number of hardware configurations they are dealing with is about the same as the Andalusian Autonomous Government's, so they really should be doing the same things. Since Munich is not, they are going to suffer some extra pain. This kind of thing has also come out at the various conferences where Munich's IT lead has been questioned.

    The number of hardware configurations at Munich and in the Andalusian Autonomous Government rather mandates a custom distribution. Yes, in both cases a lot of the hardware is run until it dies.

  22. oiaohm wrote, “the hidden evil is that departments would normally all have machines from the same source. So the first few machines you deploy may be fine until you hit that department and every one of their machines fails due to some hardware driver issue; then you have a huge stack of unhappy people.”

    Well, folks with that many computers probably have inventory controls so they could do their testing on one or a few of each kind. I’ve never had fatal problems of that kind in schools with 3 or 4 kinds of PCs. I guess if there were 100 kinds there could be a problem but almost all OEMs of desktop PCs used very standard parts. The only serious problem of that kind I ever had was Nvidia NICs at Easterville. That’s a good reason to avoid Nvidia. I found a work-around by reconfiguring the driver without tweaking any source code. The authors of the driver obviously had seen such problems, but it was silly having servers polling NICs instead of servicing interrupts… Still, I had hundreds of happy users.

    I’m happy today. My investment portfolio is up 2% in the first half hour of trading… I’m tired already making so much money before lunch. Simple things do work, just like GNU/Linux.

  23. oiaohm says:

    Robert Pogson, the hidden evil is that departments would normally all have machines from the same source. So the first few machines you deploy may be fine until you hit that department and every one of their machines fails due to some hardware driver issue; then you have a huge stack of unhappy people.

    It's a scale thing: once you get into the tens of thousands of machines, a small failure rate can equal quite a big mess. A 1% failure rate does not sound like much until you have 750,000 machines and it becomes 7,500 machines. So they need something like a 0.001% deployment failure rate or better, which is fairly simple to achieve with the correct systems.
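    The arithmetic in that paragraph, spelled out (shell integer arithmetic, which rounds down):

```shell
# Expected broken machines = fleet size x failure rate.
machines=750000
at_1_percent=$((machines / 100))        # 1% of 750,000
at_0_001_percent=$((machines / 100000)) # 0.001% of 750,000 (7.5, rounded down)
echo "1% failure rate:     $at_1_percent machines"
echo "0.001% failure rate: $at_0_001_percent machines"
```

    So the same 1% that means 1 broken PC in a 100-seat school means 7,500 support tickets at Andalusian scale, which is why the target rate has to drop by three orders of magnitude.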

    The question “how bad can it be” is really relative. The larger the number of computers, the more small failure rates turn into really huge problems. So you hunt down the minor niggles that smaller networks tolerate and solve them, because they are now major issues on a larger network. Yes, looking at those running larger networks can help you achieve the quality your size of network needs.

    In the beginning, they did it overnight using thin client technology.
    There were a few months of IT bedding the thin-client servers down before they dropped them on the staff. Yes, to the general staff it was an overnight change. It does show how little training general staff need. They invested absolutely nothing in retraining existing general staff; the training budget was spent purely on the IT staff.

    The Andalusian Autonomous Government demonstrates all the fastest methods of change-over, which makes them very interesting reading: 3 to 6 months bedding down thin-client servers, 1 day doing the instant migration to thin clients, then deploying Linux thick clients back later where needed. A total bye-bye-Microsoft plan. Interestingly enough, the massive sudden change saw an increase in productivity, not a reduction.

    It is totally different from the Munich mess. Read what the true old beards of Unix did with a rapid migration.

    Really, an Andalusian-style migration does not leave time to be bogged down in political debate. Because the IT staff were ready, the day after the political side voted to have Linux desktops, they did. It shows that in basically a snap of the fingers an area's dependence on Microsoft can disappear.

    The French Police used mostly the same lightning migration method.

    The fact that Munich did not use any thin-client solutions could in fact be part of the reason they have so much complexity. Thin-clienting an application to the desktop can in some cases avoid having to upgrade the desktop. Munich also restricted the options the IT department was allowed to use.

  24. oiaohm wrote, “Paying for a distribution versus customizing your own Debian: on costs, as soon as you cross 1,000 seats it is cheaper to do it yourself if your staff is trained.”

    The way I did it, the break-even point was near 1 machine. For a computer lab, for instance, it could be done in 1h with LTSP, using the teacher's computer as a server. I would walk around setting the machines to PXE booting while the installer was working, reboot the teacher's PC, and then turn on the lab-computers one by one to get their MACs registered. Done. Of course it took me many times that long the first time, but I was getting on-the-job training, which worked, obviously. Still, my first lab cost me way less time than installing TOOS on even a few PCs. The thing that always puzzles me is why anyone would continue to do things M$'s way when GNU/Linux is so easy and flexible.
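    For readers unfamiliar with the setup described above, here is a minimal dnsmasq fragment for the DHCP/TFTP side of PXE-booting such a lab. The addresses, paths, and MAC are illustrative assumptions, not Mr. Pogson's actual configuration.

```shell
## /etc/dnsmasq.conf fragment (hypothetical addresses/paths) for a PXE lab:
# dhcp-range=192.168.0.50,192.168.0.250,12h   # pool for lab clients
# dhcp-boot=pxelinux.0                        # boot file served to clients
# enable-tftp                                 # dnsmasq's built-in TFTP server
# tftp-root=/srv/tftp
## Registering a lab machine's MAC gives it a stable name and address,
## which is the per-machine registration step the comment describes:
# dhcp-host=00:11:22:33:44:55,lab-01,192.168.0.51
```

    One dnsmasq instance covers DHCP, DNS, and TFTP, which is part of why a single teacher's PC can serve a whole lab.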

  25. oiaohm wrote, “For volume successfully converted and maintained per month, the Andalusian Autonomous Government is one of the biggest and fastest at migrating.”

    In the beginning, they did it overnight using thin client technology. That's the path I recommend. Yes, you can break thousands of machines at once, but you can fix them equally fast, and how badly broken must something be if the first few copies don't reveal the breakage?

  26. oiaohm says:

    I first used GNU/Linux on a few machines in a school in 2000 as a complete novice. I couldn’t even get NFS working… By 2004 I and my students built a server and were doing client and server installations and SSHing all over the place. So, 5 years of part time experience is enough to be dangerous in this department.
    Robert Pogson, that is so true: 5 years is enough to be dangerous.

    Just on scale: the Andalusian Autonomous Government has over 750,000 computers under its management system. Seriously, Munich is nothing at less than 20,000. Yes, they deploy two images (one 32-bit, one 64-bit) on all 750,000 machines, and this is only possible due to a very solid and time-effective QA process. They expect to cross the 1-million-computer mark in the next 5 years. Please note that 750,000 does not count outside users' machines; that is machines the Andalusian Autonomous Government owns. When they first migrated to Linux they had over 250,000 machines and completed the migration in 2 years.

    Here is the killer: when the Andalusian Autonomous Government had only 250,000 machines, they had the same number of IT staff as Munich has now. So basically 3-6 months should have done 15,000 machines if the processes were effective.

    Yes, at 250,000 machines the odds that a dud image will break only one machine are insanely low; it would more likely be about a hundred, at which point your support desk is swamped in hell. At that scale, get QA wrong and you don't survive even one distribution update. Munich at 15,000 complained about 10-20 machines giving problems per distribution upgrade; at the Andalusian Autonomous Government's scale that is unsupportable, so systems must be in place to prevent it ever happening.

    If you want a dependable system, copy what the largest do.

    Robert Pogson, as you scale up, how effective your processes are becomes critical to how much time it costs you. A solid QA process that takes advantage of the normal staff using the machines means the IT office's QA time to get a distribution ready for deployment does not keep expanding with each newly added machine type. Properly designed solutions basically scale to massive size.

    Robert, if you think about it you will see points in your process where, with your limited experience, you had an ineffective QA process.

    If you spend the time tracking down the governments that have done successful migrations and the time they took, Munich is the slowest by a massive amount; it is insane. For volume successfully converted and maintained per month, the Andalusian Autonomous Government is one of the biggest and fastest at migrating. Of course, they don't outsource to anyone. Really, Munich's staff is big enough if the right plan and design had been used.

    The Linux desktop is in a lot of ways ready. The problem is having staff with the correct skills and knowledge to deploy it effectively. There are many large examples doing it perfectly every single time.

  27. oiaohm says:

    1) I contend that Munich would have been better off, given the stipulation, working with either Red Hat or Canonical (or similar FLOSS provider). That way they would not have been working on their own. It’s easy to imagine them getting a massive discount.
    The reality is that if Dr Loser's arguments against Linux do not hold on price, he is screwed.

    Paying for a distribution versus customizing your own Debian: on costs, as soon as you cross 1,000 seats it is cheaper to do it yourself if your staff is trained. Different FOSS migrations prove this point.

    FOSS support companies like Collabora do not make a distribution. They will train your staff to build your own internal one correctly from an existing distribution. They also provide coding services, for a cost, to write extensions correctly and maintain them.

    If Munich had contracted Collabora at the start to train and set them up, they would not have had these issues, as they would have been trained in the list of things never to do when customizing a distribution, and the extensions would have been coded correctly. The cost of using someone like Collabora in most cases is not per seat.
    https://www.collabora.com/services/planning.html

    Most of the Collabora costs are one-off, with light ongoing costs: training for new IT staff and commissioning Collabora to customize things or fix things upstream.

    Red Hat, Canonical and other commercial distribution providers are quite expensive compared to the Collabora path. Of course, depending on how much work you need done, the Collabora path can be more expensive than employing quality FOSS developers and staff directly.

    Dr Loser, you have never done the training on custom distributions, or you would have learnt that making a custom distribution is quick; validating it can be time-consuming. The Andalusian Autonomous Government, the French Police and Collabora showed a really good method: a test image that costs less than 4 minutes of general staff time to tick machines off as suitable for the new distribution version or not, and that produces a list of machines needing closer inspection before migrating. Key phrase: general staff, meaning the staff who use the machine normally, not someone sent out specially to find out.

    Munich's IT lead was asked about sending a test image out over the network to find out which machines could take the new image and work. Due to the sub-department IT split mess that Munich is, they cannot do that.

    OK, how did the Andalusian Autonomous Government deal with that sub-department management issue? The first thing they made was the CGA (Advanced Management Centre), which is basically a per-site WSUS-like solution with the means to push out a test image and report back whether there are any problem machines out there.

    The difference with this path is the means to actively test every real computer in your network for compatibility with what you are doing. There is no point spending hours attempting to make a perfect distribution if it is not going to have the hardware compatibility you need.

    The thing to remember is that going with Red Hat or Canonical does not remove the possibility that some update of theirs will be incompatible with the hardware you have. So not having a framework like this can hurt you whether you roll your own distribution or pay someone like Red Hat or Canonical.

    What I have just stated is about having the frameworks in place for success, not about paying money over and over to third parties. Imaging Windows out to a stack of machines has its own share of issues. Correct frameworks mean you know your images are broken before you render a machine unusable.

    Correct frameworks to find out whether the distribution you are building will work on your machines save many wasted hours.

    Dr Loser, if you have watched the videos of Munich's IT lead talking, take note of what he says is impossible, then look up what other successful migrations did with the same problem. You will see the huge problem: Munich's IT don't know how to do the job well, and they have not commissioned work on the software they need to do it well, because they don't know they need it.

  28. oiaohm wrote, “please look at Andalucía: they started with experienced staff in 2004. They even had their own properly customized deployment solution in under 5 years. The reality here is Munich started with Linux novices who had big dreams. Andalusian Autonomous Government core staff in 2004 already had 16+ years experience each dealing with Linux, Unix and BSD.”

    I first used GNU/Linux on a few machines in a school in 2000 as a complete novice. I couldn’t even get NFS working… By 2004 I and my students built a server and were doing client and server installations and SSHing all over the place. So, 5 years of part time experience is enough to be dangerous in this department. Still *nix is a very broad field that likely has no one fully cognizant of everything. Just read LKML. There are so many dialects in the one project with so many specialized people. To do an installation and just add the packages needed for a particular working environment is not much of a challenge. I like to do it with apt-get and dpkg. Others use point and click stuff. It just takes a bit of time to learn what packages are available and how to find packages that do certain things, just searching. A good head start can be had by making a minimal installation and then adding a few key packages that suck in most of the dependencies needed to make a usable desktop. A knowledge of X helps too. If you are building a terminal server, you don’t actually need to install an xserver there. You can install that on the clients instead. If you use multiple servers you can use ssh -Y and specialize servers by application. There’s no end to how simple or complex it can be made, whatever works in the situation. The beauty is that GNU/Linux is flexible enough for just about anything and even an incompetent nincompoop [SARCASM] like me can do it rather easily.
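    The minimal-install-plus-key-packages approach described above can be written down as a short recipe. This is a sketch only: the package names are illustrative Debian choices (not the exact set Mr. Pogson used), the server name is hypothetical, and the commands are shown commented out because they need root and a network.

```shell
## Recipe only -- run as root on a fresh minimal Debian install.
## Package names are illustrative; swap in whatever your site needs.
# apt-get update
## A few key metapackages suck in most of a usable desktop via dependencies:
# apt-get install --no-install-recommends xorg xfce4 libreoffice-writer
## Finding candidate packages is just searching:
# apt-cache search "pdf viewer"
## On a terminal server that needs no local X server, run the application
## remotely with X forwarding from a client that has one (hypothetical host):
# ssh -Y user@appserver libreoffice
```

    The `--no-install-recommends` flag is what keeps the "minimal installation plus a few key packages" lean instead of dragging in a full default desktop.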

  29. oiaohm says:

    The regional Andalusian Autonomous Government of Andalucía in Spain developed its own Linux distribution, called Guadalinex in 2004
    You need to follow up on what happened here, Dr Loser, because they really did release their own distribution; they had been using a non-customized one since 1998. So this is really old government usage.
    4) But they didn’t. They insisted on releasing their own distro
    Dr Loser, this is somewhat wrong. LiMux from Munich is an internal-only distribution, not a released distribution. One of the key differences between Andalucía and the French Police versus Munich is that both Andalucía and the French Police have open FOSS project sites for their own distributions, opening them up to third-party auditing.

    Basically, Dr Loser, there is a big difference in meaning between making your own distribution and releasing your own distribution. Munich made its own distribution but did not in fact release it. Truly releasing their distribution might have made a lot of difference, because many excuses for being late don't cut it when you release publicly.

    2) You contend that you could have built the Limux distro in an hour or so. You are being somewhere between disingenuous and senile. That was never a realistic possibility.
    The French Police spun up their first custom distribution, the one that went into production, in 2 hours.

    Pulling that off involves many bits of software and sneakiness.
    http://www.instalinux.com/ from HP makes custom install images quickly, including letting you customize the repository update location. Making a test image and a production deployment image in under 2 hours using the HP solution is straightforward.

    The next part of the trick is network boot: machines on the network load a test image from the server exactly once and answer simple questions on screen, like “do you hear sound?” and “do you see an image?”, as confirmation that hardware support is working. The French Police already had network boot in place to restore Windows when it broke.
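    Serving that one-shot test image over the network needs only a small boot-menu file on the TFTP server. A hypothetical pxelinux fragment (the filenames and paths are assumptions):

```shell
## /srv/tftp/pxelinux.cfg/default (hypothetical filenames): boot every
## PXE client straight into the hardware test image, no prompt.
# DEFAULT testimage
# PROMPT 0
# LABEL testimage
#   KERNEL test/vmlinuz
#   APPEND initrd=test/initrd.img boot=live quiet
```

    After a machine passes, the server can switch that machine's per-MAC config file back to local-disk boot, which is what makes the test run "exactly once".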

    3) The neck-beards in question could easily have customised everything up the wazoo. After all, they were allowed ten years to do so.
    Neck-beards suggests long-term Linux/Unix usage experience and training when they started. That was not the case at Munich. Please look at Andalucía: they started with experienced staff in 2004 and even had their own properly customized deployment solution in under 5 years. The reality here is that Munich started with Linux novices who had big dreams. The Andalusian Autonomous Government's core staff in 2004 already had 16+ years of experience each with Linux, Unix and BSD. The French Police's core staff in 2007 had 12+ years each with Linux, Unix and BSD. When Munich started, the core staff had basically zero; one guy had 12 months. So Munich is only just now getting to the point of saying they have experienced staff. Yes, Munich has had to live through the learning curve.

    I contend that Munich would have been better off, given the stipulation, working with either Red Hat or Canonical (or similar FLOSS provider). That way they would not have been working on their own. It’s easy to imagine them getting a massive discount.
    Dr Loser, outsourcing to Canonical or Red Hat would not have solved the fact that Munich's staff were under-trained and under-skilled for the problem at hand. Formal training or long-term experience was required; Munich had neither. There are prior examples of failed SUSE migrations in the EU where under-trained support staff caused the failure. So outsourcing to a third-party distribution does not solve the problem of on-site support staff not being skilled enough.

    Any good reason to assume that it would take ten years to customise software?
    This is in fact answered by the same thing. How many Munich staff had years of programming experience or formal training in programming when they started? The answer, again, is zero. So they have had to learn to program on the fly. The result is very slow customization and poor code design in places, leading to even more problems.

    Dr Loser, there is an entire Asian country where everything in government runs on Linux. The software customization for that was done internally in 4 years, with no outsourcing. With skilled staffing, the examples show software customization done in under 5 years; that country completed 100% migration of government systems in 2 years.

    Munich is a pure example of what not to do. Yet for some reason the cost blow-outs are still not bigger than the cost of paying for Microsoft products. Munich's total migration spend exceeds the Andalusian Autonomous Government's and the French Police's migration costs by a factor of 100 at minimum. So for every 100 dollars Munich spends, the other two only had to spend 1 for the same final result. Yet somehow Munich is still cheaper than paying for Microsoft. HP's quote on what a Linux migration required got the training figure wrong, as it presumed you have to train everyone. If you want cost-effective, you will either employ experienced staff or get your IT staff trained.

    Dr Loser, the reality is that Munich is pretty much the worst-case example that somehow still works. It's like watching videos of people getting hurt and laughing, so it makes interesting news. The ones that actually work don't make much news.

  30. Dr Loser says:

    Incidentally, Robert, we have all customised software. I have customised software. You have customised software.

    Any good reason to assume that it would take ten years to customise software?

    I mean, even FLOSS software doesn’t have that much technical debt to overcome.

  31. Dr Loser says:

    No, they didn’t. They used FLOSS so they didn’t have to write a lot of software, just the stuff they wanted customized.

    This is a hard furrow to drive, Robert. I can’t imagine why. I have already given you the stipulation that FLOSS would be better than proprietary alternatives, and as you know I don’t believe that for an instant. But the essence of honest discussion is to consider the alternatives, in this case the FLOSS alternatives, as stipulated.

    1) I contend that Munich would have been better off, given the stipulation, working with either Red Hat or Canonical (or similar FLOSS provider). That way they would not have been working on their own. It’s easy to imagine them getting a massive discount.
    2) You contend that you could have built the Limux distro in an hour or so. You are being somewhere between disingenuous and senile. That was never a realistic possibility.
    3) The neck-beards in question could easily have customised everything up the wazoo. After all, they were allowed ten years to do so.
    4) But they didn’t. They insisted on releasing their own distro

    My theory here, Robert, is that the Muenchenistas have sucked tens of millions of Euros off the public teat over the last ten years or more in order to pleasure themselves by releasing their own distro.

    You have yet to offer a credible alternative.

    Feel free to do so. Try not to generalise. Imagine that you are the guy in charge of IT in Munich. Consider what value might have been delivered by the neck-beards.

    Just be honest for once. I am still stipulating a Linux solution. Do you seriously believe that the Muenchenistas have delivered one with an acceptable Cost/Benefit, even in FLOSS terms?

  32. DrLoser wrote, “The neck-beards in charge created a whole new distro.”

    No, they didn’t. They used FLOSS so they didn’t have to write a lot of software, just the stuff they wanted customized.

  33. oiaohm says:

    Deaf Spy
    Then you claim Munich actually helped LO. Good, but their mail-merge is still broken, and LO is not helping them in anyway for some weird reason.
    Exactly why do we need to fix what is already fixed?

    By August 26, 2015, the date of the TechRepublic article, it was already fixed. The conference video it quotes is from 3 months before that. That is another issue with TechRepublic: not putting information up in a timely manner and not updating when things have been fixed. One of the LibreOffice developers in that video mentions prototype work to fix LibreOffice mail merge for good, and that was applied before the TechRepublic article. That developer helped Munich fix their extension properly; he works for Collabora. Munich could have employed Collabora to fix the OpenOffice/LibreOffice problem in the first place, and it would have avoided all the issues.

    So you could say the mail-merge issue is nothing more than over-zealous in-house development.

    There is a substantial difference between a school-teacher burning a custom DVD “distro” and a municipal government electing to spend ten years or more with an internal software house — for all intents and purposes — that creates its own “distro.”
    What about the French Police, Dr Loser? They rolled their own distribution from the start, including internal custom forensics software.

    The French Police, a bigger system than Munich, employed Collabora to do their OpenOffice/LibreOffice work, and their staff undertook training in how to run a distribution correctly.

    Brazilian schools do the same kind of thing, but instead of Collabora they used the BrOffice team, whom they have since employed fully internally.

    This shows the two working methods.
    1) Outsource to a company that has FOSS developers able to extend the project.
    2) Employ existing FOSS developers with a track record on the project you need modified.

    Munich took a third method, one with a higher failure rate: put unseasoned, fresh developers on the project to extend it, then have everything go wrong.

    Outsourcing to extend a project is different from maintaining your own distribution.

    There aren’t any figures for this that I know of, but I’d guesstimate that the “distro” effort alone must have cost at least five man-years. Probably closer to fifty.
    Google and the French Police have produced figures on maintaining their custom distributions. Well-set-up costs are as follows: 2 servers running 24/7 (one running automated QA on every new change) and 2 five-day weeks of man-hours a year. HP has in fact written software to automate the process; it is FOSS, and it includes automated chroots for running old applications on the new base and new applications on the old, avoiding the crossing-the-streams explosion.
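    For readers who haven't met the "old applications on a new base" trick: one standard way is a chroot of the previous release. A hypothetical schroot fragment (the chroot name, paths, and mirror are assumptions, not the HP tooling itself):

```shell
## /etc/schroot/chroot.d/oldstable.conf (hypothetical names/paths): run an
## application built for the previous release inside a chroot on the new base.
# [oldstable]
# type=directory
# directory=/srv/chroot/oldstable
# users=staff
## Populate the chroot once with debootstrap, then run the legacy app in it:
# debootstrap oldstable /srv/chroot/oldstable http://mirror.internal.example/debian
# schroot -c oldstable -- legacy-app
```

    The same mechanism works in reverse (a newer-release chroot on the old base), which is the "new applications on old" half of the comment.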

    The fact is that Munich has cost itself way more than it should, and they are very slow to deploy newer versions.

    There is only a single example of a governmental organisation
    Funny how incorrect.
    The regional Andalusian Autonomous Government of Andalucía in Spain developed its own Linux distribution, called Guadalinex in 2004
    Yes, they rolled their own distribution, did it correctly, and have had no problems since 2004. So they have not had any news coverage, Dr Loser.

    The French Police based their deployment model on that of Andalucía in Spain.

    Basically, Dr Loser, none of your points hold up if you compare to Andalucía instead of Munich. Copy Andalucía, not Munich, if you want cost-effective.

  34. Dr Loser says:

    Poggux.

    “It only took an hour. It’s neat, it’s sweet, it’s svelte, and you’d hardly notice the difference between Poggux and standard Debian!

    “In fact, I defy you to tell the difference. Just as good as standard Debian, but with the added goodness of Miracle Fraudu-Pog!

    “Accept no substitute!

    Poggux has been successfully deployed over several school districts in Manitoba over the last ten years. Limux has only ever been deployed once.”

    I trust this makes clear to you quite how much of an imbecile you are making yourself out to be, Robert.

  35. Dr Loser says:

    By creating these distros, less than 1h’s work, I saved hundreds of hours installing software.

    Apparently you need this obvious fact drilled into your thick head, Robert.

    There is a substantial difference between a school-teacher burning a custom DVD “distro” and a municipal government electing to spend ten years or more with an internal software house — for all intents and purposes — that creates its own “distro.”

    Your experience is not remotely similar to the Munich experience. One man hour does not remotely equate to something that is quite clearly more like ten thousand man hours.

    Which part of this blatantly obvious comparison are you unable to follow?

  36. Dr Loser says:

    It’s not insane. It can be very efficient. At my last school, I created a distro consisting of a minimal installation of Debian GNU/Linux on some old PC kicking around.

    You really aren’t addressing the question, are you, Robert?

    Any old fool (I am an old fool) can “create” a “distro” by taking, say, Debian — your choice, let’s go with it — and stripping out the bits you don’t like, want, or need.

    But this isn’t actually what Munich did, is it? The neck-beards in charge created a whole new distro. There aren’t any figures for this that I know of, but I’d guesstimate that the “distro” effort alone must have cost at least five man-years. Probably closer to fifty.

    And for no discernible gain whatsoever, compared to say hiring a decrepit old man from the boonies in Manitoba to slim down an already extant distro.

    I’m with you on this one, Robert. I see no reason why Munich gains by investing (my multiplier is $50,000 per man year) something like $250,000 or even $2,500,000 on their own distro.

    Why, they could have hired said decrepit old man from the boonies in Manitoba at a tenth of the price, and they’d have got the same result.

    How very sad. In so many very different ways.

  37. DrLoser wrote, “Explain why any organisation, anywhere, would roll their own distro. Go on, think for yourselves, little flossie sheeples. Why do that? It’s insane.”

    It’s not insane. It can be very efficient. At my last school, I created a distro consisting of a minimal installation of Debian GNU/Linux on some old PC kicking around. I then added basic packages useful for desktops in schools. Every package that went into that installation was cached on a local proxy server. I included a little script to name the PC based on its IP address and copied over a public key for administration by OpenSSH. I could install packages on one machine or on all of them with a single command. I didn’t even need a complete package-list, just items for the desktop environment and user-applications. Simple and fast. I could also boot the installer by PXE. Another technique I used within the lab was Clonezilla, copying a disc-image to a server and rolling it out to new machines. If I’d had a gigabit/s switch I could have loaded 24 machines in parallel by broadcasting. Even without the switch it took about 10 minutes to install an image. Another option was to create a bootable USB or CD-image. Some machines could not boot from those, but all but one could boot by PXE.
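    The naming-by-IP step described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual script; the "lab" prefix and zero-padded numbering are assumptions.

```python
# Hypothetical sketch: derive a deterministic hostname for a lab PC
# from its IPv4 address. The "lab" prefix and zero-padded last octet
# are assumptions, not the script actually used.
def hostname_from_ip(ip: str, prefix: str = "lab") -> str:
    """Map e.g. '192.168.1.23' to 'lab-023' using the last octet."""
    last_octet = int(ip.split(".")[-1])
    return f"{prefix}-{last_octet:03d}"

if __name__ == "__main__":
    # On a real machine the result would be written to /etc/hostname.
    print(hostname_from_ip("192.168.1.23"))
```

    Because the name is derived from the address, every freshly imaged machine gets a unique, predictable name without any per-machine configuration.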

    By creating these distros, less than 1h’s work, I saved hundreds of hours installing software. It worked for me. Converting machines to be thin clients was also a good option as the installation image could be minimal.

  38. Deaf Spy says:

    Now, now, Robert. First, you claimed Munich was saving money despite having to redo all their templates. You still haven’t brought any proof for that.

    Then you claim Munich actually helped LO. Good, but their mail-merge is still broken, and LO is not helping them in any way, for some weird reason.

    DIY is not fixing a few bugs. Gosh, I can work around some bugs even in a commercial project, and if I am skillful enough, I can even patch them at the assembly or binary level. DIY means: get a team and do most of it.

    Btw, Robert, when was the last time you submitted a fix to a FLOSS project? For the record, I did so just a month or so ago, when I improved a function in DotSpatial to run in O(n) instead of O(n²). The poor morons were using a list with a full scan to find unique values, instead of a hash table. If you are curious, I can even give you the changeset number and the code snippet in question.
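    DotSpatial is a C# project, but the pattern described here is language-independent. A minimal Python sketch of the two approaches (the function names are illustrative, not the actual DotSpatial code):

```python
# Illustrative contrast between the two ways of collecting unique values.
def unique_slow(values):
    """O(n^2): a full linear scan of the result list for every element."""
    seen = []
    for v in values:
        if v not in seen:   # 'in' on a list scans the whole list
            seen.append(v)
    return seen

def unique_fast(values):
    """O(n): membership tests against a hash set are O(1) on average."""
    seen = set()
    out = []
    for v in values:
        if v not in seen:   # 'in' on a set is a hash lookup
            seen.add(v)
            out.append(v)
    return out
```

    Both return the same values in first-seen order; only the membership test changes, which is exactly the list-versus-hash-table swap described above.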

    In the end, perhaps I know more about FLOSS than you do, Robert.

  39. Dr Loser says:

    On a slightly broader issue, and still stipulating for the sake of argument that a Gnu/Linux desktop solution will always beat a closed source solution — given whatever axioms and/or evidence you choose — answer me this.

    Explain why any organisation, anywhere, would roll their own distro. Go on, think for yourselves, little flossie sheeples. Why do that? It’s insane.

    Nobody rolls their own kernel. Nobody rolls their own C runtime library, for that matter. The whole idea is completely contrary to the “caring and sharing” bit of FLOSS.

    Nothing wrong with customising your own FLOSS software, based upon a distro that has maybe a thousand guys upstream doing the curating. But that customisation would involve things like individual programs and even Open Office templates and stuff.

    A whole distro?

    The only reason to do that is because a bunch of temporarily politically “correct” neck-beards have managed to convince City Management of their worth through a pack of lies.

    Good for them. A job for life. But would you seriously recommend the Muenchen Abortion to any other city government whatsoever?

  40. Dr Loser says:

    a) Shut up, Fifi, you ignorant slut. (A reference to Saturday Night Live, although Jane is worth ten of you.)

    There are many examples of different government departments around the world successfully rolling their own distributions and doing it without issues.

    b) There is only a single example of a governmental organisation (in this case a municipality, although I make no distinction at this point) rolling their own distribution and sticking with it. Vienna gave up — you even admit this yourself. Freiburg gave up. Insofar as there are successful examples of Linux distros in government organisations, you’re looking at something like the Gendarmerie in France. You are not looking at a Muenchen equivalent. Why not? Because the Gendarmerie did not “roll their own,” Jane, you ignorant slut.
    c) I can’t think of a single example of a “hand rolled” distro that is still being used (no idea of the sunk costs Muenchen has) when everybody around has shown not the slightest interest in it. To save you the bother: Venezuela has an equivalent. The PRC has an equivalent. Even North Korea has an equivalent. Useless as those equivalent systems might be, every last one of them has more than one organisation that depends upon it.

    I am not, here, arguing against the adoption of Gnu/Linux, Jane, you ignorant slut.

    I am simply arguing that the choice of either Red Hat or Canonical, or indeed FLOSS providers as yet unconsidered, would have been far wiser.

    Do you have a specific come-back on the Red Hat/Canonical thing?

    If not — shut up, Jane, you ignorant slut.

  41. oiaohm says:

    Dr Loser http://www.spi.dod.mil/lipose.htm
    There are many examples of different government departments around the world successfully rolling their own distributions and doing it without issues.

    I say again. It is not the business of governments, at any level, to run a software house and to develop a personal distro. They are not good at it.
    So the business of governments can include creating an in-house distribution. There are some good cases for it. As with Google's in-house distribution, the main advantage is absolute control of the update cycle; the same can apply to government departments.

    Did the Munich guys undertake any training before attempting to roll their own distribution? The answer is no; they just went for it and prayed everything would work out.

    I would say: at least work through the Linux Foundation training documentation on rolling your own distribution before you do. Most of the major errors the Munich staff made, back-porting LibreOffice wrongly and so on, breaking stacks of stuff, come from lack of training.

    Red Hat provide these services. Canonical provides these services.
    Not always possible; it all depends on security requirements. Red Hat and Canonical are not approved in Germany for government contracts, so in Munich's case neither of those is in fact an option. SUSE, HP and IBM all have staff inside Germany and can offer government support obeying the rule that support staff for government work must be inside German legal jurisdiction. Yes, it is a painful rule when dealing with the German government.

    The reason the Munich IT lead got ripped into at the Debian conference is that he directly admitted to performing actions that the training on rolling your own for custom usage tells you never to do, because they will bring hell.

    The fact that the Munich IT crew have done so much against recommended Linux best practice and it still worked shows that poorly trained staff can make a Linux migration work. Would I like to be at a place with poorly trained staff doing the migration? No, I would not.

    Dr Loser, Munich compared to the French Police and other government migrations is pure amateur hour versus professional. A lot of Linux migrations have also failed because they did no training and attempted migration patterns that will never work.

    To be correct: HP, IBM and the Linux Foundation all provide staff training around the world on how to roll your own distribution without blowing your feet off. All three also provide free guides on how to do it. The Munich staff did not get the training or read any of the guides, so Munich having distribution-maintenance issues is a given, as is being picked on at conferences when they state they are doing things they should never have done.

    The only thing that saved Munich is that the IT lead had the IBM white paper on how to migrate from Windows to Linux. He did not have the IBM white paper on how to roll your own distribution safely and effectively.

  42. Dr Loser says:

    The prime motivation of Munich was not to save money but to gain independence.

    I bow to your inside knowledge on this one. Goodness knows where you get it from. Evidently you’re not on the mail-merge list, so it’s quite possible that Munich have somehow found a way to communicate.

    I say again. It is not the business of governments, at any level, to run a software house and to develop a personal distro. They are not good at it.

    If independence were the issue, then basically any FLOSS solution would have sufficed. Red Hat provide these services. Canonical provides these services.

    Might it just be that you are hopelessly misinformed, and that the real purpose was to employ a gang of worthless neck-beard layabouts on the public teat?

    Non sunt multiplicanda entia sine necessitate.

  43. oiaohm says:

    Yes, if you watch the Debian video you will see that the mail merge that was not performing was in fact Wollmux's implementation of mail merge. It was Wollmux's integration with LibreOffice that broke. The cause was kind of horrible: due to a security problem, a set of functions in LibreOffice was marked deprecated and then completely removed. Guess what Wollmux was using. That is also in the video.

    Of course, when he said mail merge did not work, a developer who works on LibreOffice directly asked about it, only to find out that it was not LibreOffice's mail merge but Wollmux's mail merge that had the issue, and the issue was a deprecated section marked internal-only for LibreOffice and OpenOffice, meaning no extension should have been using it directly.

    Any guesses what was deprecated? Embedding a full copy of SeaMonkey/Thunderbird inside LibreOffice/OpenOffice, with no user interface, just to access the SeaMonkey/Thunderbird address book.

    http://ostrovsky.org/libreoffice-new-mork-driver/
    Yes, the replacement landed in 2012. That post gives details of the disaster and the security downsides of the way it was implemented, which brought major trouble. Wollmux was not only using the address-book functionality but other embedded bits as well, in order to actually send mail too. Hacking a workaround and then extending on top of it is highly not a good idea.

    For all the things Munich did completely wrong, it has been interesting that every audit shows them coming in under budget.

    The lesson from Wollmux vs LibreOffice: just because something is FOSS does not mean you can get away with randomly hooking in. Make sure the features you are using are properly exported, and don't go linking into internal-only areas if you expect your code base to keep running for a long time.
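    The same lesson can be shown in miniature. This hypothetical snippet is not Wollmux or LibreOffice code, just an illustration of coupling to an internal detail versus going through the exported API:

```python
# Hypothetical illustration of the Wollmux lesson: code that reaches into
# an underscore-prefixed "internal" attribute breaks when the library
# refactors, while code using the documented accessor keeps working.
class AddressBook:
    def __init__(self):
        # Internal storage layout; free to change in any release.
        self._backend = {"alice": "alice@example.org"}

    def lookup(self, name):
        """The supported, exported API for address lookups."""
        return self._backend.get(name)

book = AddressBook()

# Fragile, Wollmux-style coupling: depends on the internal layout.
email_fragile = book._backend["alice"]

# Robust: goes through the public method, survives internal rewrites.
email_stable = book.lookup("alice")
```

    Both lines return the same value today; only the second is guaranteed to keep working once `_backend` is restructured, which is exactly what happened when LibreOffice removed its deprecated internal functions.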

    Yes, Wollmux is the only documented extension to LibreOffice/OpenOffice that went and used the internally-marked sections around email. All other extensions used Thunderbird's own interface or the UNO database access methods (the correct method).

  44. Deaf Spy wrote, “In FLOSS, it is either take it, or leave it, or DIY.”

    That’s not correct. The DIY part can be everything or only a few bugs or features; certainly with popular products like GNU/Linux and LibreOffice, Munich didn’t have to do even 1% of the work. Meanwhile, with non-Free software like M$’s OS and office suite, users were paying huge margins, money wasted compared to doing the sliver of the work they needed to improve the software.

    Where Munich did a major part of the work was the template-system for LibreOffice. They wanted something that prevented duplication across all their departments and reduced the total number of templates in use. That took some work but also made the whole process of migration much easier. They did make their work FLOSS so others could use the software they created. See Wollmux.

  45. DrLoser wrote, “I was under the impression that Limux is founded on Ubuntu, not on Debian.”

    Ubuntu is based on Debian GNU/Linux. Some people love Ubuntu. I am not one of those. I see a lot more breakage with Ubuntu than with Debian.

    DrLoser also wrote, “Red Hat leverages FOSS to deliver successful implementations to a wide variety of companies, municipalities, and organisations in general. Hundreds. Thousands. Tens of thousands.
     
    And Limux? It has delivered precisely one implementation. One single solution.”

    The prime motivation of Munich was not to save money but to gain independence. They may have gone a bit too far and missed several opportunities to save costs. e.g. Thin clients, and letting the FLOSS community do more of the work. Nevertheless they have gained independence and have saved money. Would RedHat have worked? Yes. Would RedHat have been less expensive than M$? Probably. Would they still have needed to do a lot of customization of their fleet of applications? Yes. Most of their earlier work was modifying templates. They made a system for that which is used by other jurisdictions. Good for them. Later they did a lot to manage applications, reducing the number quite a bit, and they rationalized their whole IT-system which was probably necessary no matter what OS they used.

    Judge a tree by its fruit. In total I think Munich was a force for good. It shows that a sizable municipality can become independent of M$ which is a good thing in general and it did create some local jobs. While I would have done it much differently they got what they wanted at a reasonable cost in time/effort/money. Munich is a major contributor to LibreOffice which certainly will benefit the whole world and along with using GNU/Linux continue to save them money compared to doing things M$’s way going forward. I have advised my local governments and my federal government to learn from the events and save my tax-dollars.

  46. Dr Loser says:

    RedHat and others did bid on Munich early on. Debian got the inside track because Munich chose to be self-supporting.

    Funny that. I was under the impression that Limux is founded on Ubuntu, not on Debian. An impression that has been reinforced by seeing the word “Ubuntu” continually referenced in every article I have ever read on the subject.

    But I’m sure you know better.

    Anyway, you’re evading the issue, aren’t you? I’m making the generous assumption that a professional solution from Red Hat would have been at least as suitable to Munich as the equivalent Microsoft offering. The fact that Munich chose to go it alone, rather than accept the Red Hat tender, is not evidence that they made the right decision. Or even the cheapest decision.

    The fact of the matter, Robert, is that Red Hat leverages FOSS to deliver successful implementations to a wide variety of companies, municipalities, and organisations in general. Hundreds. Thousands. Tens of thousands.

    And Limux? It has delivered precisely one implementation. One single solution. Nobody else will touch it.

    That hardly sounds like an efficient use of resources to me.

  47. Deaf Spy says:

    Not with FLOSS because the organization can leverage the work of others.

    Weird, Robert. It was you who just wrote: ”do have the scale and resources to develop software”. Leveraging the work of others happens with commercial software, where you pay the others to do the work for you. In FLOSS, it is either take it, or leave it, or DIY.

    Sure, Munich published their numbers…

    You bring nothing, absolutely nothing to prove your own claim:
    “They did have to put in a lot of effort converting documents and templates but it was much less costly than paying for TOOS and M$’s office suite everywhere.”
    Prove this, Robert. Extract the necessary data from Munich’s financial reports and use it to prove this particular claim of yours.

    Basically all I needed to write was the web interface and some trivial algorithms. Easy. I didn’t need to write most of the code, just a sliver, thanks to FLOSS.

    Thanks for supporting my cause, Robert. In case you don’t remember, here it is:
    Actually, this is valid mostly for startups only, and almost never for big enterprises.

    P.S. I am flattered that you confuse me with an Englishman. I thought my English reeked of “Foreigner!”. Or is it that you never look at individuals and treat all people as a blob of biomass? Is that how you treated your students, Robert?

  48. oiaohm wrote, “Correct solution would have been go to the Libreoffice website and down load the Libreoffice DEB’s if you needed a newer version. “

    Amen. There’s little point to using FLOSS and contributing if you don’t use the product.

  49. oiaohm says:

    http://www.techrepublic.com/article/heres-the-one-major-problem-facing-munich-after-switching-from-windows-to-linux/
    Deaf Spy, those quotes are from a Debian conference where Glogowski gets the hell ripped out of him for being stupid. Ubuntu does two releases a year: the .04 releases are LTS and the .10 releases are testing, so .10 releases are to be expected to be full of bugs. A .04 release is done in two stages; the first point release is when .04 is declared production-ready. Guess what Glogowski partly did: he made a mix between .04 and .10 releases to attempt to get newer applications. KDE 4.x was not in fact busted in 12.04, but pulling a newer version of LibreOffice back from 12.10 brought hell forward. The correct solution would have been to go to the LibreOffice website and download the LibreOffice DEBs if you needed a newer version.

    Yes, with Linux, if the person managing it does not know what they are doing, they can really let hell loose on themselves.

    If you go and watch the Debian conference video, you find out that waiting for the first point release before putting something into production is what is called best practice.

    The mail-merge issue in LibreOffice has been fixed. Of course, this was made worse by Munich adding a Java add-on to do mail merge instead of funding upstream to fix the issue properly. The same thing came out of that Debian conference video you have not watched. Yes, the Munich IT lead seriously got the heck ripped out of him.

    http://mihai-varga.blogspot.com.au/2014/08/good-news-part-2.html Since 4.4, LibreOffice has had the means to connect to SharePoint and CMIS.

    Deaf Spy, of the alternatives to SharePoint, only some are paid services. The problem is that the paid services pay people to put up websites advertising them.
    https://en.wikipedia.org/wiki/Content_Management_Interoperability_Services

    FOSS does have a few SharePoint-replacement offerings. My biggest issue with most of them is that they require Java and are not pure native binaries.

    Dr Loser, basically I have told you before that the link Deaf Spy brought in is crap. Both of you need to find either the Debian conference video those quotes are from, or my prior post giving you idiots the link to it. TechRepublic takes stuff out of context all the time as click bait. You idiots keep falling for it.

    Dr Loser, sad fact: in a recent security survey of US government systems, the reality is that Munich running an old version of Ubuntu is more secure than most US government systems are.

    Now, Dr Loser, if I took Microsoft Windows developer-build parts, mixed them with Windows stable-release parts, and deployed this in production, I would be an idiot, right? Yet somehow, when someone does this with a Linux distribution, you have to blame Linux.

  50. DrLoser wrote, “a software department always results in an inefficient, expensive, and slow software development process.”

    Not with FLOSS because the organization can leverage the work of others.

    DrLoser also wrote, “Can you prove this claim, Robert? “

    Sure, Munich published their numbers and didn’t even spend all the money allocated for training. A lot of the expense was one-time-only, so the cost keeps amortizing over the years. I’ve done a few migrations and the cost was instantly recovered in better performance and lower maintenance. I did write some software for migrations, web applications, and scripts, with most of the backroom stuff already solid and usable. Basically all I needed to write was the web interface and some trivial algorithms. Easy. I didn’t need to write most of the code, just a sliver, thanks to FLOSS. The same goes for Munich. They chose to do more, giving back to the FLOSS community, because that makes their software better and because they could afford it.

  51. Deaf Spy says:

    They did have to put in a lot of effort converting documents and templates but it was much less costly than paying for TOOS and M$’s office suite everywhere.

    You are an academic person, so let’s imagine you’re writing a paper. Can you prove this claim, Robert?

    As such, they do have the scale and resources to develop software and whole IT-systems.

    This is very incorrect, Robert. Having a software development department is having a company within a company: an internal company, very hard to manage, because managing a software company is not what government is about, nor what basically any other enterprise is about. An IT department to support your hardware is one thing, but a software department always results in an inefficient, expensive, and slow software development process.

    It is often less expensive for them to customize and create FLOSS rather than paying for non-Free offerings.

    Actually, this is valid mostly for startups only, and almost never for big enterprises.

  52. DrLoser wrote, “Governments are in the business of governing and service provision. Not of doinking around with operating system software.”

    That view is blind to a key point. Governments are organizations, often one of the largest organizations in their jurisdictions. As such, they do have the scale and resources to develop software and whole IT-systems. It is often less expensive for them to customize and create FLOSS rather than paying for non-Free offerings. That said, I do believe they made mistakes that threw away some of the advantages of GNU/Linux: thick clients instead of LTSP thin clients, and way too much complexity. They did have to put in a lot of effort converting documents and templates but it was much less costly than paying for TOOS and M$’s office suite everywhere. That’s a testament to M$’s cost, not Munich’s brilliance.

    RedHat and others did bid on Munich early on. Debian got the inside track because Munich chose to be self-supporting.

  53. Dr Loser says:

    Reading Deaf Spy’s link, which I heartily recommend, you can see the folly of a municipal government building its own distro over the decaying corpse of an out-of-date Ubuntu LTS. That’s a whole lot of brokenness and merely an “aspiration” to source specific new hardware which runs the thing natively. (And I doubt that’s going to be cheap.)

    Governments are in the business of governing and service provision. Not of doinking around with operating system software.

    It brought one more question to mind, however. Obviously you can replace one solid, stable, OS with another solid, stable OS. Obviously Limux is neither solid nor stable. So why didn’t the Muenchenistas go for the obvious alternative, which is Red Hat?

    Answer: because this isn’t really about Gnu/Linux at all, is it? It’s about a bunch of neck-beards procuring the opportunity to play in their own little sand-pit, by offering illusory “cost savings.”

    If it was about Gnu/Linux, they’d at least have considered a Red Hat tender.

  54. DrLoser lost the thread with, “Except that it isn’t one of your foundations at all, is it, Robert? You haven’t even touched M$ software in ten years.”

    I will not be moving from TOOS to GNU/Linux. I did that 17 years ago. I’m moving from GNU/Linux on AMD64 to GNU/Linux on ARM64. That’s a change of software and hardware for the better and has nothing to do with M$. ARMed software is quite a bit more compact which should improve performance over the network, in caches and in the CPU. I thank Goodness for the privilege of having real choice rather than “anything, as long as it’s Wintel”.

  55. Twitter says:

    The Free Software Foundation is a good example for an SME. They run free software from the firmware up and have a nice summary of useful applications:

    https://directory.fsf.org/wiki/Category/Business/accounting

    While their staff is small, I’d say they successfully manage a tremendous community and have an outsized impact. In their case, ha ha, it really is the software that makes the difference.

    They profile themselves and some larger organizations, such as the US Department of Defense, here:

    https://www.fsf.org/working-together/whos-using-free-software

  56. Dr Loser says:

    Twitter wrote, “I imagine their complexity would vanish if they were to dump old Microsoft crap the way ordinary companies do”.

    Yes, that’s one of the foundations of my impending move to GNU/Linux on ARM: It works and it’s good enough.

    Except that it isn’t one of your foundations at all, is it, Robert? You haven’t even touched M$ software in ten years.

    Your present foundation is that your old system is dying, despite the fact that all you ever do with it is run a blog, maintain a recipe database, and amuse yourself by building a Linux kernel every week or so … despite the fact that you can easily download one, ready built. A waste of electricity, really.

    You’re aiming to move from one cheap Linux server to another slightly cheaper and more modern one.

    Not a very exciting prospect for, say, the average SME.

  57. Twitter wrote, “I imagine their complexity would vanish if they were to dump old Microsoft crap the way ordinary companies do”.

    Yes, that’s one of the foundations of my impending move to GNU/Linux on ARM: It works and it’s good enough. I certainly don’t see GNU/Linux punishing me for keeping stuff for years.

  58. Twitter says:

    I imagine their complexity would vanish if they were to dump old Microsoft crap the way ordinary companies do without relief when some owner pushes new garbage. Free software is a bastion of simplicity and ease of upgrade.

    While it’s astute to use older hardware, nothing says they have to or that that’s the reason for any problems. Please see,

    https://libreboot.org/faq/#intelme

    This message sent from my GNU/Linux, Libreboot X60.

  59. Deaf Spy says:

    Lies?

    Robert, your own source:
    “Nonetheless, the city’s desktop has problems. These are caused by the diversity of desktop hardware and peripherals, and the ageing IT infrastructure. This causes problems with upgrades and configuration changes.”

    and,
    “mail-merge being broken and slow”
    http://www.techrepublic.com/article/heres-the-one-major-problem-facing-munich-after-switching-from-windows-to-linux/

    Who is lying, Robert?

  60. Deaf Spy wrote, “they still fight with problems that show clearly Limux’s inadequate hardware support and high system requirements (aging hardware, eh?). And LO mail-merge still doesn’t work for them.”

    Nope. Repeating lies doesn’t make them so.

    Quoting one of the links (a rough translation from the German): “In the interim report, previously kept under lock and key, on which the Süddeutsche Zeitung reports, the issue is not the question so popular among Munich computer nerds of ‘Windows or Linux’ (the city relies primarily on Linux), but above all the internal structures with which around 1,400 IT staff coordinate a good 15,000 computer workstations.”

    That’s a problem of organization, not software. They have machines so old they are actually dying, and they don’t have a proper plan in place to replace them.

  61. Deaf Spy says:

    GNU/Linux desktops have been going for years now in Munich’s city government.

    Exactly. And they still fight with problems that show clearly Limux’s inadequate hardware support and high system requirements (aging hardware, eh?). And LO mail-merge still doesn’t work for them.

    Clouds have their pros and cons. One definite advantage is the reduced costs for supporting local infrastructure. Wise systems go hybrid. Some data you store on the cloud, some stays local. Some services you host on the cloud, some you host locally, for some you keep your thick clients.

    In Munich’s case, they can’t go beyond local file sharing. Funny to see how even open-source alternatives to SharePoint turn into paid services.

  62. oiaohm says:

    Deaf Spy, it’s not like Windows 10 in-place upgrades go well. Lots of items like really old printers stop being supported, and some computers don’t boot.

    There is a price for keeping any OS current and up to date. The price is that some hardware will just stop working.

    The catch: Linux does not charge Munich a fee to keep the OS updated. With Microsoft it was, when Munich started, a yearly volume-licensing subscription, allowing you to update old machines to the current OS until they no longer work. Yes, in recent years Microsoft has started offering free upgrades.

    Cloud has many meanings.

    http://blog.patshead.com/2014/09/self-hosted-cloud-storage-comparison-2014-edition.html

    Self Hosted Cloud gets interesting.
    https://owncloud.org/blog/libreoffice-online-has-arrived-in-owncloud/

    The reality is that what Munich has done in migrating to Linux has in a lot of ways prepped them to run a self-hosted cloud solution. Yes, low-cost self-hosted cloud solutions are going to be Linux solutions. Note that the paper so far has found nothing wrong with the migration to Linux; now it talks more about supporting mobile and other things. Guess what: self-hosted cloud.

    Like it or not, Deaf Spy, when running a self-hosted cloud the first problem you will run into is no MS Office, because Microsoft does not sell an edition for self-hosted clouds. At this point you are forced towards LibreOffice or something like it.

    So on cleaning up and moving fully to ODF, I would say Munich was a decade or two ahead on that card, allowing them to take up some solutions faster than the parties who did not.

    Most businesses are a mess, trapped on a treadmill and unable to choose the most secure outcomes.

  63. Deaf Spy wrote, “It is pathetic to see how Munich fights to get Linux on desktop going when the whole world is moving on to the cloud. As usual, a decade or two behind.”

    Get your facts straight. GNU/Linux desktops have been going for years now in Munich’s city government. As big as the cloud has become, it’s still tiny compared to thick client computing. Further, many governments in EU are nervous about clouds and rightly so. While clouds bring lots of efficiency they bring uncertainty and another surface for the bad guys to attack. The world is not short of bad guys. Also, even using the cloud requires a local client. Something has to run on that local client and it might as well be GNU/Linux. Why not?

  64. Deaf Spy says:

    Right from the source:
    “Nonetheless, the city’s desktop has problems. These are caused by the diversity of desktop hardware and peripherals, and the ageing IT infrastructure. This causes problems with upgrades and configuration changes.”

    Right, blame it on the weatherman. And I thought Linux was the OS to allow you to use your hardware, not limit you by some evil EULA, wasn’t that so, Robert? 🙂

    It is pathetic to see how Munich fights to get Linux on desktop going when the whole world is moving on to the cloud. As usual, a decade or two behind.

    Fact is, Munich is a mess, and only political power makes it keep the same stubborn course. Any business so far would have ditched the whole thing as a sad, unfortunate experiment and moved on.

Leave a Reply