Robert Pogson

One man, closing all the windows.


  • Feb 05 / 2012
  • technology

Writing and GNU/Linux

I do a lot of reading and writing on my computer systems. Computers make it so easy to do, and GNU/Linux keeps it easy no matter what M$ does.

I read an article today, an interview with a real writer, Piers Anthony, who wrote five novels in 2011 using GNU/Linux. It’s a good interview apart from the dirty joke (which went right over my head…). In it the writer describes his history with IT and writing. He converted to FLOSS about the same time that IBM jumped in and I converted in my little school on the tundra.

Piers Anthony uses Fedora GNU/Linux, LibreOffice, and an M$-only printer. He’s not a techie, so we don’t get the full story on that. It’s not hard to buy a printer that works well with GNU/Linux, and it’s unlikely he’s still using the same printer a decade after he last used that other OS, so that’s a mystery…
“I computerized in 1984, and have used four operating systems (CPM, DOS, Windows, Linux) and eight word processors. I enjoy the present one the most: Fedora on Linux, LibreOffice. It’s the software rather than the hardware that makes the difference.”

He fully expects e-books to dominate book publishing and believes we are at the tipping point. He likes the way FLOSS works for him.

FLOSS has so many tools for writing. I like LyX for larger projects because it scales nicely: the application does less during writing and saves the heavy lifting for the rendering process, so I can maximize my productivity. The less my PC does to get in my way, the better I write. I use LibreOffice for routine work, and it also provides a good spreadsheet for handling tabular data. I should also use a FLOSS database to keep track of things, but WordPress does that already and Google search is great, so I have not done that yet. I could probably scrape MrPogson.com for hyperlinks and generate a good database for my writing automatically, as sketched below. Whatever we imagine, we can do with FLOSS.
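Here is a minimal sketch of that scraping idea, using only Python’s standard library; the front-page URL is just this site, and printing a flat list stands in for the real database:

    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collect the href attribute of every anchor tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

    def hyperlinks(url):
        with urllib.request.urlopen(url) as page:
            parser = LinkCollector()
            parser.feed(page.read().decode("utf-8", errors="replace"))
        return parser.links

    for link in hyperlinks("http://mrpogson.com/"):
        print(link)

Feeding the collected links into SQLite (also in the standard library) would turn this into the database mentioned above.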

This interview shows how FLOSS works for people who have absolutely no need for the desktop monopoly to intrude on their productivity. Businesses and individuals who are locked in to Wintel need to climb out of the hole they have dug before it gets any deeper. I recommend Debian GNU/Linux because it works for real people.

82 Comments

  1. oldman

    “oldman, from this I take it you don’t want a rebootless operating system at all.”

    Actually, I don’t care about it. We have other ways to provide high availability, including clustering for either Windows- or Linux-based systems combined with our F5 load balancers. The whole point is moot.

  2. Robert Pogson

    oiaohm wrote, “The question is how much you want rebootless operation and whether you will push for it to be the default within the next two years.”

    Enough about rebootlessness! That is a tiny niche feature of IT, desirable but far from mainstream. Most people are content to wait while a server reboots. Those who want better switch IP addresses over to a running server and the world barely notices. Preserving context would be ideal, but the world of IT is practical: what is easy gets done immediately, and what is hard takes a while. It is far easier to preserve context in a super-reliable storage cluster than to make each and every server rebootless.

  3. oiaohm

    oldman, LOL, I like this.

    Red Hat Enterprise 6 is already on systemd.
    SUSE will most likely follow before the end of the year; they have run into a few issues.

    oldman, I am not talking speculation.

    Each piece of the puzzle is dropping into place, and each piece has to be done in order. cgroups around services is the first major step.

    What you have to understand is that I have been doing rebootless operation for years, so I know the complexity well.

    cgroups also let you run multiple instances for other reasons, without dropping down to virtualisation.

    oldman, from this I take it you don’t want a rebootless operating system at all.

  4. oldman

    “The complexity of running a rebootless system with Linux is dropping each year.”

    You don’t get it, sir; I don’t care about your speculations. Come back to this debate when all of this is fully mainstream, standard practice incorporated into the mainstream commercial Linux distributions like Red Hat and SUSE.

    Until then it’s just so much self-serving technobabble.

  5. oiaohm

    oldman, administrators are not all at the same skill level.

    Nor is the average fixed. The thing to watch is that the required skill level keeps dropping.

    systemd puts cgroups around every service. Now you can create new service entries that are single files for different instances.

    So is this average skill or advanced? Running Fedora, or Oracle Unbreakable Linux that already runs systemd, and using ksplice for free?

    These are systems that exist today, oldman.

    In the next cycle of changes, rolling instances, if pushed for, will basically become the distribution default.

    One large irrelevancy? Not really.

    The question is how much you want rebootless operation and whether you will push for it to be the default within the next two years.

    The amount of hacking needed to make a system rebootless has been greatly reduced.

    The complexity of running a rebootless system with Linux is dropping each year.

  6. oldman

    “Do you have anything to show that the statement “Linux Does Not Require Reboots” is false, oldman? I guess not, so you just have to try to insult your way out.”

    I deal with what is supportable, sir. It does not matter that one can hack up solutions to keep Linux rebootless. What counts is what is supportable by an average skilled system administrator. In this context Linux requires reboots, period, and your WallofText(TM) is, IMHO, one large irrelevancy.

  7. oiaohm

    oldman: “you actually have something that doesn’t need a super-hacker to work.”

    If you want it to work without super-hacker skill, people first have to believe it is possible while knowing it is above their current skill level. Then they will ask for it to be made simpler, because they know it can be done but is out of their reach.

    systemd, with cgroups wrapped around every service by default, makes running rolling instances of services a lower-skill task.

    systemd also simplifies the init system, getting rid of complex bash scripts. Again, that lowers the required skill level.

    The next step is getting distributions to provide a rolling-instance configuration for systemd; again, people have to request this.

    That mostly fixes the init system for HA, providing services on a solo system with the highest availability possible. For desktop systems, single-system HA is important.

    Users should be after the hide of any application maker who loses their data through either a crash or a killed application. Applications can be coded to be data-loss safe against everything except major hardware errors.

    Applications also need to take some responsibility for detecting updates and informing the user, so the user can at least deal with the change on their own schedule.

    ksplice preventing kernel reboots is, again, something technical people could be asking their distributions to pick up as a default feature. Currently there is very little demand for broader support of this, even for the simple security patches that could be applied. Yes, MS only applies the simple patches itself; the complex patches require real skill.

    Yes, oldman, the statement that the skill level needed to run rebootless is currently too high for most is valid.

    The statement that Linux requires reboots is not valid. The issue is the skill needed to do it, and distributions exist to solve exactly these skill problems.

    oldman, just because most people cannot run a cluster does not magically make Linux impossible to cluster.

    Linxusoid never considered that the statement could be true yet not usable by most, due to the skill level required to perform it.

    Stating the issues lets progress be tracked.

    Attacking a statement does not require lies and misinformation, Linxusoid. Pointing out that it is still too high in skill for most users at this stage is valid.

    Valid complaints push Linux forward. Invalid complaints get you laughed at, rightly so.

    Do you have anything to show that the statement “Linux Does Not Require Reboots” is false, oldman? I guess not, so you just have to try to insult your way out.

    Remember, it is a bit like saying a fighter jet cannot fly because you don’t know how to fly it; requiring super-hacker skill does not make the statement false.

    A fighter jet can fly in the hands of a trained individual. That is the level Linux rebootlessness is at today.

  8. oldman

    “That unless you are skilled like me you should not try it, because there are extra steps required, and not all of them simple; but with time, as with systemd, it will become simpler for a normal person to do, because people like me are sick of having to bend distributions a thousand ways so HA works properly.”

    Whatever.

    Do us a favor: save the WallofText(TM) until you actually have something that doesn’t need a super-hacker to work.

  9. oiaohm

    inotify and dnotify are provided as virtual interfaces so old applications keep working, because their processing engines conflict; fsnotify is the real processing engine underneath. Yes, fsnotify is used by SELinux, Smack and cgroups for different things.

    There is also a newer interface to fsnotify called fanotify. Guess what: your oplock crap can be done with that, because anti-virus scanning needs exactly the kind of interception fanotify serves.

    Really, Linxusoid, you need to get to know Linux before opening your mouth; you keep claiming it doesn’t have features when it has them.

    cgroups, SELinux and Smack do more than just track file-system changes. cgroups have filesystem namespaces, so all alterations are isolated; they can bend reality. cgroups can create a virtual PID table and bend the virtual file system, so application 1 sees /tmp/yesIamafile and application 2 sees /tmp/yesIamafile, but in reality they are two different files, because the two programs are in two different cgroups. Basically virtual machines without different kernels.

    Windows has nothing like this; virtualisation there cannot bend things anywhere near how they can be bent inside Linux.

    You are not after who opened a file. You are after where in the chain of processes this old file usage fits. Take stopping or breaking (yes, through a bad hotpatch or other failure) mysql: the mysql instance could be the one that belongs to the Apache and PHP stack, or an instance started by file indexing… some of these will throw issues if you disrupt them, sometimes extremely bad issues like replication failure.

    Basically you need to look at the process tree, work out what it is up to, work out whether it is safe to apply the hot patch, and hope you get it right and the hotpatch is not broken. Or you take the safe path: start another instance and allow the old instance to wind down. Starting a new instance and letting the old one stop is very safe; the odds of a miscalculation are almost zero, particularly with QA running before you take the old instance offline to run down and put the new one online.

    Is there an advantage to the safe path? When you re-instance, you clean up any memory leaks. On the hotpatch path all those memory leaks remain, so they kill the system down the road: more hot patching, more leaks, more issues. Yes, it is Windows that requires reboots, because it will run itself out of memory.

    Windows session tracking is like cgroups from systemd, or cgroups patched into the old init system to make the mess cleaner, so you know who started what. But the same issue appears when the user starts many applications in the same session: which application does this application, started by explorer, belong to? (Yes, those horrid VB programmers who remote-control explorer to start other applications and then embed the window inside.)

    The SELinux and Smack LSM systems are pretty much required to track those horridly complex process relationships, where keyboard emulation and other dirty tracking-breaking methods have been used. Windows does not track this at all. You have to track seriously to know how the processes are talking to each other.

    The filesystem does not have enough tracking power. You are looking for process-to-process interactions: where it is safe to cut, or how big an explosion you risk setting off. Get it wrong and you might as well reboot, since the mistake can equal major data damage.

    Filesystem monitoring leaves it to chance. So just using filesystem monitoring is not going to work, you twit, particularly broken filesystem monitoring like inotify.

    The reason this has been so hard to do is that the Linux init system has sucked. The old idea of bash-script startup is hell.

    With systemd, thanks to its cgroup control, my two instances of Apache (or whatever) are two almost identical startup entries. Why? I have cgroup namespaces, so the network card each Apache sees is not real. They can both happily bind to port 80 and whatever other ports they need (yes, two instances of the same thing), and those ports end up mapped into the global reality.
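    An illustrative sketch of that trick (not the commenter’s actual setup): give each instance its own network namespace with the ip tool, so both can bind the same port. The namespace names and the Python test server standing in for Apache are assumptions:

        import subprocess

        # Create two network namespaces (requires root); names are hypothetical.
        for ns in ("web-a", "web-b"):
            subprocess.run(["ip", "netns", "add", ns], check=True)
            subprocess.run(["ip", "netns", "exec", ns,
                            "ip", "link", "set", "lo", "up"], check=True)
            # Each namespace has its own port space, so both servers bind :80
            # without conflict; mapping them to the outside needs veth pairs.
            subprocess.Popen(["ip", "netns", "exec", ns,
                              "python3", "-m", "http.server", "80"])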

    Bash scripts? What bash scripts? cfengine and puppet configurations are not bash. Just because I used a bash script to show you something does not mean that is what I use in production to process the same data.

    Can hotpatching in userspace go wrong and ruin you just as badly as kernel-space hotpatching? In most cases userspace hotpatching going wrong is worse. With a wrong kernel patch the system normally crashes and dies; hotpatch userspace wrongly and you normally end up with corrupt data replicating. Given the risks, you should not be doing massive amounts of hotpatching.

    Linxusoid, none of this changes the fact that the statement “Linux Does Not Require Reboots” is true, not false.

    There are conditions to running without reboots; it is not a free lunch.
    1) You must use a distribution that supports commercial ksplice, or have the skill to do the equivalent yourself (most people do not).
    2) You must understand how to run HA properly.
    3) Disruption to desktop applications and loss of data is a given, because they are poorly coded, unless you live with security holes being open for a little while.

    Number 3 happens even with hotpatching.

    Really, if I kill a user application it is not much different from that application running into a bit of defective RAM and crashing. Neither should result in the user losing vast amounts of data, but they do. This is why it is so funny when you argue that an administrator cannot kill processes because users will lose data: if I don’t kill the process, the hardware might, any time it feels like it. Doing HA you learn that hardware has a mind of its own, normally bent on ruining your day.

    Old saying of HA: everything is broken all the time.

    Linxusoid, stick to your case that “Linux Does Not Require Reboots” is false. The statement does not require user data to be preserved to be true; I can be a complete BOFH for all it matters. You don’t have a case.

    Linxusoid, redo your TM page with the correct information. Yes, you can put a big scary warning there:

    That unless you are skilled like me you should not try it, because there are extra steps required, and not all of them simple; but with time, as with systemd, it will become simpler for a normal person to do, because people like me are sick of having to bend distributions a thousand ways so HA works properly.

  10. Linxusoid

    > but in Windows you COULD detect who has the file opened much easier
    Or you COULD use oplocks (oh wait, Linux doesn’t have oplocks). Looks like the only kind of locks Linux developers are devoted to is Big Kernel Locks.

  11. Linxusoid

    Boy, you STILL don’t get it. I wonder if you’re THAT handicapped or just trolling.

    It doesn’t have anything to do with X. Or Wayland. Hotpatching in “user-space” is no different from hotpatching in kernel mode (except kernel mode is arguably harder to implement properly). It doesn’t matter what distro you use (UseDistoX™, right?). It doesn’t matter if you “can easily do it with a bunch of broken bash scripts” (ever wondered why it hasn’t been done, then?).

    I’m also glad that you think that “selinux, smack and cgroups” are designed to track file changes, while, say, inotify and the likes are not. Also, it’s awesome to get an assessment from someone as knowledgeable as you. /s (just in case it’s not obvious to you)

    Really, other than restarting applications, the MS Windows ABI could have done most of the rest anyhow.
    Oh wait, but in Windows you COULD detect who has the file opened much easier (and with 100% reliability) than in Linux. Services COULD be configured to be restarted if they are killed (or updater itself could issue a restart sequence) and other processes COULD just care about themselves (Chrome does exactly the same in Windows as it does in Linux – tells “oh, snap”). Why care about proper contracts (“explicit is better than implicit”, right?) between OS and apps then – just start killing everyone and blame the user when something goes wrong. I guess the reason why it’s done properly instead of the “Linux way” is that Windows is actually designed and implemented NOT by loons.

    You are trying to slither and spin and wriggle and hide behind TheWall. The SimpleFact™ remains: Linux has a completely broken update system (not a surprise, really). Several of those – it’s all about choice after all.

  12. oiaohm

    Oops, I forgot the change from SunOS to Solaris. I have gotten into the bad habit of calling both Solaris.

  13. oiaohm

    Linxusoid
    “well, it doesn’t even have a reliable mechanism to detect WHO needs restart”
    In fact Linux has three dependable systems to detect who needs restarting: SELinux, Smack and cgroups.

    None of them can be fooled by any process trick an application might pull. cgroups become the default on systems using systemd, so you don’t even need to load an LSM to trace perfectly. Before that, you had to enable tracing manually by assigning cgroups; again, that comes down to the skill of the administrator.

    See, you don’t know crap, Linxusoid; you are walking into trap after trap. I leave what appears to be a weakness; sorry, I am sitting on the answers.

    Wait, Linxusoid, you are an Ubuntu user. You poor sod, stuck with AppArmor, which does not trace things perfectly.

    The Wayland protocol includes a state-to-state transfer, a hand-over: the new Wayland server starts, takes over, and then the Wayland server the user was using stops. Basically the same way I can restart an Apache server without being noticed.

    Session state information is not glued up as some internal, magical, non-transferable secret spread all over the code base the way it is in X11.

    X11 missed including a server-to-server state transfer, and with all the extensions added to X11 it is impossible to create one now. Basically, 1984 was a little too early for saving state: saving state through crashes starts in 1985, and so does transferring state to newly started applications.

    Also, nicely, Wayland makes applications far more traceable. The Wayland server renders nothing for applications; application and Wayland server have clearly defined roles without state issues. If the Wayland server crashes, all applications using it can reconnect to the replacement server; this is not optional in the protocol. In X11, reconnecting after a crash is an optional part of the protocol that does not work too well.

    There are issues in the X11 design that simply cannot be fixed. Wayland fixes them.

    “It’s just a contract between OS and app for *coordinated* restart” Yes, that was done by Solaris in 1985: mapping out what data to extract as persistent and restore into the new application. Yes, 1985. The difference is that Solaris did not duplicate it out to special storage; exactly why do you need special storage?

    The result is going to be the same: only developers who have to will implement it, and it will most likely be poorly tested, leading to more issues than it solves.

    “There is a predictable schedule for patches, so people can plan ahead”
    There is no in-place testing anywhere in that system for QA without disrupting the old instance.

    “There is a hot-patching that’s not limited to kernel.” Can you hot-patch outside the kernel under Linux? Yes. Should you? No. Doing HA at five nines and six nines you learn that a patch applied in place without quality testing is not a solution; it is a time bomb.

    This is the problem: no matter how well you test a hot-patch, the more you patch, the higher the odds of a defective patch, with no position to fail back to.

    With Linux you can test before you jump off the cliff on each system. Does MS have every hardware combination on earth to test with? No.

    Virtualize databases only if you cannot avoid it.

    Linxusoid
    “One machine serves – the other updates. Genius.”
    Exactly. When the machine being updated is not in active use, what reason is there to take the risk of hot patching userspace, if you can restart it all, transferring state?

    Wait, you are hotpatching services because they don’t include a method to transfer state, because that is how you made your services. No patching, no errors caused by patching. So they are basically incompatible with Application Recovery and Restart, and normally HA-incompatible too.

    Exactly. The Linux method is simple, if applications are designed to take advantage of the state storage Unix offered.

    Yes, persistent state after a crash is nice. Unix had methods to do this before 1990: SHM and mmap, pre-Linux. mmap persistence lives through reboots, and there are other options as well for programs to leave persistent state. Of course, you still need a programmer to write the persistent state into persistent storage. And guess where this all fails: what happens if the application crashes and only half-writes its persistent state? So you had better checksum and size it. Nice, and fairly platform-neutral.
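    A minimal sketch of that checksum-and-size idea (not the commenter’s code; the file path and record layout are made up): state is written payload-first into an mmap-backed file with a length and CRC32 header, so a half-written record from a crash fails validation on recovery.

        import mmap
        import struct
        import zlib

        STATE_FILE = "/tmp/app.state"  # hypothetical location
        HEADER = struct.Struct("<II")  # payload length, CRC32

        def save_state(payload):
            size = HEADER.size + len(payload)
            with open(STATE_FILE, "w+b") as f:
                f.truncate(size)
                with mmap.mmap(f.fileno(), size) as m:
                    # Payload first, header last: a torn write leaves an
                    # invalid header rather than a plausible-looking one.
                    m[HEADER.size:] = payload
                    m[:HEADER.size] = HEADER.pack(len(payload),
                                                  zlib.crc32(payload))
                    m.flush()

        def load_state():
            try:
                with open(STATE_FILE, "rb") as f:
                    data = f.read()
                length, crc = HEADER.unpack_from(data)
                payload = data[HEADER.size:HEADER.size + length]
                if len(payload) == length and zlib.crc32(payload) == crc:
                    return payload
            except (OSError, struct.error):
                pass
            return None  # state missing or corrupt; start fresh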

    systemd in the future, and monit and other quality-of-service monitoring now, let an application have a contract to be restarted after a crash.

    There is absolutely nothing special about “Application Recovery and Restart”.

    Everything that new MS feature offers, the Unix world was doing before Linux was born; OK, with scripts, without the formal niceness of monit, cfengine, puppet and systemd.

    Now comes the same problem: you have to make developers use it.

    The Linux ABI/API is very complete, thanks to the POSIX history. Again, the issue is that it is not being used right.

    Really, other than restarting applications, the MS Windows ABI could have done most of the rest anyhow. Yet coders don’t code it. HA-grade applications are the rarity; I don’t see why that should change now.

  14. Linxusoid

    One more piece.

    “1985 on Solaris was the first time I saw something like this. It is quite simply a fancy form of core-dump recovery that attempts to recover state information out of a crashed application.”
    And this idiot calls me stupid. Seriously, go educate yourself. In Old English, maybe; your writing will not lose any sense anyway.

  15. Linxusoid

    Oh, and speaking of that “defence in depth” thing: when all the optimizations and mitigations fail, you fall back to a perfectly safe reboot IF AND ONLY IF it is required (compare with the Linux Way: reboot every time just in case, or screw everything and ignore security patches).

  16. Linxusoid

    Thank you for insulting Oracle, by the way. [skipped] It is about time you pulled your head in on this, before you insult someone who will sue your ass off.

    Oh, see who is talking. It seems you guys don’t have any problem insulting Microsoft, the first and biggest real software company. Cutler, by the way, had almost 40 years of OS *design* experience (as opposed to duct-tape engineering) before he moved to Azure and then to Xbox. But I must agree, Solaris is one of the saner modifications of Unix (though you can push this crap only so far).

    For us doing HA, for the last two and a bit years every kernel update has been applied on the fly, including semantic struct changes to the kernel.
    Any evidence of this? You know, you haven’t earned enough credit for that yet (quite the contrary).

    Wall of nonsense
    Wayland-the-Savior-Just-Around-The-Corner, Blame-the-X, and so on. You fail to understand AGAIN. You cannot just restart things (if you don’t want an angry user); you need that guy to know it is restarting. No such mechanism exists in Linux (well, it doesn’t even have a reliable mechanism to detect WHO needs restart). It’s not X’s problem (and Wayland will not fix it; neither will it cure cancer or bring world peace).

    “Application Recovery and Restart” from Microsoft has to include a way for a process to transfer state.
    Oh, it doesn’t. It’s just a contract between OS and app for *coordinated* restart (app can persist relevant state itself if necessary). “Defence-in-depth” y’know. There is a hot-patching that’s not limited to kernel. There is a predictable schedule for patches, so people can plan ahead. There is Restart Manager. Etc, etc, etc. And there is App Restart and Recovery. No pixie dust involved – only real software ENGINEERING, you know.

    When you get some more relevant experience you’ll probably realize that THE ONLY way to guarantee high availability (five nines or whatever you are trying to accomplish by not having security updates) is failover clusters (maybe virtualized with live migration, even consolidated on the same machine, though that is vulnerable to hardware failures), but still MORE THAN ONE MACHINE. One machine serves – the other updates. Genius.

  17. oiaohm

    Linxusoid
    “And as unqualified as Linux developers usually are – it will only be applied to the most trivial state changes”
    Thank you for insulting Oracle, by the way. Linxusoid, the same staff work on semantic-change patches and handlers for Solaris and Linux; yes, ksplice is an Oracle product. The Solaris guys have over 20 years of experience each doing semantic-change handlers. This is why you pay Oracle for it. It is about time you pulled your head in on this, before you insult someone who will sue your ass off.

    “When was the last time a Linux kernel update didn’t require a system restart, again?”
    For us doing HA, for the last two and a bit years every kernel update has been applied on the fly, including semantic struct changes to the kernel.

    Using a commercially supported ksplice distribution, there are no security grounds to force a kernel restart; hardware will fail first. I am willing to pay the money to get the patching and the handlers. Linux is not a free lunch.

    Remember, I said there were conditions. Linux is reboot-free if you have commercial ksplice, or a hell of a lot of skill to make the patches and produce the semantic handlers yourself. ksplice kernel patches are reversible if you find that, hey, a service will not start or run any more.

    Linux does not require reboots to replace the kernel if you are paying for the right service, or if you have the skill to write the state bridge. No automated tool yet can create the state bridge for you.

    Of course, those of us running HA also like to reboot servers every so often just to make sure they still do reboot, but that is not because we have to do a kernel update. The BIOS on a motherboard can go dead: the computer runs fine, the hard drive is fine, then after a power outage it doesn’t come up because the BIOS did not start. Reboots are normally the time to do hardware maintenance, like dealing with dying RAM or the hard drive on a blade. If there happens to be a new kernel out at the time, upgrade; it is not an intentional reboot just to replace the kernel, it is a reboot to deal with hardware.

    Run through an HA single-server Apache httpd. At the end of this, your ideas are screwed.

    Detect that Apache httpd is running an old library or old version.
    Start a new instance of Apache httpd on a different port.
    Apache httpd is designed to cluster, so it shares session data between servers.
    Run Quality Assurance tests on the new instance, to make sure the newly installed libraries work for the required operations. On failure, roll the libraries back and report the error.
    If everything is OK, push a firewall rule redirecting all http traffic to the new Apache httpd instance.

    Let everything run, and check on the old Apache httpd instance, because everything on it will wind down: it gets no new requests unless it has been breached. A certain amount of time later, the old Apache gets terminated. (A sketch of the redirect step follows below.)

    This process applies to ftp, ssh, cups, most services in fact. Basically any service that doesn’t need to share data between instances, and any that is built for fail-over clustering or better, can have any patch you can dream up applied in a safe, quality-controlled way.
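    A minimal sketch of that switchover (not the commenter’s script): bring up the new instance on a spare port, QA it, then redirect port 80 at the firewall. The port, the health URL and the alternate config path are illustrative assumptions.

        import subprocess
        import urllib.request

        NEW_PORT = 8081  # hypothetical spare port for the new instance

        # Start the new instance alongside the old one (config path is made up).
        subprocess.run(["apachectl", "-f", "/etc/httpd/conf/httpd-new.conf"],
                       check=True)

        # QA gate: the old instance keeps serving until the new one answers.
        try:
            urllib.request.urlopen("http://127.0.0.1:%d/" % NEW_PORT, timeout=5)
        except OSError as exc:  # connection refused, HTTP error, timeout...
            raise SystemExit("new instance failed QA, old stays up: %s" % exc)

        # Redirect incoming port-80 traffic to the new instance; the old drains.
        subprocess.run(["iptables", "-t", "nat", "-A", "PREROUTING",
                        "-p", "tcp", "--dport", "80",
                        "-j", "REDIRECT", "--to-ports", str(NEW_PORT)],
                       check=True)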

    In the service space of a Linux OS, if you do not see what appears to be 100 percent uptime when updates go in, the system is most likely poorly configured, even if it is only one server.

    Microsoft patching into the current instance in userspace is insane. How can you run QA to make sure the patch works before putting load on it? The simple fact is you cannot. So you have to pray MS tested it the same way your applications are going to hit it. If not, you might as well have terminated the service and restarted it cleanly.

    You only patch kernel space in memory because you have no other option short of a reboot. Kernel space is the only area where you should be doing in-memory code patches.

    This is where the problem is: the old instance must remain running alongside the new instance, so you can perform QA on the new instance without anyone noticing and without visible failures. Yes, failing QA before being actively exposed is exactly what you want.

    If you are running multiple servers you will use the N+1 option: bring the spare server hot, stop traffic to the server running the old instance, then kill that instance once it finishes processing. Again, to the end user nothing appears to have happened.

    “Application Recovery and Restart”? LOL, how big an idiot are you? You think this is some new feature? Of course, you are an MS loser who believes MS is making new features.

    1985 on Solaris was the first time I saw something like this. It is quite simply a fancy form of core-dump recovery that attempts to recover state information out of a crashed application.

    Wonder why the Unix world had abandoned this as an automated solution by 1992? It is called the core-dump loop of death: the user tries over and over to get their data out of the application crash dump, and because the state read from the dump is invalid, the application crashes again, leading to another core dump, then another, then another. Is disk space unlimited? I think not.

    Basically, Microsoft is still young and stupid.

    The data is not lost if core dumps are enabled. User self-recovery is not wise if the application had to be terminated; assisted recovery is the safe path, using debugging to make the application keep running past the error or termination.

    A forced kill will leave a core file if the system is set that way. You, of course, think killed equals data destroyed; sorry, this is Linux, not Windows. Linux has settable options for what a kill will do. Yes, a good administrator who brute-forces things keeps the core dumps, just in case something important does get killed.

    When we get to the desktop, I will agree all hell breaks loose. But seriously, why do I have to handle poor-quality desktop applications with kid gloves? I am used to HA-grade applications I can kick around quite well without anyone noticing, with QA testing done all the way.

    The quality of desktop applications is not there on any platform. On Linux, OS X and Windows alike, the applications are crap for the quality of experience they give in desktop usage. So killing a few does not particularly worry me.

    Is it possible for firefox and chrome to find out that flash has been updated? Yes: just check /proc/self/maps for deleted plugins. Hey, the plugin was deleted; start a new process to provide the plugin. Even better, they could follow the standard HA path, and if the new flash or java or whatever plugin does not work, tell the user to roll back the update or reinstall it, since the site wants it.
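    A sketch of that self-check (illustrative; no browser actually ships this code): a process can spot that a library it has mapped was replaced on disk, because the kernel marks the stale mapping “(deleted)” in /proc/self/maps.

        def stale_mappings():
            """Paths of files this process maps that were deleted/replaced."""
            stale = set()
            with open("/proc/self/maps") as maps:
                for line in maps:
                    # Fields: address perms offset dev inode path [(deleted)]
                    if line.rstrip().endswith("(deleted)"):
                        path = line.split(None, 5)[5].rsplit(" (deleted)", 1)[0]
                        stale.add(path)
            return sorted(stale)

        print(stale_mappings())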

    That an administrator like me has to come by with the baseball bat and kill flash to force the new version to be used is not my issue; it is an application defect.

    X11 is designed like a disaster zone: there is no design for transparently changing the X11 server without the user noticing, and the result is a stack of killed applications. Wayland is designed far better; Wayland is more of an HA design.

    Most applications in userspace have no state-transfer system, no detection of new versions, and no clean way to roll new versions up to the user. This is an application-quality issue, something I don’t have to put up with for services.

    Linxusoid, everything needed, from the low-level system up, to build the best desktop ever will exist on Linux by the end of this year. I just don’t hold out hope that application developers will provide quality. Maybe with GTK and Qt applications getting HTML backends, the GTK and Qt application developers might start considering HA and designing for it, but I think that is wishful thinking.

    “Application Recovery and Restart” from Microsoft has to include a way for a process to transfer state. This requires coders to code it and get it right. I am not the one believing in pixie dust; you are, Linxusoid. The pixie dust is believing that coders will do the right thing by desktop end users, who don’t complain about the lack of HA on the desktop, mostly because they believe HA and the desktop are incompatible.

  18. Linxusoid

    Hiding behind TheWall again?

    Have you even seen Chrome’s frowny face “oh-crap-plugin-crashed-feel-free-to-reload-page-and-start-over” message? So your suggestion is what, exactly? Kill everything and screw the user (he is screwed by the sole fact of using Linux anyway)? If it’s an X server? Just kill everything, including any user data; a Linux user cannot have anything valuable anyway. Or maybe it will take another 10-20 years for Linux to invent something like this? Oh wait. It will take them 10-20 years to invent this AFTER they spend 10-20 years inventing a decent updating mechanism that can reliably detect whether something needs to be restarted.

    As for ksplice. Now you came REALLY close to realizing that no pixie dust can really update state; only developers can. And as unqualified as Linux developers usually are – it will only be applied to the most trivial state changes (I’ll take it for granted that such a mechanism is not just a product of your inflamed imagination; too lazy to check, but at least THIS is theoretically possible).

    When was the last time a Linux kernel update didn’t require a system restart, again?

  19. oiaohm

    Sorry, I skipped a word.
    “User sessions on Linux could use some work in allowing safe termination of security threats in a more generic manner. Even so, it does not require reboots to achieve this. You have the right to complain that the GUI of Linux sucks at informing users they need to restart applications. I will *not* dispute that fact. It’s true.”

  20. oiaohm

    Linxusoid
    “Just kill everything and hope it’s smart enough to sustain.”
    That is not what I said. Particular things, flash being one, you can kill off and the system will sustain itself. Quite a few items fall into this camp.

    There are some, like cups, where you must send a restart message or you will break things in really bad and creative ways. With services, doing this is 1000 times simpler: restarting them was designed in, which is very helpful when you have a semantic change or security patch to deploy.

    User sessions on Linux could use some work in allowing safe termination of security threats in a more generic manner. Even so, it does not require reboots to achieve this. You have the right to complain that the GUI of Linux sucks at informing users they need to restart applications. I will dispute that fact. It’s true.

    “First you’ve totally ignored the fact that ksplice generates patches AUTOMATICALLY and you’ve started to discuss how you would convert.”

    In fact, no; read the ksplice documentation. A semantic change by ksplice at this stage has to be coder-assisted. Yes, paid-for ksplice comes with a little program catching and ghosting the structures temporarily for semantic alterations.

    I was describing how the ksplice handlers do it.

    Linxusoid
    “you’ve got to some stupid hops and jumps instead of suggesting to simply traversing one tree and reinserting everything into another.”
    This does not work because you cannot instantly stop all use of the old tree; something might be holding a pointer to it that has not completed its run yet. Yes, you do traverse the tree, convert it, and MMU-lock what will become the slave copy.

    You can release the old tree once the kernel threads that existed when you applied the semantic change have ended.

    ksplice will generate a semantic-change patch without a handler, and as you can guess that goes south in a big way. ksplice can load a ksplice patch together with a ksplice handler.

    “Oh, and just so you know, AVL trees are usually more “shallow” so for most cases (rare writes – frequent reads) AVL are more performant.”
    Yes, I know this, but we are talking about getting from one to the other without crashing the complete system. There are limitations that must be obeyed when you beat what people call impossible.

    A semantic change is not impossible; it is just tricky how you must do it. There is really no other way to do it.

    Rule 1: a more complex data struct cannot be straight-migrated into a less complex data struct if that leaves requests against the more complex struct unanswerable. You are not killing off the old code instantly; it remains running for a little while.

    Break rule 1 and your semantic-change method now has to solve the halting problem, which, as you will admit, is fairly impossible.

    You are letting the old code die of natural causes: when the old kernel thread has done what was requested, it ends.

    The key reason semantic change appears impossible is people’s stupid, stubborn urge to fix everything at once. That puts you in the impossible position of having to find every point in memory where the thing you wish to change semantically is being used, which you might or might not manage; failure to find one will crash.

    What is the possible path without a crash? Simple: everything newly started runs with the semantic change, and nothing new starts that uses the old method, other than possibly the handler. The handler is required to keep the old and new structs synced; fairly simple with MMU assistance.

    On hardware without an MMU there is currently no developed way to perform a semantic patch using ksplice. Without an MMU you would have to solve the halting problem. Right? No, you don’t; there is another completely evil cheat that avoids it.

    It is related to Two Kernel Monte. Basically, you load two kernels into memory at different addresses, the first kernel being the old one you need to replace. The second kernel keeps its state synced with the first and takes control of the first kernel’s locking and a few other things. All new kernel requests come to the second kernel. This is the 2009 difference: two Linux kernels really could share one ring 0 and transfer control between each other.

    The solution is parallel data. Once you can make parallel copies of the data, you can do a semantic change very safely.

    Would I ever use the 2009 hack in production? No way in hell; the ways it can go south are insane. But it is good proof that semantic change can be done. You cannot get much bigger proof than replacing the complete kernel without stopping anything in userspace.

    If you are running into the halting problem while trying to perform a semantic change on a system, you are going the wrong way, or you are one hell of a genius, or insane. At this stage there is no valid option except to avoid the halting problem.

    The halting problem does not forbid performing a semantic change over time, because a semantic change over time only requires data to be synced between the two sides. One side dies because it gets no more tasks to perform.

    This solution was designed into the way Unix was designed. Applying semantic changes is not strange to high-end Unix systems.

    The same method is used over and over again.

    Linxusoid, the fault you found is showing a secret. You thought it was showing a weakness; it is showing you how to avoid the halting problem, at least well enough for real-world conditions. We don’t always need perfect solutions. You just did not think through what happens when you start a new process.

    Would it be flawed? No. Can these processes, one running old and one running new, share locking? Yes, they can.

    At what level can the method you were seeing be applied? Everywhere in the kernel and up.

    Linxusoid, basically you are looking at the secret of how Unix systems have been able to do semantic changes for years. This is an area Linux has not yet refined to the level of its Unix relations.

    Yet you, Linxusoid, stand on a box yelling from the rooftops that this is a defect. I think you had better look again at where you are standing.

  21. Linxusoid

    Linxusoid, you chose chromium-browser. In fact chromium-browser is designed on the presumption that flash crashes.
    Haha, I thought so. Just kill everything and hope it’s smart enough to sustain. Here is news for you: nothing in Linux is smart (on the contrary, most of it is stupid, stupid, stupid) and one of the reasons is ignorant theorists like you.

    “Now, would you please describe AUTOMATED algorithm that having just the implementation of RB tree and AVL tree would automagically generate code to convert RB trees to AVL trees. Atomically.”
    Something at this level of semantic change is normally not a big problem: old next to new.

    Where would RB trees and AVL trees need to be transferred between the old threading system and the new atomically? This is where things get interesting. Normally there is never a need to do this, since an AVL performs worse than an RB.

    Reading comprehension problems? You’ve got to your usual wall-of-gibberish and failed TWICE. First you’ve totally ignored the fact that ksplice generates patches AUTOMATICALLY and you’ve started to discuss how you would convert. And second, you’ve got to some stupid hops and jumps instead of suggesting to simply traversing one tree and reinserting everything into another. Now if only you would describe how to automatically deduce this approach from the IMPLEMENTATION of AVL and RB.

    Oh, and just so you know, AVL trees are usually more “shallow” so for most cases (rare writes – frequent reads) AVL are more performant.

  22. oiaohm

    “Please give me step by step description of how you would restart that process and reestablish connection with “browser process” over that –channel=3227.0×7… which is apparently not listened by anyone anymore.”

    Linxusoid, the solution is to kill the old firefox outright, if old firefox is what you chose. That problem does not exist if your tracing works, because you know that the old firefox of that PID group and user started the out-of-date flash it is using. Yes, the direct user action leading to the out-of-date running library was starting the old firefox.

    Linxusoid, you chose chromium-browser. In fact chromium-browser is designed on the presumption that flash crashes. Modern firefox contains the same thing. You can kill flash at any time, and new firefox and chromium will both just start a new copy as soon as they cannot connect to the old one; to them it simply crashed. This is something flash does anyhow. The user will not notice at all.

    This is true for a lot of things. There are tons you don’t need to care about; it is surprising how few you really need to worry about joining back up.

    Browser plugin updates are one of those things where the script these days is dead simple: find the process using the old browser plugin, apply kill -9, and the system will sort itself out. Best of all, users will not complain, because they think the plugin just glitched, as long as you randomise it a little. Yes, this is something you can put into cfengine or puppet to do automatically, so you never have to worry about it again.

    That channel jump is why ps -ef does not work every time.

    I am applying semantic changes. Where does it say I have to keep the processes alive or restore a replacement? You only have to restore a replacement if one will not appear automatically, and even then it is better to take out whatever started it. If it is not possible to kill, can you cut off the risk without stopping the process? Yes, in fact: you can cut the old process off from the network.

    The timing of the kill is the critical bit.

    Anyone starting a new chromium-browser gets the updated version of flash, so those users have the semantic change applied. Being able to find the processes still running the old version tells you how far along the semantic change you are. In most cases general usage will spread the semantic change through the system by itself; there are a few points where you need to give it a helping hand.

    Npwrapper does not escape tracing by SELinux sandboxing or cgroups; everything linked to that application is tagged. A ps command does not tell you enough.

    The question also becomes what resolution I need. Would the user’s session be enough? In most cases, yes: knowing how many user sessions are running old binaries, and whose they are, is enough.

    Really, in most cases the user session is enough. Locking off network access and informing the user that they need to save their work and log out and back in is the most extreme solution.

    I am only disrupting the users who were in the system when the change came in and who were using something the update affects. The 20 to 100 who logged in after the change was applied to the file system I don’t have to touch.

    I can also choose when I hit the users who were logged in, and how hard.

    How you handle the out-of-date part comes down to company policy.

    For semantic changes under Linux I have many options. I always have two: kill or secure. Just because something is running a security-flawed binary does not mean it is exploitable, if the attacker can no longer reach it.

    The secure option is why I can let things run to the end of the user’s session, or until the user restarts the affected program themselves.

    Linux naturally applies the semantic change; it just needs a little push. This is the art of being a good Linux administrator: knowing how much of a push to give, and when a push is needed, to roll out an update effectively.

    Running a check for applications using deleted libraries or deleted applications tells you when the system might need a shove. For services, the shove can be completely automated and is fully dependable.
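    A minimal sketch of such a check (not the commenter’s actual script): walk /proc and report processes still mapping files that have been deleted or replaced on disk, which the kernel marks “(deleted)”.

        import os

        def processes_running_deleted_libs():
            """Map pid to the deleted files still mapped by that process."""
            stale = {}
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/maps" % pid) as maps:
                        files = {line.split(None, 5)[5]
                                     .rsplit(" (deleted)", 1)[0]
                                 for line in maps
                                 if line.rstrip().endswith("(deleted)")}
                except OSError:
                    continue  # process exited or access was denied
                if files:
                    stale[int(pid)] = files
            return stale

        for pid, files in sorted(processes_running_deleted_libs().items()):
            print(pid, ", ".join(sorted(files)))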

    User logins, I will grant you, are still horrid. Even so, horrid does not mean impossible; you just have to decide at what resolution you want to worry about it.

    Individual user applications are an option, but harder to get right.

    User sessions are simpler. Say I have 20 users running the old application: notify them all and give them 30 minutes before I kill their sessions. Before killing a session, check whether they have killed the affected applications; if they have cleaned up their session, let them keep running and send them a thank-you notice. Yes, also provide them with a little application that tells them whether they have killed everything. It is surprising how useful a little desktop widget is.

    I am not yelling at the user “I must reboot”. At worst I am yelling “you must log out; if you don’t, I will log you out”.

    A reboot affects everyone using the system. I only want to affect users running out-of-date code.

    Nothing you are saying changes the fact that a reboot is not required.

    Linxusoid
    “Now, would you please describe AUTOMATED algorithm that having just the implementation of RB tree and AVL tree would automagically generate code to convert RB trees to AVL trees. Atomically.”
    Something at this level of semantic change is normally not a big problem: old next to new.

    Where would RB trees and AVL trees need to be transferred between the old threading system and the new atomically? This is where things get interesting. Normally there is never a need to do this, since an AVL performs worse than an RB.

    AVL is a less complex structure. But I will presume the worst case, which is not RB to AVL; it is something like B+-tree to B*-tree.

    Because in the case you picked, like an idiot, RB is close to AVL. So if you make the RB the master copy and the AVL the slave copy while you are migrating, guess what: the atomic part works, because AVL requests are RB-compatible after a little wrapping.

    Once nothing that bypasses the wrapper is using the RB struct any more, you can basically copy the RB out of existence. This is something Turing never considered, because the MMU did not exist: you can have master and slave copies of a data struct in memory, just as you can have master and slave databases, with the slaves read-only and all writes transferred to the master copy.

    With the horrible B+-tree to B*-tree case you have to create a third struct that represents both, so both the new and the old have to be slaves, generated on request.

    There is one case where a semantic change cannot be done: when it is impossible to create a master struct, or to use one of the structs as a master representing both sides. The odds of that happening? You are talking about a completely alien change. Given Linux kernel history this is not 100 percent impossible, but the odds are really low that it would be required to fix a security fault.

    A semantic update depends on the MMU’s ability to make memory access controlled. With that controlled, you can bend reality. Code in the kernel does not know that the data struct it just talked to is like a view table in a database: not real. It also does not know that the struct it just updated is not the real struct but one that will be replicated.

    The Linux world worked out how to create ghosts: how to replicate structs in kernel space the way master-slave replication works in databases. (An illustrative sketch follows below.)
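    A userspace analogue of that master/slave migration (purely illustrative; ksplice’s real mechanism works on kernel structs via the MMU): writes go to the old structure and are mirrored into the new layout, reads keep hitting the old one, and the new layout takes over once no caller uses the old interface.

        import bisect

        class MigratingMap:
            """Toy master/slave migration: an old dict layout stays
            authoritative while a new sorted-list layout is kept in sync."""

            def __init__(self, old):
                self.master = old                 # old struct: authoritative
                self.slave = sorted(old.items())  # new struct: replicated

            def put(self, key, value):
                if key in self.master:            # mirror the write into both
                    i = bisect.bisect_left(self.slave, (key,))
                    self.slave[i] = (key, value)
                else:
                    bisect.insort(self.slave, (key, value))
                self.master[key] = value

            def get(self, key):
                return self.master[key]           # reads served by the master

            def cut_over(self):
                """Once no caller uses the old interface, the new layout wins."""
                return self.slave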

    ksplice has the prior art on this, so getting a Turing Award for yourself is impossible.

    Who said the MMU page-fault feature was a bad one, and that page faults were bad in kernel space?

  23. Flying Toaster

    If you allow the old threads in the kernel to run their course, at some point they all end. This is intentional: you don’t want a request in a kernel in a never-ending loop.

    But if a request is not in a never-ending loop, then what do the little green men from the flying saucers have for breakfast?

    In userspace you can apply semantic changes as long as you make sure the old stops at some point and is replaced by the new.

    You didn’t pay attention to Linxusoid, did you? It’s not about the user or the kernel spaces. It’s about whether it is always possible to come up with a proper transformation of one thing into another. And the answer is, “It’s not.”

    But words like these are just going to be wasted on you anyway, so instead I’ll just give you this very simple advice for you to ruminate on:

    Black-list all the bigfoots!

    Yes, shock: rolling a semantic change into the Linux kernel takes less than 5 minutes in most cases. It is a rare thread in Linux kernel space that lasts past 5 minutes.

    Nah, taking into account the stondula beams bouncing off the tectollian shielding, a semantic change can take up to 10 minutes to complete.

    With items like firefox you can encourage this by blocking that process from opening new network connections (yes, the ability to block by process in iptables is not there by mistake).

    But how do you maintain the force field without overloading the zero-point energy generator?

    So Linux is semantic-change compatible in its userspace design.

    Is it compatible in hyperspace design, though?

  24. oiaohm

    Phenom
    “Guys, sorry to interrupt your nice discussion about KSlice, but I have a more practical question to Ohio: what is the difference between a full system reboot and restarting every binary in memory?”
    When dealing with a system with over 16GB of RAM, lots.

    Rebooting the system loses everything cached in RAM, so you have to re-read all of it from disk, and even the fastest solid-state storage is not as fast as data already in RAM. Reboot equals pain: lost performance from cache disruption. This is why refilling caches after a reboot was developed, but it is still slower to have to refill the cache.

    Flying Toaster and Linxusoid really the funny thing is rolling semantic is not that hard.

    Really the issue Linuxsoid is point to is part the solution todo semantic changes without bring house down. It is why Linux is superior.

    If you allow the old threads in kernel to run there course at some point they all end. This is kinda intentional. You don’t want a request in a kernel in a never ending loop.

    Userspace you can apply semantic changes as long as you make sure old stop at some point and be replaced by new. The allowing old binary to remain in memory after it deleted is about semantic changes. So the process can be safely stopped.

    Exactly what says you cannot a semantic kernel change you is not allowing old and new to live side by side for a while. Doing a semantic change by killing off the old behaviour instantly is impossible you will miss something so break system. Rolling a semantic change into a kernel using a handler to roll it in. Is not impossible. Ksplice uses the roll semantic change in.

    Yes shock to roll a semantic change into the Linux kernel takes less than 5 mins in most cases. Its a rare thread in Linux kernel space that lasts past 5 mins. Even so all new started threads by kernel in that time are fixed against the flaw. This has now made attackers life many times harder by reducing the surface area of the attack. The surface area will keep on reducing until it don’t exist if the semantic patch has updated everywhere. Now if you have some closed source binary driver it may be repeatability hitting the handler emulating old. Still cure-able without rebooting but it will equal pain from taking that driver off line. Again you can place it in schedule.

    Linux is basically using a different solution MS. The different solution handles semantic change.

    Splicing in userspace is normally not done in Linux. Microsoft does it yes. But there is really no major advantage to it other than risk that you have failed to detect the patch is semantic so has blown a user space process up in some latter creative way that could be days latter(yes MS has done this). Normally if you don’t care about instant fix right now normally short distance down the track an option to restart or end that process will appear. Like a user logging out.

    Items like firefox you can kinda encourage it by block that process from opening new network connections(yes that means to block by process id in iptables is not there by mistake). Person does not lose what they already have. But they are not at risk either. Yes this is sometimes exploiting human nature to get the user to restart what needs restarting.

    With services with restart option they contain some intelligence. They work out when the point that the service is safe to stop and start again. A rolling application of a semantic change is way simpler than a instant application of a semantic change.

    NT design takes the path that you can not delete any dll in use. So closing the door to apply semantic changes to userspace while running. Since this disabled old running next door to new disable applying semantic change without rebooting the system at times. So when you reboot system you nuke what is cached in ram. Killing system performance after reboot. Basically how brilliant is this idea. Seams like a good idea having disk and memory matched until you are applying a semantic change. Where having both out of alignment temporary help you.

    There is only 1 process in userspace on Linux that you would consider splicing in most cases. It is the init process. Process 1. The process that cannot be terminated without terminating the system. But as part of the change from initrd to filesystem the process 1 is changed. This in case of a security issue allows init to be changed on the fly.

    There is nothing in the userspace of Linux that cannot be stopped; everything can be stopped somewhere as part of normal operations. So the Linux userspace design is compatible with semantic change, something Linux got from POSIX. People don't consider that the Unix people were old and wise about security issues.

    This is the funny part: Linxusoid is not finding a fault in Linux. That Windows cannot do this is a fault in Windows.

  25. Linxusoid

    This is wrong. You need handling code to transfer state. The thing you are missing is that Linux does have it: for semantic updates you need handling code.
    Now, would you please describe an AUTOMATED algorithm that, given just the implementation of an RB tree and an AVL tree, would automagically generate code to convert RB trees to AVL trees. Atomically. Thank you so much, I'll give you a credit in my Turing Award inaugural speech.

    The simple fact is my script did run over every process, and with a little work it would list every old library left in place.
    There is more than one simple fact here:
    1. Your script is not working, and any fix you make to blacklist those “expected” binaries will leave it fragile.
    2. It's not part of the update system (neither deb-based nor rpm-based).
    3. It doesn't solve anything. Since you don't understand things “in general”, let's go by example. Here is a screenshot of a fairly simple Chromium session. Let's say you've updated Flash. Please give me a step-by-step description of how you would restart that process and re-establish the connection with the “browser process” over that --channel=3227.0x7…, which is apparently not listened on by anyone anymore.

  26. Flying Toaster

    @Pogson

    GNU/Linux uses them to their full potential.

    Sure, so are you going to “prove” that with more stories of you comparing derelict/neglected Windows machines with Linux machines that you had set up at unspecified times for the primary purpose of replacing the former, again?

    And how did you go with the answers to my questions (see “#comment-81178”)?

  27. Flying Toaster

    @Pogson

    M$ is taxing thin clients to death.

    I am feeling a bit lazy today, so I’ll just quote one Linux user on this Microsoft “tax” thing:

    “First, someone said my arguments that there’s not a “Microsoft tax” were hollow and not serious. In fact, I pointed out that the only people to whom it can even be considered a tax are those who want a different operating system than most people; I added that after explaining that most people — the masses — who buy computers demand they come with Microsoft Windows. Those mainstream users, who make up over 90% of the computer market, aren’t paying a “tax” to Microsoft or anyone else. They’re getting a value-added feature at a lower price than they would get if computers came without any operating system and, accordingly, no savings from a bulk OEM license agreement.

    “The second commenter at that particular site makes a similar, common error among those who consider bulk OEM license agreements some kind of tax — that a lack of computers with Linux or any other operating system is proof of some kind of monopolistic “tax” on buyers of OEM hardware. I think that’s a non sequitur.

    “The reason OEMs install Windows by default is that more than 90% of buyers expect a computer to come with an operating system, and that mainstream buyers demand the installed operating system be Microsoft Windows.

    “It’s about supply and demand. That’s all. Little or no demand for Linux, very few models are offered with Linux. Great and nearly 100% demand for Windows, guess what gets installed.”

    I shall leave the rest for readers here to read on at their own time.

  28. Flying Toaster

    @ohmie

    Yes, Flying Toaster, how is the unknown a problem when you blacklist the old function-pointer call, just as you blacklist the old data location of a semantic data change?

    It has nothing to do with “blacklisting” pointers. It's just like how Userful's Edubuntu-based solution has nothing to do with Microsoft's NT-based offering or NDAs or flying saucers or the Loch Ness monster. Just admit it – you don't know anything about the stuff you are talking about, and you are just trying to make up for your lack of knowledge by filling the gaps in your reasoning with your imagination. It's not working, and it has been hilarious watching you paint yourself into a corner with things that aren't really there.

    Now, with that out of the way…

  29. oiaohm

    Flying Toaster, in fact the links are a data struct; that was worked out later. To change the data struct you have to cause a full pause. The 2009 alteration allowing data-struct substitution also has to handle the cure.

    “are always on the call stack of some thread within the kernel”. Updating these functions means that you will need to go and alter every thread that utilizes these functions – which can be anything known or unknown to ksplice.”
    Funny. That is the exact same problem as changing data structs: there can be something referencing them that you don't know about. You really do need to look at the later versions.

    “The tricky part is not about hunting down each and every process running in the system. It is about the differences in the startup procedures of individual services and apps and how you are going to go about restarting them safely. But it's been a blast nonetheless watching you make up stories about writing scripts for steps that don't really matter at the end of the day.”

    That is the tracking, so you know how they were started. systemd makes this simple. Dead simple. It has done all the tracking my old scripts had to do.

    Really, ps -ef fails. If something forks off and breaks the back-link to its parent, you no longer see that it was started by, say, cups. So instead of restarting cups you kill the process, disrupting a print job. Yes, the processes require tracking, and ps -ef is nowhere near good enough.
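
    Under systemd the kernel's cgroup bookkeeping survives the double fork, so the owning service can still be recovered (a sketch; the PID is illustrative):

    cat /proc/1234/cgroup    # ...name=systemd:/system/cups.service
    systemctl status 1234    # resolves a PID back to the unit that started it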

    “And just like ksplice it cannot do “semantic updates”, meaning it can only update code – not state.”
    This is wrong. You need handling code to transfer state. The thing you are missing is that Linux does have it: for semantic updates you need handling code. Handling code is also kind of evil: it exploits the MMU. You tag the old call location's memory block out, so anything old calling it triggers the MMU, which triggers the handler. A nice thing about the Linux kernel is that it is relocatable in memory.

    The same method applies to a semantic data-struct update as applies to a semantic function update.

    So there is no foul. You avoid the halting problem by never causing the event that trips it. The 2009 experiment with kernel-to-kernel replacement via kexec showed how to transfer state without bringing the house down: mask out the old kernel, and everything calling into it is an old call that has to be updated and its state transferred.

    Yes, something interesting about the Linux kernel is that pages can be pushed out of memory and have to be retrieved by the memory management. That is the underhanded trick that avoids the halting problem.

    Basically you need to look at Ksplice's semantic-update solution. Simple yet 100 percent effective; there is no loophole.

    “For patches that do introduce semantic changes to data structures, Ksplice requires a programmer to write a short amount of additional code to help apply the patch. This was necessary for 12% of the updates in that time period”

    That handler code is currently not automatically generated, but done right it can apply any semantic change to the Linux kernel. Data is the most common thing to need semantic replacement; functions can be done just as simply. It cannot be done on systems without an MMU, because it is simply a trap on old pointer use: attempt to access the old data structure or use the old function and guess where you end up; the MMU has just page-faulted on a blocked-out address, leading to the handler, which patches that call and moves on.

    Linxusoid, yes, they did dodge the halting problem: by not attempting to swap everything over in one big cliff jump. You have an MMU; exploit it. That also solves all the cases where something you did not know about might be called.

    Yes, Flying Toaster, how is the unknown a problem when you blacklist the old function-pointer call, just as you blacklist the old data location of a semantic data change? The unknown will reveal itself to the handler in time, so the handler can update it. Don't be a mind reader; let the program tell you what you need to change. That makes the problem millions of times simpler.

    A simple solution is normally the cure for the most complex problem.

    It was the 2009 experiments using kexec to rip and replace the kernel on the fly that showed how. Of course the handlers to do that were huge.

    Linxusoid, my script did work. I did not tune it; I said it would find anything that had been deleted, and of course the temporary sys volume has been deleted. I am not here to give you perfect administration scripts; if you decide to pay me for my time, I will.

    The simple fact is my script did run over every process, and with a little work it would list every old library left in place.

  30. Robert Pogson

    oiaohm wrote, “That is the only way to buy a License for commercial usage.”

    There is no need to buy a licence for T150 or T200. HP sells them.

    Userful gives this comparison of GNU/Linux and that other OS with the T200:
    http://www2.userful.com/products/product-comparison/userful-multiseat-vs-windows-multipoint

    Licensing
    With Userful, there is one license type to purchase:

    1) A Userful MultiSeat license per user station

    With Microsoft Volume Licensing Academic Programs, three license types must be purchased:
    WMPS 2011 (Standard/Premium)

    1) A Windows MultiPoint Server 2011 Standard/Premium license for the host computer
    2) A Windows Server 2008 or later Client Access License (CAL) per user station
    3) A Windows MultiPoint 2011 CAL per user station

    M$ is taxing thin clients to death. GNU/Linux uses them to their full potential.

  31. Flying Toaster

    Updating these functions means that you will need to go and alter every thread that utilizes these functions – which can be anything known or unknown to ksplice.

    I misread.

    The problem with updating such functions is that there will always be threads entering and leaving the same segment of code in memory at any given time, and that altering such code will result in changes not anticipated by the calling threads. And since by design ksplice enforces a safety measure that avoids altering code segments mid-execution, the update process will always end up failing every time.

  32. Flying Toaster

    Flying Toaster is again reading 2008 documentation. All the 2008 ksplice faults have long since been fixed.

    Yeah, right.

    Yes, there have been schedule(), sys_poll(), and sys_waitid() patches done by direct code execution since 2009. The same issue of not being able to change data structures prevented changing them in 2008; changing data structs on the fly removed a lot of limitations.

    Sure. The more I talk to you, the more you just make things up on the spot. Now, why not change the way things work here a little by telling you why there aren't patches for schedule(), sys_poll(), or sys_waitid() and why they can't be done.

    Look at the paper cited in the lwn source. Pay particular attention to what it says under “Capturing the CPUs to update safely” and notice what it says about “non-quiescent” functions. The reason ksplice cannot update these functions has essentially nothing to do with data structures (surprise!) but the simple fact (see what I did there?) that these functions “are always on the call stack of some thread within the kernel”. Updating these functions means that you will need to go and alter every thread that utilizes these functions – which can be anything known or unknown to ksplice.

    One example is “sys_waitid()”, which is part of the Linux syscall interfaces.

    So much for making stuff up on the spot, eh?

    What makes my script solution complex is the tracking, so yes, I know that process X belongs to service or application Y.

    You know the easier way to do that? “ps -ef”.

    The tricky part is not about hunting down each and every process running in the system. It is about the differences in the startup procedures of individual services and apps and how you are going to go about restarting them safely. But it's been a blast nonetheless watching you make up stories about writing scripts for steps that don't really matter at the end of the day.

    Userful could not get a license from MS to sell a version with kernel modifications, so “Windows MultiPoint” is a compromise, just like the Xen extensions for Windows were. Same limitation: Academic Volume License only. Yes, you can have an academic kernel-signing key if you sign an MS NDA.

    And Santa Claus is coming to town, apparently.

  33. Linxusoid

    > A little extra code remakes the data structure in memory.
    And, ta-dam, Linux developers just solved the (provably undecidable) halting problem.
    Everything is possible if you just put enough eyeballs on it.

  34. Linxusoid

    Gentlemen,
    I wonder just why you are so fixated on X? You seem not to understand that the problem lies in Linux package managers – not specific packages.

    As for oiaohm's script: here is a screenshot from a freshly booted system (uptime is in the last line). Don't you think that's A LOT to restart right after you've just rebooted? What excites me about bash scripts is that they never work.

    Now, ksplice. Windows has shipped HotPatching since 2004. Patented technology, by the way. Unlike ksplice it can update EVERYTHING (both user and kernel mode). And just like ksplice it cannot do “semantic updates”, meaning it can only update code – not state. It doesn't matter whether oiaohm believes it or not. One of the reasons languages like Erlang allow live code replacement is that they are “pure” (i.e. bear no state). Is Linux written in Erlang or Haskell? Thought not.

  35. Phenom

    Guys, sorry to interrupt your nice discussion about KSplice, but I have a more practical question for Ohio: what is the difference between a full system reboot and restarting every binary in memory?

  36. oiaohm

    Flying Toaster is again reading 2008 documentation. All the 2008 ksplice faults have long since been fixed.

    Yes, there have been schedule(), sys_poll(), and sys_waitid() patches done by direct code execution since 2009. The same issue of not being able to change data structures prevented changing them in 2008; changing data structs on the fly removed a lot of limitations.

    “Yep, every single process and its child, all without needing to know what they do (as shown in your method and reasoning) or even what they are.”

    What makes my script solution complex is the tracking, so yes, I know that process X belongs to service or application Y. Maybe the reason I don't give the script up is that it is not the simplest.

    I have shown how to find the processes using out-of-date files. What I have not given you is the tracking. That is a problem a skilled administrator can solve.

    “I didn’t notice Microsoft had changed their company name to Userful.”

    Really, how dumb are you, Flying Toaster? I guess you think MS made their defrag tool as well (the Windows defrag was made by Symantec, by the way). A lot of products sold as MS products are not developed or maintained by Microsoft.

    Is it not funny that MS cannot release a version of Windows MultiPoint for commercial usage? To do that they would have to talk terms with Userful.

    Userful could not get a license from MS to sell a version with kernel modifications, so “Windows MultiPoint” is a compromise, just like the Xen extensions for Windows were. Same limitation: Academic Volume License only. Yes, you can have an academic kernel-signing key if you sign an MS NDA.

    Basically, if you want to sell a custom version of Windows you have to do it on Microsoft's terms and make no profit, unless you sell it academic-only and have signed an MS NDA; then you can go for it. But you must sell it with MS as the distributor.

    I told you the same company makes both. I was not kidding. You are so blind you cannot see past some simple MS wallpaper.

    The simple fact here, Flying Toaster, is that until you start digging out post-2009 ksplice documentation you have nothing. You will find that you have absolutely nothing to back your case.

  37. Flying Toaster

    @ohmie

    Sigh… Same walls of semi-intelligible nonsense. Well, here goes nothing:

    Yes, I have used ksplice a lot….

    And I have flown in a space shuttle with a supermodel. True story.

    A little extra code remakes the data structure in memory.

    Thanks for repeating what I said.

    Basically the documentation talks about the outside possibility of not being able to remake it, but it's never happened.

    “It's never happened”. By chance, not engineering.

    And, by the way:

    “Kernel functions like schedule(), sys_poll(), or sys_waitid() are likely to always have processes running within them. In cases like this, ksplice will eventually give up and inform the user that the patch cannot be done; it is simply not possible to make changes to those particular functions.”

    I’m looking forward to reading from you about how such concurrency issues are supposed to have been fixed despite the pure impossibility of such.

    This is why I have a script to do the restarting of obsolete binaries.

    Yep, every single process and its child, all without needing to know what they do (as shown in your method and reasoning) or even what they are.

    Seriously, ohmie.

    That I will not give to you. Basically I am not going to skill you up to take my job.

    Sure, with your mind-blowing communication skills and most certainly fantastic documentation skills, there is no doubt you are the hotshot in town.

    I have not found a server yet where there was not a point somewhere in the day when you could disrupt an application without being noticed. A reboot will be noticed. Particularly when you time it with a low CPU load. Scripts enabled me to sneak it under the radar.

    I staple Lipton teabags to my forehead so I can sneak into the ladies’ restroom without getting noticed. They work every time.

    It does, because it enables you to do the update one piece at a time.

    Sure, why not? I get into my jumper every day by slicing it into halves so I can put it on one side at a time.

    Shame that it helps nothing with the interruption of any application, either.

    The worst I have seen is 1,582 applications needing a restart.

    And I have seen Death himself. He just muttered some stuff about the Holocaust and then walked off. What a weirdo!

    By the way, it is possible to splice applications in user space to make them migrate to the new version of a library and cease using the old one.

    Did I mention I wore my pants in halves as well?

    If there is a flaw bad enough I might consider splicing userspace applications. Boy, it would have to be bad, and I would have to be maintaining something like a sat uplink that we might lose if the program restarts.

    Meh. I don’t know about you but I have a black belt in software karate and can easily slice applications into parts with just one hand. Do you want the name of my dojo?

    Flying Toaster, the claim is that Linux does not require reboots. This is absolutely true. There is no case you can make that Linux requires reboots.

    I can't argue with that, now can I?

    The claim has never been that it is 100 percent disruption-free. Disruptions are lower because no reboots are forced.

    Grrr… But I thought there was something I could do about those sneaky distro devs living 3 time zones away from my town!

    Linux does not interrupt a teacher in the middle of class by telling a student their computer must reboot now; Windows Update has very bad habits of doing this.

    Yeah, and not to mention the pesky Sasquatches and Yetis that keep interrupting everything!

    This is partly an English fault on your part, Flying Toaster.

    I thought for a moment that you were going to talk to me about German faults. Phewww…

    The real fact of the matter is that it requires software called Userful MultiSeat for Linux.

    Thanks for telling me what I already knew about Userful.

    Userful is also the writer of the Windows MultiPoint Server software, without which the t200 device will not operate either.

    Really? I didn't notice Microsoft had changed their company name to Userful.

    Yes, this is underhanded and sneaky. It looks like there are two competing products because one comes from Userful direct and one comes from HP. Really they are not.

    Sneaky indeed.

    So the HP 100-dollar unit is the host-machine vampire.

    I don’t know about you but I am most definitely with Team Jacob.

    Due to me being commercial, I cannot use an ms6200 server.

    Well, thanks for stopping me from enjoying my favorite TV show anyway.

    That a Windows version appears to exist is due to some very warped bending.

    Windows Moebius Ring Edition!

    I get asked at times to do some very strange things. The boss wanted to work out the viability of this kind of solution.

    It’s nice talking to you, too.

  38. oiaohm

    Flying Toaster, you need to read everything.
    “Requires HP MultiSeat ms6200 Desktop with Windows® MultiPoint™ Server 2011
    Academic Volume License”
    Due to me being commercial, I cannot use an ms6200 server. The Academic Volume License forbids commercial usage. So both the HP t150 and t200, if you use the full HP-documented solutions, are paperweights to me. The only way I can use them is Userful MultiSeat for Linux. That is the only way to buy a License for commercial usage. I have set up t150 and t200 units in an Internet cafe; this requires using the Linux version. That a Windows version appears to exist is due to some very warped bending.

    Userful MultiSeat for Linux runs RDP clients to a Windows terminal server. Yes, this works out to be a low-power-usage combination.

    I get asked at times to do some very strange things. The boss wanted to work out the viability of this kind of solution.

  39. oiaohm

    Flying Toaster, so far with ksplice since 2009 there has not been a single patch that could not be applied. Yes, I have used ksplice a lot. A little extra code remakes the data structure in memory. Basically the documentation talks about the outside possibility of not being able to remake it, but it's never happened.

    Flying Toaster
    “restarting all the obsolete binaries in memory manually is tedious and far off from the interruption-free experience that Pogson has always boasted about.”
    This is why I have a script to do the restarting of obsolete binaries. That I will not give to you. Basically I am not going to skill you up to take my job.

    I have not found a server yet where there was not a point somewhere in the day when you could disrupt an application without being noticed. A reboot will be noticed. Particularly when you time it with a low CPU load. Scripts enabled me to sneak it under the radar.

    “separating the “client” and the “server” helps nothing about interruption due to updates.”

    It does, because it enables you to do the update one piece at a time, so you don't cause CPU resource starvation. If you restart everything that is running old stuff at exactly the same time, you might as well reboot, because the system is going to lag out. The worst I have seen is 1,582 applications needing a restart. Not something you want to start all at once.
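
    A rolling restart needs nothing fancy (a sketch; services.txt is a hypothetical list produced by the tracking):

    #!/bin/bash
    # Restart affected services one at a time so the box never starves for CPU.
    while read -r svc; do
        service "$svc" restart
        sleep 30    # spread the load; tune to taste
    done < services.txt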

    By the way, it is possible to splice applications in user space to make them migrate to the new version of a library and cease using the old one. The splice functionality of ksplice can be used on user-mode applications. It is normally not worth the hassle or risk: you are talking a few seconds in most cases to restart an application if you time it with low load, since in most cases the application does not have to be read from disk, being already in memory. Once a library is in memory it is shared, too.

    If there is a flaw bad enough I might consider splicing userspace applications. Boy, it would have to be bad, and I would have to be maintaining something like a sat uplink that we might lose if the program restarts.

    Flying Toaster, the claim is that Linux does not require reboots. This is absolutely true. There is no case you can make that Linux requires reboots.

    Are there conditions that have to be met so you don't cause security issues by not rebooting?
    Yes, there are conditions.

    Is the administrator required to do extra work, or set up scripts to do extra work, if you don't reboot a Linux machine?
    Yes, this is required. If you are too lazy to do this you should not be running Linux rebootless.

    Basically there is no such thing as a free lunch. To avoid reboots you pay a price in configuration and management. It is not a magical freebie.

    The claim has never been that it is 100 percent disruption-free. Disruptions are lower because no reboots are forced.

    Linux does not interrupt a teacher in the middle of class by telling a student their computer must reboot now; Windows Update has very bad habits of doing this.

    Question: does a disruption that no one sees count?
    From Robert Pogson's point of view it does not count. So if you schedule the disruption to happen when no one will notice, like 2 am, it is a non-issue to a person like Robert Pogson.

    Interruption-free is different from disruption-free.

    To be an interruption, someone must notice. So yes, Robert Pogson's claim of being interruption-free is fully valid even if he is rebooting the server and clients at midnight every night. As long as no one notices, it is not an interruption of service.

    But to you and me, Flying Toaster, it is not 100 percent disruption-free. The system is being disrupted in a controlled and predictable way. Of course you want that disruption to be as short as possible so it is less likely to be noticed. That is the art of 99.999 percent uptime: you never get to 100 percent uptime with no disruptions.

    This is partly an English fault on your part, Flying Toaster. You do not schedule an interruption to service; you schedule a disruption to service. An interruption is something that happens when it is not expected; a disruption can be expected. A minor difference in meaning, but in this case it is a critical minor difference.

    With the HP t200, some sections of the HP site say it requires Windows and some sections say it requires Linux.
    http://userful.com/press/hp-multiseat-t200-release
    The real fact of the matter is that it requires software called Userful MultiSeat for Linux. I can understand this mistake, due to the agreement between HP and Userful: Userful sells the software for Linux servers and HP sells the Windows servers. But the devices are compatible with both.

    Userful is also the writer of the Windows MultiPoint Server software, without which the t200 device will not operate either. No matter which one you buy, you are paying Userful.

    Yes, this is underhanded and sneaky. It looks like there are two competing products because one comes from Userful direct and one comes from HP. Really they are not.

    By doing this they can get away with this: http://www.userful.com/products/product-comparison/userful-multiseat-vs-windows-multipoint

    Flying Toaster, the mistake you just made is a mistake anyone could make. It is just your lack of knowledge.

    I just noticed the HP t5325 was the one I was thinking of at 100 dollars. HP has stopped making that one. The HP t5335z is HP's generic client now; you are better off with a Wyse T50 than the HP t5335z.

    So the HP 100-dollar unit is the host-machine vampire.

    Of course there is still the NorhTec MicroClient TC at 100 dollars, and many others for under $50 each from China in lots of 10, including worldwide shipping. Yes, spend 500 dollars and get 10 shipped anywhere. Yes, they are higher spec than the NorhTec MicroClient.

  40. Flying Toaster

    I have used xrdp. It works. RDP is an open standard. I prefer X.

    So it's not so much a “lie” as it is simply a personal preference, is it? And an officially unsupported one at that. Oh, and mind if I ask whether you actually used “xrdp” or the Userful Edubuntu alternative specifically on an HP t200 thin client, despite the obvious licensing scheme attached, at the set price of $99?

    Heck, I would be thrilled even if you could just tell me your experience on the Userful solution with the HP t100 thin client.

    Or how about just giving me some good ol' fashioned answers to those questions I have asked you, namely the one on when exactly you got those Acer 17″ monitors, and the one on how much you paid for the thin clients from DevonIT?

    So many questions, yet so few answers.

  41. Flying Toaster

    FT, stop lying.

    “Requires HP MultiSeat ms6200 Desktop with Windows® MultiPoint™ Server 2011″

    That's the description officially given by HP themselves. What they do not mention, however, is some hack-your-own solution that you have come across on YouTube. I am sorry, but it seems that the whole world just won't stop lying to you.

    Oh, and how did you go with the answers to my questions?

  42. Robert Pogson

    FT, stop lying. Userful, a GNU/Linux multiseat system, can use those devices. M$ would like people to think they need to pay the M$ tax, however. The protocol is RDP and that can be provided on GNU/Linux systems by xrdp:
    apt-cache search xrdp -f
    Package: xrdp
    Description-md5: b98c1889e17be6136503794b3491891b
    Description-en: Remote Desktop Protocol (RDP) server
    Based on research work by the rdesktop project, xrdp uses the Remote
    Desktop Protocol to present a graphical login to a remote client.
    xrdp can connect to a VNC server or another RDP server.
    .
    Microsoft Windows users can connect to a system running xrdp without
    installing additional software.

    see http://www.xrdp.org/

    see a demonstration on YouTube

  43. Flying Toaster

    Flying Toaster, don't read documentation that is 2 years out of date; otherwise you are a twit.

    Thanks for repeatedly calling me a “twit”.

    Oh, and ksplice can’t modify data structures in a kernel instance on its own. That’s done by additional code that comes with individual patches, and there is no guarantee that such additional code is always feasible. Again, read your own source.

    The rest all pretty much boils down to the points that:

    1) no one cares about Wayland,
    2) restarting all the obsolete binaries in memory manually is tedious and far off from the interruption-free experience that Pogson has always boasted about, and
    3) separating the “client” and the “server” helps nothing about interruption due to updates.

    I would be a bit more inclined to read your walls of text if they were not next to indecipherable, with every other sentence based on unreality made up on the spot and overall tangential most of the time to what is being discussed. I am sorry, but your attempts at grabbing my attention are nothing but a waste of my time – and yours.

  44. Flying Toaster

    DevonIT. They use a VIA CPU and a bit of RAM. We ran a GNOME desktop on the terminal servers.

    The rest of his comment is garbage. We bought Acer 17″ LCD monitors, for instance.

    Thanks for clarifying. Those who have been following your blog are pretty well aware of your story. Well, did you get a bunch of 17″ LCD monitors for them as well?

    Oh, and what year was it that you got those monitors, by the way? Are you just going to make some timeless comparison like you did with the PC XT and Android phones last time? And how much did you spend on those thin clients?

    You won't be going so low as to stop me from asking questions by banning me from your blog, right?

    Today, there are perfectly usable thin clients at $100 and less.

    Yep, shipped all the way from an obscure supplier in China. Read your own source.

    HP makes a thin client you can buy for $99 including keyboard and mouse.

    More like “Starting at $99”. Advertising speak, as usual.

    The $99 version requires Windows Multipoint Server, by the way.

  45. oiaohm

    Flying Toaster
    “A real thin client (Citrix or SunRay) would cost you $300-500 brand new (without monitor)”
    The HP 99-dollar units do those protocols. http://thinstation.org/ It's not like paying for the Citrix or Sun brand is mandatory.

    The Citrix thin-client devices are Linux inside anyhow; in fact HP makes them. So it's basically the 99-dollar HP device in a Citrix-branded box with 200 to 400 dollars added to the price. You cannot help the stupid. No quality difference whatsoever between the HP and Citrix devices.

    Also, Flying Toaster, you need to brush up on maths. 40K at a cost of 100 dollars per seat means you are talking 400 of something.

    40 instances of LibreOffice is not bad. KDE 4.x with OpenGL acceleration off is also not bad.

    For the size of machines Robert Pogson is talking about, as long as you don't need OpenGL acceleration they are big enough to run everything else, Flying Toaster. More of my setups are thick clients without hard drives, because I am doing stuff that needs OpenGL. Thin clients do have limitations; not everyone will run into them.

    Flying Toaster, and that bug is ksplice-patchable, so no reboot is required to fix it. Not like many other Windows privilege exploits that only close when the system is rebooted. No matter what we do, software will have bugs. The difference is that Linux has the systems to deal with them.

    The difference here is that Linux publicly admits these sins. A lot of MS Windows privilege exploits are still in the wild.

    Linux machines are not beyond compromise, this is true. Linux admins should take system inspection seriously. Running thin clients does reduce your surface area to audit.

    Running thin/thick clients removes that one Windows machine that for some reason is not connecting to the local WSUS server, so is not updating, and so gets badly infected. With thick or thin clients, that one machine with a dicey network connection falls over outright, where a full client can be failing to update but still appear to work: a stealth security problem.

    The issue is that it can be failing to update in a way where the WSUS server reports the Windows machine as fully up to date when it is not.

    Basically, the more hard drives with individual OS's on them, the higher your odds of infection. Changing from full clients to network-server-controlled thin/thick clients reduces infection issues.

    You point to updates not being pushed through to Linux RAM. There are also issues with Windows clients and updates not moving through the local LAN when the LAN has minor switch issues that disrupt nothing else. Stealth time bombs.

    Apt and yum report correctly when they have failed to download updates; Windows Update needs some work in this regard.

    Yes, there are two halves to deploying updates: 1) getting them to the machine, and 2) getting old copies out of memory. Linux does getting the updates to the machine perfectly. On old copies in memory, Linux is not quite as good; the administrator has to be aware of this weakness. This is knowing the OS you are using.

    Getting updates to a Windows machine is dicey. The result can be that a damaged update is applied so the computer doesn't boot, which can sometimes be blamed on malware when it is update failure. If the Windows machine does get the update correctly, then yes, the auto-reboot mess does keep on top of the old copies in memory.

    Linux does provide the required frameworks to find and kill applications running old libraries without rebooting the machine. So Linux could do better than it currently does by default, with an add-on program.

    Windows needs a core part, the update solution, replaced. Windows also needs cleaner support for running as a thick client with a network-provided read-only filesystem.

    A network-provided read-only filesystem for Linux thick clients kills a lot of privilege-escape attacks. The same is true for Linux running in a live-CD configuration: a lot of attacks just fail.

    The server hosting the Linux thick client's boot directory can update it. This is why apt does not clean the memory of old libraries by default.

    The reason is that you might run apt on a server to update thick-client programs, but the software from that directory runs on different computers, so the apt on the server cannot see the memory of the thick clients and cannot clean up their memory space.

    This is why cron, puppet, and cfengine are what you use to run a search for old libraries and clean up.

    Apt is a disk-update solution. A memory-update solution is a different section of the administration of a Linux box.
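
    The split looks like this (a sketch; the sweep script name is hypothetical):

    # /etc/cron.d/stale-libs on each thick client: disk updates happen on
    # the boot server, this nightly job only hunts processes still mapping
    # deleted (replaced) libraries and restarts or reports them.
    0 1 * * * root /usr/local/sbin/restart-stale-processes.sh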

    Most people claiming that Linux needs reboots fail to understand that Linux is designed for thick-client operation, which is commonly used in data-processing clusters. Yes, management scripts for clusters do search for old libraries and applications and terminate them with extreme prejudice.

    Thick clients are a different beastie from a desktop machine. Yes, the idea that updates to an OS are applied by a machine that will never run the thick-client OS, and does not even have to be a compatible arch, makes people go “what the”, but this is normal thick-client practice.

    Understand setting up Linux for thick clients and you understand what apt does and how it fits into the jigsaw of system management. cfengine and puppet are two administration frameworks that will run on the machine running the OS, whereas apt is optional to run on the machine running the OS.

    Having a software-administration framework like cfengine or puppet is a key part of Linux security. Most breached systems turn out not to be running software administration of any form.

  46. Robert Pogson

    FT wrote, “where did you acquire those thin clients from?”

    DevonIT. They use a VIA CPU and a bit of RAM. We ran a GNOME desktop on the terminal servers.

    The rest of his comment is garbage. We bought Acer 17″ LCD monitors, for instance. The unit price was $140. Today, there are perfectly usable thin clients at $100 and less. HP makes a thin client you can buy for $99 including keyboard and mouse.

  47. Flying Toaster

    The difference is that thousands of machines running that other OS are compromised on the zeroth day while that’s rarely a concern for GNU/Linux.

    Penn and Teller had better go and do an episode of that B-whatchamacallit show with you.

    I have seen thousands of machines compromised by malware running that other OS but I have never seen a single malware running on GNU/Linux in a decade of running thousands of machines.

    So pure anecdotes are the best you can come up with to justify your double standards. Mind you, there are the proven facts that Internet-facing Linux machines have been compromised year after year and that malware on Linux did and does exist. I'll leave the relevant adjustments for popularity (which will quite frankly not work in your favor in any way) to the readers as an exercise.

  48. Flying Toaster

    The first time I went from Lose ’98 on 30 thick clients to GNU/Linux on thin clients I went from one user being messed up per hour to going six months with no such problems except one power failure.

    Fascinating stuff if we were still in 1999 and pretending that NT 4 didn’t exist. Otherwise, we both know how seriously one should take such a story – just like the one about the Earth blowing up on 21st December.

    Largo uses servers that cost ~$40K each, perhaps $100 per client. My servers at Easterville cost ~$1200 for 40 clients (4gB RAM, amd64 2X, RAID1 on 4 40gB hard drives, hefty PSU, gigabit/s NIC), about $30 per client.

    Running what? 40 instances of Sugar Labs? Big deal.

    Oh, and where did you acquire those thin clients from? Oh, wait, you probably just repurposed a bunch of old PCs and pretended that they were so. Am I right?

    Real energy saver that one.

    One failed with infant mortality of RAM and another with a hard drive in 5 years of operation.

    My granny is pretty infant, too.

    Compared to the few per week failure rate we would have had with that other OS on thick clients, it was heavenly.

    Are you seriously suggesting that Windows broke your hardware? Honesty is your friend, dear.

    Even with expensive servers, the savings with thin clients are huge. My per seat cost for the thin client was $140 for the box, $100 for the monitor, and $10 for keyboard/mouse combo which also doubled as a USB hub.

    There are a few things I must point out before the facts get fudged even further by you:

    1) Those you are talking about were not thin clients. They were full-blown PCs with their own hard drives, connected to remote desktop sessions on an on-demand basis.

    2) A real thin client (Citrix or SunRay) would cost you $300-500 brand new (without monitor) and $150-300 refurbished. And, don't worry, it breaks just as easily as a “thick” client.

    3) $100 would get you a CRT monitor in 2005 (second-hand or otherwise, and most definitely beat-up second-hand in later times). Depending on your luck, you would probably get something with a reasonable refresh rate, but, in any case, it's hardly something one would recommend in a production environment, particularly where kids are involved.

    At the time a thick client would have cost about $500 on the cheap end.

    Just like a brand-new thin client – the real deal, not your dumpster-dived sort-of-kind-of “thin” client with an antiquated hard drive.

    Some things you do on a thick client like checking the print queue every second are stupid on a thin client.

    You can easily check the print queue on a thin client. Ever tried Windows Terminal Server? Oh, you obviously haven't. Way to live your life inside a bubble, buddy!

  49. Robert Pogson

    I saved big on thin clients by not bulking up the server but depending mostly on the reliability of GNU/Linux. Largo uses servers that cost ~$40K each, perhaps $100 per client. My servers at Easterville cost ~$1200 for 40 clients (4gB RAM, amd64 2X, RAID1 on 4 40gB hard drives, hefty PSU, gigabit/s NIC), about $30 per client. They were incredibly reliable and we had multiples of them so clients could be shifted in minutes manually or in seconds automatically. One failed with infant mortality of RAM and another with a hard drive in 5 years of operation. Compared to the few per week failure rate we would have had with that other OS on thick clients, it was heavenly.

    Even with expensive servers, the savings with thin clients are huge. My per seat cost for the thin client was $140 for the box, $100 for the monitor, and $10 for keyboard/mouse combo which also doubled as a USB hub. $280 per client five years ago. At the time a thick client would have cost about $500 on the cheap end. Our performance was at the top end because of the RAID and cached files. At first the system was jerky until I found some weirdness in the configuration. Some things you do on a thick client like checking the print queue every second are stupid on a thin client. Same with insisting on synchronous writes to NFS. Same with “smooth” scrolling. After a day or so, users could not tell they were on a thin client at all.

  50. Phenom

    …a very expensive setup…

    Indeed. And where exactly comes the big cost-saving of using thin clients?

    Thin clients have their definite advantages, but saving costs is not always one of them. Money can be saved by reducing IT staff at branch offices, for instance, but not much else.

  51. Robert Pogson

    Phenom wrote, “The question here is – what happens when the server goes down? Answer: all clients go down, with no chance to save their work.”

    Two things. “The server” may be a cluster. In Largo, they have a session server, a browser server and a word-processing server, each capable of handling hundreds of clients. If one of the application servers goes down, some data that was not auto-saved may be lost, but that has no effect on the users' sessions or other applications. Some installations put each client's application in its own chroot or virtual machine, a very expensive setup, to minimize the damage. I've done things like that for other reasons, like shortage of RAM or storage, but whatever the motivation it makes the system very tough. The first time I went from Lose ’98 on 30 thick clients to GNU/Linux on thin clients I went from one user being messed up per hour to going six months with no such problems except one power failure.

  52. Phenom

    @Pogs, you can do the same with any server software – MS Terminal Services, Citrix… These do the job much better than VNC when it comes to utilizing resources, managing sessions, and performance.

    The question here is – what happens when the server goes down? Answer: all clients go down, with no chance to save their work.

  53. Bob Parker

    Feb 5th, 2012 at 5:41 pm
    “There is no better way to say it.”

    Yeah…..there is….both Clarence and Flying Twit are full of crap as are all the M$ Bun Buddies that hang out here.

    @lpbbear I think you are being a little hard on these guys. They never actually use any flavour of Linux; they just glom words from the M$ web site somewhere for the sake of the monthly issue of M$ food stamps, coffee money or whatever.

  54. oiaohm

    Yonah, funnily enough, my prime goal at this stage is to bring a worse problem under control.

    I stopped myself from swapping letters around within words; now I am battling word swaps. Doing punctuation myself at this stage makes my word swapping uncontrollable.

    I do better on grammar in LibreOffice with its grammar checker; there is no web browser with an integrated grammar checker. Some of those grammar checkers include punctuation checking, so coders could help me here.

    With time and effort I am beating my problems slowly, Yonah. So the diatribes are language practice. Less boring. Be thankful: 10 years ago, caps and full stops in the right place were rarities.

    I do have to reintroduce commas. It is one extra character at a time, but that cannot happen until I am in a stable state.

    Look back at some of my old stuff: I was doing full paragraph and sentence shuffling. I still do that a little. The list of faults I am bringing under control is long, Yonah. Non-functional dyslexia is what I started with, and that thing is no joke: you have sentence-order shuffling, word-order shuffling within sentences, then letter shuffling inside words.

    “Sentence structure”: some of this comes from word-order shuffling in sentences not being fully under control yet.

    Punctuation and grammar are lower down my list of problems.

    The odds of a normal human being able to read what non-functional dyslexia produces without treatment are nil. The odds of someone with non-functional dyslexia being able to read a document they wrote the day before, without treatment, are also nil.

    Let's just say my brain loves to encrypt stuff. I am fighting all the time to prevent it having its way; the longer the block of text, the harder I have had to fight.

  55. Robert Pogson

    FT muttered about rebooting GNU/Linux systems…

    The difference is that thousands of machines running that other OS are compromised on the zeroth day while that’s rarely a concern for GNU/Linux. I have seen thousands of machines compromised by malware running that other OS but I have never seen a single malware running on GNU/Linux in a decade of running thousands of machines. Also, APT never demands a reboot as that other OS does. I can apt-get upgrade a working system without disturbing users and I don’t worry much about them using an obsolete image in RAM.

  56. Yonah

    oiaohm: “Flying Toaster so far you have shown no signs of knowing old english good enough to understand it.”

    That's rich coming from a guy who shows no signs of knowing modern English. Yeah, yeah, I know. Dyslexia, right? Except there are people who have overcome their own language barriers through time and effort. You're never too old for self-improvement. A little less time typing technical diatribes and a little more time working on your punctuation, grammar, and sentence structure could go a long way in both your personal and professional life.

  57. oiaohm

    Flying Toaster, don't read documentation that is 2 years out of date; otherwise you are a twit.
    http://en.wikipedia.org/wiki/Ksplice
    Ksplice added means to deal with semantic changes in 2009. Basically, an out-of-date link made you make a bogus statement, Flying Toaster, as you would have known if you had read Wikipedia and the matching references. There is not a single patch ksplice cannot apply to the kernel.

    “bonus points for updating X with X-dependent apps running without interruption” For non-OpenGL work, that is what thin clients are for. http://xpra.org/ If the X11 applications are xpra-wrapped on a thin terminal server, you can disconnect the client and reconnect it with no loss of applications.

    Serving to thin clients with applications xpra-wrapped, it is not an issue. Of course you want to notify the user that their application needs terminating.
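
    The xpra pattern is simple (a sketch; the display number and application are arbitrary examples):

    xpra start :100 --start-child=xterm   # wrap the app in its own session
    xpra attach ssh:user@server:100       # view it from the thin client
    # Detach or lose the link, then attach again later: the application
    # keeps running on the server the whole time.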

    Note that an X11 terminal server does not need to run an X11 server, only X11 client apps.

    I had rejected running the X11 server on the server, not X11 on the clients. There is a difference: xpra cannot be used safely with an X11 server on the server.

    Wayland solves a historic problem. Local applications on a local X11 server mean having to terminate everything; server to remote client, you don't have to terminate everything.

    Standalone desktops are the only place the X11 issue of not being able to restart the X11 server alone exists. Wayland will hit 1.0 before the middle of the year and cures this problem.

    What Robert Pogson is doing is xpra-compatible.

    Normal Linux desktop systems are not xpra-wrapped. Also, local OpenGL punches a hole in xpra wrapping.

    Also, you are forgetting that students should not be running applications at night. So at 1 am run a script terminating everything using old libraries. A simple cron-job solution; it can also be part of the zombie-process cleanup.

    By the way, you can put an if statement around grep, and you can use a more targeted grep than the one in my basic example. basename on the $i gives you the process number, which you feed into a simple kill command if you must get rid of a zero-day problem. Of course there are more polite solutions.

    grep exits with 0 if it found something and non-zero if it found nothing, so it fits a bash if. The script solution is not that complex really, Flying Toaster; basic bash will do it.
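
    A minimal sketch of that, terminate-only (a polite version would restart the owning service instead):

    #!/bin/bash
    # Kill every process still mapping a file that was deleted by an update.
    for i in /proc/[0-9]*; do
        if grep -q ' (deleted)$' "$i/maps" 2>/dev/null; then
            pid=$(basename "$i")    # /proc/1234 -> 1234
            echo "killing $pid: $(tr '\0' ' ' < "$i/cmdline")"
            kill "$pid"
        fi
    done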

    The part that is complex is matching a service to a process id, so you can restart the service instead of just terminating it. Of course, if you are running puppet or cfengine with service monitoring and the cleanup does bring a service down, the service will be restarted.

    Wayland and systemd I am looking forward to, because they make these zero-day problems a walk in the park.

    With systemd you look up the /proc/<pid>/cgroup the process is in, and it tells you the service or session it belongs to. From there, if it is a service you can order systemd to restart it, fixing the zero-day; the service gets the chance to shut down correctly. If it is a session, you can inform the user about the pending termination of that application, wait 30 minutes, and terminate, again fixing the zero-day.

    systemd is pure clean. “Linux does not need reboots” will become reality for all install types after Wayland and systemd.

    Linux servers today don't need reboots if managed correctly.

    I guess you were not expecting that one minorly complex script would fix the complete need to reboot a Linux server for anything bar a kernel update.

    Wayland removes me from having to ask local users on the server to log out; they would just notice some flicker. By the way, I am already running Wayland and have tried this.

    ksplice fixes the need to reboot to change the kernel.
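
    With the commercial Uptrack client that is two commands (a sketch; assumes an active Ksplice subscription):

    uptrack-upgrade -y   # splice all pending updates into the running kernel
    uptrack-show         # list the updates now live in memory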

    Needing to reboot a Linux box reflects a lack of skill or resources in the hands of the administrator running it.

    Flying Toaster, there are only limited cases where a Linux machine has to be rebooted on security grounds: distributions without ksplice, and people who don't have management scripts set up.

    No matter what, that tmrepository link of yours, Flying Toaster, is a bogus representation of the facts.

    The real case is far more limited, and systemd distributions will make it even more limited, since sorting out what is a service and what is in a user session will be simpler. Anything that is a service can straight up get a service-restart message; even cups copes with this, finishing the current print job and restarting without missing a beat.

    What excuse will there be for being in the situation that tmrepository's example gives? Nothing bar administrator laziness. But hey, what else should you expect from general Ubuntu users? The fault is administrator incompetence.

  58. Flying Toaster

    When I was a system admin at a school, I could verify that no one was active and reboot the terminal servers after hours.

    So now all of a sudden the whole “zero-day” thing becomes a non-issue. Interesting!

    That could lose someone’s unsaved work but no one ever complained over the years.

    But when Windows asks you to restart your PC? Time for some bleeding revenge!

  59. Flying Toaster

    Don’t forget that the application and the X server can be on two different machines.

    So, is this the computing equivalent of “you forgot Poland”? I honestly can’t tell.

    Sure, you can certainly have an X “server” running on your client and your X “client” (i.e. application) running on your server. But unless you can propose a method by which you can update either one of those without killing the application, I just don't understand what you are trying to suggest here.

    You can also use VNC to allow a connection to be interrupted and resumed.

    So here you have both the “server” and the “client” running on the same machine and a client connected to it via a VNC service. Again, if you can find a way to update the “server” and the “client” without restarting the whole thing, please by all means let me know.

    Oh, while we are at it, why not throw in ssh or SunRay to the mix? Sure we can find some relevance here and there, right?

  60. Robert Pogson

    FT wrote, “does not take all the steps necessary to update binary images both on disk and in memory. That means for each piece of obsolete binary left running by apt-get, you get a set of unpatched security vulnerabilities in your system. “

    Partly true. Apt-get does restart services that are patched. In any event, one is no worse off than with that other OS when you don't want a reboot during office hours. When I was a system admin at a school, I could verify that no one was active and reboot the terminal servers after hours. That could lose someone’s unsaved work but no one ever complained over the years. Interestingly, one can restart sshd after an update without breaking existing connections. Similarly, one can usually reload an ethernet driver without breaking the admin's SSH connection. No reboot is required to do that. Great fun.
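
    For example (a sketch; the driver name is an example, and reloading it drops the link briefly, which established TCP sessions normally ride out):

    /etc/init.d/ssh restart                # new listener; existing sessions
                                           # are separate processes, stay up
    modprobe -r e1000 && modprobe e1000    # swap the NIC driver in place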

  61. Robert Pogson

    FT wrote, “Now only if someone figures out a way to restart applications accordingly – and bonus points for updating X with X-dependent apps running without interruption…”

    Don’t forget that the application and the X server can be on two different machines. You can also use VNC to allow a connection to be interrupted and resumed. GNU/Linux gives a lot of flexibility.

  62. Flying Toaster

    @oiaohm

    Look – I do notice that you would like to think of yourself as more than what you really are, but my patience has already run too thin to keep dealing with your inane comebacks. Seriously, I mean it.

    It is part of the reason we want X11 dead. Wayland based session management can restart without terminating the applications. So avoiding the problem you just pointed to.

    Sure. We shall talk about it in detail when it's actually here. But, until then, anything about Wayland is irrelevant to this discussion.

    People who restart their system daily really don't need to worry about it, Flying Toaster.
    For us who don't.

    Again, paying a little attention to what’s being discussed would be nice.

    The funny part is that for 90 percent of all applications on Linux, X does not render the fonts any more.

    What oiaohm says 99% of the time is utter nonsense. 76% of people agree.

    See? I can make stuff up as I go along, too!

    The following script will detect the issue. A more finely targeted script can be made, and auto-restarting of affected processes can be scheduled, without requiring a reboot.

    OK…

    #!/bin/bash
    # List each process and any of its file mappings that point at deleted files.
    for i in /proc/[0-9]*; do
        echo "$i $(tr '\0' ' ' < "$i/cmdline")"
        grep ' (deleted)$' "$i/maps"
    done

    Nice little script there. Now if only someone figures out a way to restart applications accordingly – and bonus points for updating X with X-dependent apps running without interruption… Oh, I forgot you had conveniently rejected X as part of the picture. My bad! I should have taken more notice of the seriousness and pragmatism you show in setting up your scenarios!

    Windows also has the same issue after applying some updates, due to the fact that applications ship with their own copies of DLLs. So the insecure version of a DLL will still be around after the next two Windows reboots.

    Unsubstantiated bullcrap is unsubstantiated. But, hell, showing one’s working is for sissies.

    Of course, a desktop widget that tells you which running applications are using old libraries would be a nice feature.

    World peace and unicorns farting rainbows would also be nice.

    On Linux servers we don’t need to reboot, because most of the time we don’t care if X11 gets terminated without notice.

    Very relevant point you have got there. I am pretty sure Pogson is more than happy to serve Sugar Labs or whatever “educational” program to his “thin clients” without X or with X restarting spontaneously.

    You must be using ksplice to update the Linux kernel and init in place.

    ksplice, by design, “cannot handle semantic changes to data structures”. And that means for updates requiring such changes, you’ll still need to restart your system. And unlike you, I have legitimate sources to back up my claim.

    You must be running either a hardened version of apt/package management[...]

    Which exists only in your sheer imagination.

    [... O]r management scripts watching for out-of-date parts running, which update and then restart those processes.

    Don’t hesitate to show me the results once you are done writing them, then.

    With both of those, no reboot is required.

    And watch out for those unicorns and their amazing technicolor flatulence.

    Basically apt-get on hardened versions works.

    Or acid, depending on what recreational substance you are taking at the time.

    If you have not worked it out already, 99.9 percent of all so-called Linux faults documented on tmrepository are bogus.

    And 54% of badger roadkills agree.

    So that left-over crap is no reason to require reboots.

    Except when it is, which you have gleefully ignored regardless of significance.

    And unicorns passing multi-colored wind.

    Simple fact: if you were not a twit, I would not have walls-of-text answers for you.

    Which mostly consist of baseless nonsense made up on the spot. I must be a total twit to deserve such a pointless waste of my time.

  63. oiaohm

    #!/bin/bash
    # List each process and any of its file mappings that point at deleted files.
    for i in /proc/[0-9]*; do
        echo "$i $(tr '\0' ' ' < "$i/cmdline")"
        grep ' (deleted)$' "$i/maps"
    done

    Flying Toaster, this little script detects anything deleted from disk that is still mapped in memory. A script to detect and deal with the problem tmrepository calls “linuxdoesnotrequirereboots” is generic and distribution-neutral. With a few extra options it can be limited to libraries or executables, and it can terminate affected processes and restart them as required; one possible refinement is sketched below.
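
    A sketch of that refinement (not actual hardened-distribution code; run as root so all of /proc is readable):

    #!/bin/bash
    # Report only processes still holding deleted shared-library mappings,
    # one line per process, so restarts can be scheduled per service.
    for m in /proc/[0-9]*/maps; do
        pid=${m%/maps}; pid=${pid#/proc/}
        if grep -q '\.so[^ ]* (deleted)$' "$m" 2>/dev/null; then
            echo "$pid $(tr '\0' ' ' < "/proc/$pid/cmdline")"
        fi
    done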

    So that left-over crap is no reason to require reboots. Hardened distributions have these scripts embedded in the package management.

    Basically, everything listed on tmrepository as a Linux flaw is bogus crap to professional administrators.

    Simple fact: if you were not a twit, I would not have walls-of-text answers for you.

  64. Flying Toaster

    Oh, and of course, source-less nonsense à la oiaohm’s walls of text doesn’t count.

  65. Flying Toaster

    Flying Toaster, did you not notice my little bash script?

    The only thing I notice is the lack of Old English. Seriously, could you please go get help instead of having me scroll past your nonsensical walls of text every now and again?

    Now, does anyone have anything related to my point? No?

  66. oiaohm

    Of course, Flying Toaster is another tmrepository idiot.

    If you have not worked it out already, 99.9 percent of all so-called Linux faults documented on tmrepository are bogus. The ones that are not do not apply to hardened distributions that are meant to be run without reboots.

    Most are worked around by simple Linux server best practice that anyone who has done a Linux administration course has learnt.

    If you think you know something by reading tmrepository, you are a complete joke.

  67. oiaohm

    Flying Toaster, did you not notice my little bash script? With a little tweaking you can limit it to executables and libraries, detecting anything apt has removed from disk that is still in memory, as well as anything you have manually removed from disk that is still in memory.

    It finds the obsolete libraries in memory. There are Puppet and CFEngine scripts for detecting it.

    Yes, for libraries with known security flaws you can add auditing to detect whether they are in memory even if they have not been removed from disk, and shut down affected processes until apt updates them.
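
    A sketch of such an audit (the library name libfoo-1.2.3.so is hypothetical; substitute whatever file name a security advisory flags):

    #!/bin/bash
    # Flag processes that still map a named, known-vulnerable library,
    # whether or not the file has been deleted from disk. Run as root.
    BAD='libfoo-1\.2\.3\.so'
    for m in /proc/[0-9]*/maps; do
        pid=${m%/maps}; pid=${pid#/proc/}
        if grep -q "$BAD" "$m" 2>/dev/null; then
            echo "PID $pid still maps the flagged library: $(tr '\0' ' ' < "/proc/$pid/cmdline")"
        fi
    done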

    Flying Toaster, Windows of course has the same problem, but more of a mess.

    A reboot on Ubuntu/Debian does equal a clean system. That is not the case for Windows: after a reboot you can still be flaw-infested. Application-private libraries cause Windows a stack of security holes.

    Is it possible to add a check script to apt? In fact, yes it is. Hardened distributions will auto-terminate any application running libraries in a deleted state at the end of the apt process. This is considered too brutal for desktop users. If Robert Pogson had been using a hardened distribution, he would expect it to be cleaned up.
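
    One way such a hook could be wired into stock apt (a sketch under assumptions: the file and script names are hypothetical; DPkg::Post-Invoke is the apt hook point relied on):

    #!/bin/sh
    # Install an apt hook (run as root): after every dpkg run, call a local
    # cleanup script that scans /proc/*/maps for deleted libraries and
    # terminates the affected processes. The cleanup script itself is
    # hypothetical and would resemble the detection scripts above, plus kill.
    echo 'DPkg::Post-Invoke { "/usr/local/sbin/kill-deleted-libs || true"; };' \
        > /etc/apt/apt.conf.d/99kill-deleted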

    Linux with good management scripts does not need reboots, because the obsolete-library issue you are talking about is detectable and can be handled automatically if so wished.

    The disruption to desktop users is considered too great.

    Is an out-of-date binary in memory simple to detect under Linux? Yes, it is. Can you run a memory audit for expired libraries in use? Yes, you can.

    On Linux servers we don’t need to reboot, because most of the time we don’t care if X11 gets terminated without notice. The same goes for other services running old libraries.

    Can Robert Pogson alter his method and detect the issue fairly simply? Yes.

    How are you going to cope with the out-of-date libraries that hide in Program Files under Windows, Flying Toaster? You cannot update those without breaking the applications that use them.

    On Linux: some management training, and the problem is solved.

    On Windows you are screwed. Windows Update does not update everything, so it leaves insecure libraries all over the place. Windows will still be insecure after a reboot because of the insecure libraries left in the system.

    On Linux, after a reboot, even with an admin not doing their job, the security issues are fixed.

    Your example targeting one out-of-date library was poor thinking. You can make a script that will detect and terminate any application running obsolete libraries under Linux, Unix, and FreeBSD; this is a standard hardening alteration.

    “Linux doesn’t require reboots.” This is true, but a few conditions must be met:
    1. You must be using ksplice to update the Linux kernel and init in place (see the example below).
    2. You must be running either a hardened version of apt/package management, or management scripts watching for out-of-date parts running, which update and then restart those processes.

    With both of those, no reboot is required, since all security flaws will be patched while running.
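
    For condition 1, the Ksplice Uptrack client of that era was driven from the command line roughly like this (a sketch; the uptrack tools come with the Ksplice Uptrack service, not with stock Debian):

    # Show which Ksplice updates are applied to the running kernel:
    uptrack-show
    # Download and apply available rebootless kernel updates:
    uptrack-upgrade -y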

    There is no such thing as a no-reboot-required configuration for Windows.

    Ubuntu does not provide a no-reboot configuration. Debian hardened does.

    Basically apt-get on hardened versions works.

  68. Flying Toaster

    Correction:

    contrary of what Robert Pogson’s expectations -> contrary of Robert Pogson’s expectations

  69. Flying Toaster

    @lpbbear

    Yeah…..there is….both Clarence and Flying Twit are full of crap as are all the M$ Bun Buddies that hang out here.

    Nice retort. Why not just call whomever you disagree with “Nazis” and spare everyone from ever taking you seriously again? If you think I am really that “full of crap” and want to tell the whole world about it, then why not take the opportunity here and prove it? It’s easy: show your working, and hardly anyone with a brain cell will disagree. That is much more effective than the knee-jerk insults you have stuck to most of the time.

    Bullcrap aside, I do believe you are fully aware of the fact that the screenshot I’ve linked to is solid proof that apt-get, contrary to Robert Pogson’s expectations, simply does not take all the steps necessary to update binary images both on disk and in memory. That means for each piece of obsolete binary left running by apt-get, you get a set of unpatched security vulnerabilities in your system. What’s more, apt-get won’t necessarily tell you whether you have such binaries in memory, and you are thus left with the impression (as shown by Pogson’s enthusiasm) that all that has to be updated has been updated. Of course, you can by all means go and layer a warning mechanism like that of the Ubuntu/Debian Update Manager onto apt-get and hope that such a bolt-on will catch all the obsolete binaries in memory (it probably won’t). But then the likes of you won’t be so joyous about going around telling everyone that “Linux doesn’t require reboots” once reality hits them in the face.
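
    A quick way to see this for oneself (run as root; the package and process names are only examples):

    # Upgrade a library package while a long-running service keeps using it:
    apt-get install --only-upgrade libssl1.0.0
    # The old, now-deleted copy is still mapped by the running process:
    grep ' (deleted)$' /proc/$(pidof apache2 | awk '{print $1}')/maps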

    So, is this enough “crap” for you for the day?

  70. oiaohm

    Flying Toaster, so far you have shown no signs of knowing Old English well enough to understand it.

    You are an MS troll. Sorry, but it would end up mixing old and modern English; I do avoid that.

  71. Flying Toaster

    Yes, the correct fix is not at the apt level. The correct fix is at the user-information level, avoiding reboots and service disruption at bad times.

    I am sorry, but could you please repeat the whole thing in Old English? Many thanks!

  72. oiaohm

    Flying Toaster, of course in that example the user did not log out and log back in. X11 restarts each time you do that, so that would have cured the problem. No reboot is required to fix that issue.

    It is part of the reason we want X11 dead. Wayland-based session management can restart without terminating the applications, avoiding the problem you just pointed to.

    X11 is one of the few services that apt will not restart automatically in case of a dependency update.

    The funny part is that for 90 percent of all applications on Linux, X does not render fonts any more.

    People who restart their systems daily really don’t need to worry about it, Flying Toaster.
    For those of us who don’t, it does matter.

    The following script will detect the issue. A more finely targeted script can be made, and automatic restarting of affected processes can be scheduled, without requiring a reboot. This is more my lost-file finder; I rarely find old libraries on services. It is normally X11 or some other desktop application that could have been restarted after the update that is still holding on to old libraries.
    #!/bin/bash
    # List each process and any of its file mappings that point at deleted files.
    for i in /proc/[0-9]*; do
        echo "$i $(tr '\0' ' ' < "$i/cmdline")"
        grep ' (deleted)$' "$i/maps"
    done

    Linux system monitoring software out there also checks for the same issue and addresses it.
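
    For example, on Debian and Ubuntu the checkrestart tool shipped in the debian-goodies package does essentially what the script above does and suggests which services to restart:

    # Install the helper and list processes still using deleted files (as root):
    apt-get install debian-goodies
    checkrestart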

    Again, Flying Toaster is making a mountain out of a molehill. One simple monitoring script and the problem is detected in a generic form.

    Windows also has the same issue after applying some updates, due to the fact that applications ship with their own copies of DLLs. So the insecure version of a DLL will still be around after the next two Windows reboots.

    On Linux it is a minor, detectable annoyance that can be cured without a reboot.

    Of course, a desktop widget that tells you which running applications are using old libraries would be a nice feature, so you get a chance to save your work and restart the application when it suits you, curing the problem.

    Typical anti-Linux trolling, Flying Toaster: not understanding that for most usage the bug you are pointing to is a non-event.

    Apt has basically updated the system without disrupting work. For all the talk about Ubuntu being “for humans”, Ubuntu has never added a widget to inform users which applications are running old libraries.

    Yes, the correct fix is not at the apt level. The correct fix is at the user-information level, avoiding reboots and service disruption at bad times.

  73. lpbbear

    “There is no better way to say it.”

    Yeah…..there is….both Clarence and Flying Twit are full of crap as are all the M$ Bun Buddies that hang out here.

  74. kozmcrae

    Clarence, you are full of crap. There is no better way to say it.

  75. Clarence Moon

    BTW, the joke is just an audacious way of suggesting that people selectively prosecute social misbehavior just to make a point.

  76. Clarence Moon

    Your story lacks a certain amount of punch here, Mr. Pogson. The hero of the piece is satisfied with a word-processing system that cannot produce hard-copy output for his configuration. That puts him in a fairly low-need stratum and/or he has a high tolerance for malfunctions.

    He doesn’t speak of any advantage presented by Linux and OO either, other than to say that he has swapped one master for another. In any case, if you prorate the cost savings derived from using Linux in lieu of Windows over the 50 or so novels he has produced in that time period, the amount seems insignificant relative to the income produced and isn’t any sort of motivational factor.

    He is simply an oddball, just one of the roughly one-per-hundred computer users who use Linux.

    Out of curiosity, given his fumbling of the opportunity to comment on the suggested lack of “openness” of sharing e-books once purchased, I found almost every one of his titles readily available on Usenet in the .MOBI format supported by the Kindle. I didn’t look for the .EPUB Nook format, but I bet it is there, too. I wonder if he understands how that all works.

Leave a comment