Linus On (desk)Top

At DebConf14, Linus Torvalds had a Q&A session. One topic was GNU/Linux on the desktop. At one point, about 8:45 in, he details a major problem with GNU/Linux on the desktop from the viewpoint of developers dealing with a bunch of distros: essentially, a distro changes something and has to rebuild everything, or all applications may be broken. For any one distro this is not a problem, but a developer can’t produce binaries for all distros; it’s just impossible. He thinks Valve, distributing major applications (games) to GNU/Linux, will use huge statically linked binaries to overcome this, and that this will put pressure on all distros to come to some standards to help developers.

I agree with Linus that this is a big problem, but I think it’s mostly a problem for non-Free software, where the distros or OEMs have no access to source code. If the distros fix the issue in the source code for their repository, the users don’t see the problem and the developer is not closely involved in the work. Clearly, OEMs and distros can do the work if they have the source code of applications. Since most users of PCs can function perfectly well with any particular distro without importing a lot of “foreign” applications, I think this problem is back a level or two from the key issues, which I rank from most important to least important:

Problem 1: Getting OEMs to ship GNU/Linux readily, without buyers having to beg.
My take: Still a problem for consumers buying directly, but not for wholesale buyers like governments or large organizations, because OEMs need only make a few changes and copy images; not a lot of work. Distros like Linpus and Ubuntu have made big inroads here.

Problem 2: Getting retailers to offer retail shelf-space (some aren’t even giving retail shelf-space to that other OS here…).
My take: Still a problem, even for small cheap computers, because legacy PCs are not getting much space and consumers are not lining up demanding choice. When the market matures in a few years, things could change. GNU/Linux is better for OEMs, retailers and consumers in many situations. Retailers tend to like selling higher-priced items with higher markups, when they should consider that they could sell more units at a lower price and make more money. With FLOSS, the retailer gets to keep more of the money the consumer pays.

Problem 3: Broken ABIs in distros, making it difficult for developers to get applications into distros or onto OEMs’ machines.
My take: Linus’s issue. Important for non-Free software, because distros/OEMs can’t get the source code. Less important for FLOSS, because the packager in the distro does the work, not the original developer; the developer needs only to convince the distro to package the software. (Linus does mention that this is not practical even for a FLOSS application with a small number of users, e.g. his diving programme, Subsurface. ~58 minutes in.)

Problem 4: Consumers’ lack of familiarity.
My take: In my experience, consumers/students/non-geeks are readily convinced by a demonstration/advertisement, particularly when they see improved performance and/or lower price; something OEMs and retailers can do.

I have built a few applications from source code, and the diversity of the library-space is a problem: years of constant upgrades with incompatible versions of dozens of libraries. At some point, the ABIs should be frozen. After all these years, why has that not been done? Perhaps instead of employing developers to constantly update libraries in source code, we should employ them to create applications using the libraries as they are. Just fix the bugs. Stop throwing more features into the bins.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology.

50 Responses to Linus On (desk)Top

  1. oiaohm says:

    That Exploit Guy
    http://yarchive.net/comp/linux/kernel_fp.html
kernel_fpu_begin() and kernel_fpu_end()? Yes, I do know of them. It is totally a bad idea to use them; if you are running any driver that uses them, you will be paying for it. FPU code in user space is preemptible; in kernel space it is not. The price of doing FPU work in kernel space is blocked interrupt processing. In fact, entering and leaving FPU state in kernel space is in many cases more expensive than a context switch to userspace once you get into real-world tests, because of the size of the disruption it triggers. Also, using the FPU in kernel space is not portable to a lot of non-x86 architectures, and even some clone x86 chips do not support the FPU from ring 0.

AES might be fully integer, but the accelerator for it in an Intel CPU is in the FPU, not the CPU core. Just because something is integer in design does not mean CPU designers don’t put its processing in the FPU. So every time you access AES-accelerated processing from kernel space, you are kicking the system where it hurts.

Netfilter filter callbacks can register to userspace as well as kernel space.

    https://www.kernel.org/doc/htmldocs/uio-howto/uio_pci_generic.html

You did not read on to section 4 of the UIO manual. Yes, the generics: a PCI device made after the year 2002 does not require a custom kernel driver, because the generic driver is already written. It allows you to register everything about the device from user-space and have it redirected to the user-space driver. So a lot of hardware does not require extra kernel-mode drivers.

UIO is fun. You can write a driver in kernel space, or you can use the generics to set up the same thing from user-space. Yes, UIO user-space can unlink the kernel driver and connect up the user-space one instead.

If your kernel-space driver is written only using UIO, your driver will build on every kernel without much work at all. UIO is a stable API with backward compatibility.

By the way, the UIO documentation is itself horribly written.
The drivers uio_pdrv, uio_pdrv_genirq and uio_dmem_genirq are in fact usable for user-space init of a device by UIO without having a kernel driver: you create a UIO structure, pass it to the UIO core driver from userspace, and it works. This is one of the problems: the kernel.org documentation suggests you have to have a kernel driver to do the init, but you don’t. All the calls the UIO kernel driver example makes are in fact exposed to user-space. You only have to write a custom driver if the hardware you are dealing with does actions in breach of communication standards, and it is extremely rare for a device to break communication standards.

All the generic UIO drivers are ABI-stable to user space. In fact, the UIO documentation does not cover using UIO to its maximum.

UIO generics allow Linux to have micro-kernel-style drivers, where the kernel has only small generic systems to init the drivers. Something most people are not aware of: x86 and many other archs are in fact designed to support a micro-kernel OS. Once registered from ring 0 (or its equal) in the hardware, interrupts from devices can directly wake up user-space, skipping kernel space.

If your application is after running a device as fast as possible, UIO is what you use: less switching. Your user-space code can directly pick up the interrupts as well as directly send messages to the device without having to go to kernel space at all. Yes, UIO sets the device up; after that, user-space is in control of that device.

That Exploit Guy, the fastest drivers are in the majority user space. The reason for wanting kernel space is not performance; performance is a myth. The correct reasons for wanting kernel space are resource sharing between applications, and security.

The fun part here is that resource sharing sometimes makes zero difference. A lot of people look at zfs-fuse compared to kernel-mode ZFS and use that as an example of “hey, kernel space is faster”. This is fine until you wake up to the fact that zfs-fuse is from 2005, and a few years later FUSE had its API/ABI extended to boost performance by a factor of 6, which the old zfs-fuse driver does not support. Bad kernel APIs have historically had an equal effect on kernel-mode drivers. A modern Linux userspace driver with modern Linux application interfaces mostly runs rings around its kernel-mode equals.

The point of the KMSCON work on Linux, removing the console code from kernel space, is the fact that the console code will run faster in user-space. There is a lot of code in the Linux kernel that in reality has no good reason to be in kernel space, yet it is. Why is the console code in kernel space? Because when Linux was first designed, it was purely monolithic.

The thing that That Exploit Guy and others like him forget is that OS design does not demand that monolithic and micro-kernel designs cannot coexist in a single OS. If the Linux kernel were purely monolithic, you would have to have kernel-mode drivers. UIO makes the Linux kernel part micro-kernel. Something micro-kernels have is extremely fast drivers in operation, but they suffer at times from bottlenecks in resource sharing and device init. Work on sharing resources between user applications on 4096-core systems has mostly fixed up the resource-sharing problem.

The Linux kernel is an impure beast. It can have a mix of user-space and kernel-space drivers. The issue is we don’t have many user-space drivers.

That Exploit Guy, the answer to the Linux driver problem is not kernel mode. It’s like AMD: they have found with their graphics cards that the majority of the trade secrets are in the user space that drives the video card. Most of what is in their kernel driver is a closed-standard IPC and firmware blobs.

That Exploit Guy, think about it for one minute: how foolish would it be to ask for a kernel-mode driver if the OS were Minix, which is a micro-kernel? 100 percent foolish, right?

With Linux you have the choice. Write your driver monolithic-style, which means you have to put up with kernel changes and their being a complete pain; or write your driver micro-kernel-style and share your source code with VxWorks and the like. The performance difference between the two methods: at worst the user-space driver is marginally slower than the kernel driver; at best the user-space driver is 4 times faster.

The kernel-mode performance advantage would be lost by one kernel-mode driver using the FPU. A little thing here: a kernel-mode driver using the FPU does not stop an interrupt being raised into a user-space driver; it only stops an interrupt being raised into another kernel-mode driver. Yes, a leftover from when x86 was being designed for micro-kernels. So yes, the user-mode driver has more stable performance than the kernel-mode driver. User-mode drivers can safely process in parallel; the monolithic advantage disappears on multi-core processors.

This is just one of those classic cases where the reality is backwards from what common sense would first seem to suggest.

That Exploit Guy, if this were still the year 2000, when the majority of CPUs were single-core, demanding kernel-space drivers would make sense. The problem is that it’s 2014, so it does not make sense at all.

  2. That Exploit Guy says:

    Quit “moving the goalposts”.

    No one is moving the goalpost.
    You yourself said:

    TEG wrote, “No. The Linux network stack is mostly in kernel space, and that includes the components for TCP/IP.”
    That doesn’t contradict what oiaohm wrote, “A large section of the Linux kernel network stack in fact ends up running in userspace”
    Both could be true since “large” and “mostly” are not exclusive.

    It sure sucks to be the slime-ball weasel that likes defending lying sacks of crap, doesn’t it?

    No one said the code was communicating but that it was in the network stack and in the kernel

    If it’s not kernel code, then what business does it have in the kernel at all?
    Think before you open your trap, weasel.

  3. TEG wrote, “It has nothing to do with actual communications.”

    Quit “moving the goalposts”. No one said the code was communicating but that it was in the network stack and in the kernel…

  4. That Exploit Guy says:

    A lot of the networking stack is certainly aware of userspace. I don’t know how much runs in userspace but even if it’s just the API…

    *Sigh*
    “Regulatory Domain” is just a fancy name for the frequency and power output settings needed for your WiFi adapter in order to be compliant with the telecommunication regulations of a given jurisdiction. The Linux kernel is designed to hold only one “regulatory domain” at a time, and if it becomes necessary to switch “regulatory domain”, the new “regulatory domain” is then fetched from a user-space daemon known as a “Central Regulatory Domain Agent”, replacing the one in the kernel.
    It has nothing to do with actual communications.

  5. That Exploit Guy says:

    @ Peter Dolding

    Its all todo with memory placement.

    No. It’s all about knowing the right tools for the job, which you don’t even have a first clue of.

    You make a clone driver with a different memory signature guess what kernel space the hardware device can end up reject it.

    Heh… “Memory signature”.

    Odd strange asm instruction sets with no reference in intel can in fact be like hardware instructions for devices.

    “Odd strange”? That’s a bit redundant, isn’t it?
“asm instruction sets with no reference in intel”? You mean “Intel’s official documentation”? That’s hilarious considering that the GNU people would have to first come up with an assembler that uses these fabled “instruction sets” in order for them to be present in the binary, and in order to come up with the assembler, the GNU people would have to understand these “instruction sets” they were implementing.
    Logic is not your strong suit, is it?

    So That Exploit Guy has never heard of NUSE.

    It’s a kernel infrastructure that doesn’t actually exist. Why should anyone pay attention to it?

    (NUSE) leads to (mTCP).

    Cool story. I still don’t understand why I should give a toss about this novel, experimental thing called “mTCP”, though.

    (Netfilter) modules can be userspace or kernel space.

    From the front page:
    “netfilter is a set of hooks inside the Linux kernel that allows kernel modules to register callback functions with the network stack. A registered callback function is then called back for every packet that traverses the respective hook within the network stack.” (emphasis mine)

    Because there are a lot of driver parts that have been proven to be many times faster in user-space.

    According to yet another piece of statistics that you have pulled out of your backside, that is.

    There are a set of accelerators in Intel and AMD CPU that assist with checksum and encryption they are in the FPU that you cannot use from Linux Kernel Space.

    1) Encryption? You do realise the AES cipher is entirely integer-based, don’t you?
    2) So out of all sketchy, second hand information you have gathered about the Linux kernel, not a single piece tells you about kernel_fpu_begin() and kernel_fpu_end()?

    UIO is what you need to look up. That Exploit Guy you will find Linux includes a huge stack of ABI/API particularly for allowing devices drivers to be developed in userspace…

    You did realise those four names you grabbed were from a header file meant for kernel modules, right?

    UIO was created so solve the issue and embedded developers use it a lot.

UIO didn’t solve any problem regarding kernel-space ABI/API instability. It’s just a dubious, one-size-fits-all attempt to move driver code partially to user mode. At the end of the day, you will still need to have a kernel-mode component for your driver, as pointed out by the latest guide.

    I know that Microsoft documentation tells you its the same complier to build drivers

    No, you don’t. Stop making stuff up.

  6. TEG wrote, “No. The Linux network stack is mostly in kernel space, and that includes the components for TCP/IP.”

    That doesn’t contradict what oiaohm wrote, “A large section of the Linux kernel network stack in fact ends up running in userspace”

    Both could be true since “large” and “mostly” are not exclusive.

    e.g. A search on the net branch finds:
pogson@beast:~/Downloads/linux/linux-3.16/net$ grep userspace `find . -wholename "*.c"` | wc
    235   2357  18490
pogson@beast:~/Downloads/linux/linux-3.16/net$ find . -wholename "*.c" | wc
   1188   1188  26524

    A lot of the networking stack is certainly aware of userspace. I don’t know how much runs in userspace but even if it’s just the API…

    e.g. linux-3.16/Documentation/networking/regulatory.txt: “Due to the dynamic nature of regulatory domains we keep them in userspace and provide a framework for userspace to upload to the kernel one regulatory domain to be used as the central core regulatory domain all wireless devices should adhere to.”

    e.g. linux-3.16/net/wireless/reg.c:“/*
    * This lets us keep regulatory code which is updated on a regulatory
    * basis in userspace.
    */
    static int call_crda(const char *alpha2)
    {
    char country[12];

  7. oiaohm says:

    That Exploit Guy there is a big difference dissembling kernel space code. Its not as simple as just getting a debugger to work there. There are many hardware protections can be applied from kernel space drivers that user space drivers cannot have. Its all todo with memory placement. You make a clone driver with a different memory signature guess what kernel space the hardware device can end up reject it. Odd strange asm instruction sets with no reference in intel can in fact be like hardware instructions for devices.

    So That Exploit Guy has never heard of NUSE. http://www.eecs.berkeley.edu/~sangjin/2013/01/14/NUSE.html Yes 2013 invention that leads to http://shader.kaist.edu/mtcp/ . Sorry Linux kernel TCP/IP stack can be in kernel space or userspace. http://www.netfilter.org modules can be userspace or kernel space. In fact quite a few modules end up running userspace. The more you follow the Linux networking stack the more you find examples of usermode drivers, parts and so on. So current day Android devices may not be using kernel mode TCP/IP at all. Shocking usermode TCP/IP can be faster and use less power.

    Things like performance are overrated anyway
    That Exploit Guy I fully agree. Because there are a lot of driver parts that have been proven to be many times faster in user-space. Yet for some reason they are still in Linux has kernel mode drivers. There is test after test proving this. This is one of these super myths that keeps on being raised. UIO work proves over and over again its a myth. Kernel Space in Linux is in most cases slower. Big reasons. In kernel space in Linux you cannot use the FPU in your CPU. There are a set of accelerators in Intel and AMD CPU that assist with checksum and encryption they are in the FPU that you cannot use from Linux Kernel Space. So yes Linux kernel mode drivers switch to user-mode to access CPU features. Shock horror right.

That Exploit Guy, developers like Greg Kroah-Hartman have worked on many things. Do you know what CUSE and BUSE are? http://bryanpendleton.blogspot.com.au/2011/02/fuse-cuse-and-uio.html About time you started reading.

    UIO is what you need to look up. That Exploit Guy you will find Linux includes a huge stack of ABI/API particularly for allowing devices drivers to be developed in userspace. UIO to control the device itself. CUSE, BUSE and so on to provide interfaces that to applications look to be coming from kernel space even that the driver is fully user-space.

A lot of the work on UIO was done by Greg Kroah-Hartman.

The four authors of the UIO code, with the dates they started working on it:
    Copyright(C) 2005, Benedikt Spranger
    Copyright(C) 2005, Thomas Gleixner
    Copyright(C) 2006, Hans J. Koch
    Copyright(C) 2006, Greg Kroah-Hartman

Then read the date of the manifesto you love quoting: “Fri, 03 Dec 2004”. Sorry, That Exploit Guy, the manifesto is out of date and no longer aligns with how the internals of the Linux kernel are today. True in 2004, but we are 10 years later and things have changed. UIO did not exist in 2004. UIO was created so solve the issue and embedded developers use it a lot.

This is the problem, That Exploit Guy: in 2006 Greg Kroah-Hartman started working on user-mode driver support after he gave up on kernel-mode support ever being workable. If you are going to keep up this argument that kernel-mode drivers are required, please find something post-2008 stating so, based on the 2008-or-later Linux kernel design. You are going to find no such thing exists.

UIO was the thing I could not remember in my last post; I wrote “something” instead.

That Exploit Guy, you should know the Linux world is known for out-of-date documentation lying around; why did you think the kernel docs would be any different?

That Exploit Guy I know that Microsoft documentation tells you its the same complier to build drivers, but it is like gcc on the Linux command line with the means to choose between versions. You will find that building a driver in Visual Studio will let bugs slip through that building a normal application would have picked up. It is only with Windows 7 and newer that Microsoft starts with the so-called same compiler. Testing tells you that once the compiler sees the WDK it changes how it processes completely; so it is two compilers pretending to be one. It is something to be very aware of when building Windows drivers: the compiler is going to behave differently and so miss different programming errors. Newer feature updates appear in the user-space compiler mode of the Visual Studio compiler first.

  8. That Exploit Guy says:

    If complier is going to make stuff with different memory alignments it cannot be in the same code segment.

    This line of gibberish is so delightfully nonsensical I think I am going to make it into a bumper sticker just so I can share it with the rest of the world.

    A large section of the Linux kernel network stack in fact ends up running in userspace

    Ha… No. The Linux network stack is mostly in kernel space, and that includes the components for TCP/IP.

    The answer is yes just Usermode not Kernel mode.

    Why not? Things like performance are overrated anyway.

    Some of Windows issues with people inserting usb keys and breaching the OS… Stable Kernel ABI or Kernel Mode Security.

    You see… Something, something jetpack and then Windows is exploded.
    I am Peter Dolding.

    I have brought in the example of the valve runtime this full kills the userspace application arguement.

    No, you haven’t. All you have brought up is a bunch of nonsense that is easily refutable with less than one minute of Google search.

    What is the main reason why closed source driver makers don’t like being forced on-to the user space code. Debugging and disassemble tools work very well against user space code.

    For the record, there is no difference between disassembling user space code and disassembling kernel space code – it’s just a matter of converting the machine code back to the corresponding assembly language. Heck, you can even do that through a web app.
    And this is to put aside that reverse-engineering is usually against the license agreement of the software.

  9. That Exploit Guy says:

    @Yet another lying sack of crap begging for attention

    DrLoser every time visual studio releases it releases with different C libraries and many other parts. So Windows applications ship with their own versions of these Libraries… Sorry Windows is also only about 80 percent stable ABI as well in user space.

    Let’s forget for a minute that you don’t have a first clue about the C Runtime library. Instead, I would like to know how on earth you arrived at this fascinating number of “80%”. I mean, clearly, you pulled it out of your backside, but I am curious enough to want to know how exactly you pulled it out of your backside.

    Greg Kroah-Hartman is in fact linked to fuse, cuse and buse something.

No. Kroah explicitly states that his manifesto concerns “kernel drivers”. FUSE stands for “Filesystem in Userspace” and is an interface that allows filesystem components – especially non-GPL filesystem components (e.g. ZFS) – to run in, needless to say, user space.

    Something That Exploit Guy would not also notice that you don’t need a driver to pass a PCI/USB… devices into a virtual machine under Linux this functionality is build in.

    And who was talking about PCI-to-USB bridges? Are you on drugs or something?

    Linux userspace is full able to control connected devices over 90 percent of all devices connected directly.

    Again with the numbers… You sure love making up statistics, don’t you?
    Also, “connected directly”? I am glad that you were clear on that fact or else I would probably think of devices that were connected telepathically.

    User space compatibility of Linux kernel is also driver compatibility.
Again, Kroah explicitly states “kernel driver”, not “user-space driver”. The fact that you don’t have a first clue about what you have read really speaks volumes about how much you comprehend the subject matter, doesn’t it?

    When someone says releasing a driver for Linux requires you to make a kernel mode driver they need to take a serous look at closed source Android video drivers.

    I have looked, and the only thing that tells me is that you are full of crap.

    Microsoft releases a special compiler just for making drivers. In fact you will find that its old and misses many coding defects the newer visual studio compliers pick up.

    DrLoser is wrong about you – you are apparently too lazy to even Google search anything.

  10. oiaohm says:

By the way, the reason device makers claim they cannot ship user-mode drivers for Linux is the runtime issue: if they will not ship a runtime with the driver, then the driver does not work with many different distributions. Funnily enough, look at Windows USB drivers: hello, bundled Visual Studio libraries. This is the big problem. Everything exists on Linux for great closed-source application and driver support; in reality it is just not used.

  11. oiaohm says:

    DrLoser every time visual studio releases it releases with different C libraries and many other parts. So Windows applications ship with their own versions of these Libraries. Sorry Windows is also only about 80 percent stable ABI as well in user space. This is the reality Microsoft does not provide stable user-space either. Only a subsection of Windows Libraries are ABI stable. Windows applications don’t have problems with this due to shipping own runtime. Over the life of Windows OS newer applications are bundled with newer libraries because they are not compatible with the old Libraries released with the Old version of Windows this include Microsoft equal to Libc.

Sorry, DrLoser, you are making a big mistake. The Valve Steam runtime works. Valve packages up a runtime for Windows, Linux and OS X including close to all the same parts. The solution to the Linux distribution problem for closed-source application makers is to bundle your own runtime. Bundling your own runtime is treating Linux like Windows.

    Greg Kroah-Hartman is in fact linked to fuse, cuse and buse something. That Exploit Guy is complete wrong embedded developers use fuse, cuse and buse to run their drivers in user-space. Something That Exploit Guy would not also notice that you don’t need a driver to pass a PCI/USB… devices into a virtual machine under Linux this functionality is build in. Linux userspace is full able to control connected devices over 90 percent of all devices connected directly. So unless its a real core part required in booting the driver is not in fact required in kernel mode. All those real core parts are already released open source.

User space compatibility of Linux kernel is also driver compatibility. When someone says releasing a driver for Linux requires you to make a kernel mode driver they need to take a serous look at closed source Android video drivers. The majority are built using user space only, without a single line of kernel-space code. Yes, they are using the same interfaces qemu, VirtualBox and so on use to take over full control of a device, to interface with their device from userspace.

    Greg Kroah-Hartman is right why the Linux kernel is so changing. Microsoft releases a special compiler just for making drivers. In fact you will find that its old and misses many coding defects the newer visual studio compliers pick up.

    So its pick binary compatibility or up-to date complier. You cannot have both. If complier is going to make stuff with different memory alignments it cannot be in the same code segment. So binary compatibility drivers with changing complier always equals userspace drivers.

    This is the big problem is their binary driver support for Linux. The answer is yes just Usermode not Kernel mode. DrLoser can you explain why when binary driver support exists in Linux that is complier independent why does the Linux kernel need have a in kernel space abi. Please don’t say speed there are many examples where userspace driver in fact ends up faster. A large section of the Linux kernel network stack in fact ends up running in userspace memory even if it a kernel mode driver due to reduction in transfer actions. This is not the only group of drivers like this.

Some of Windows issues with people inserting usb keys and breaching the OS have been because old support for old USB interfaces has not been removed, allowing flawed drivers to be loaded in kernel mode, even though every new Windows USB driver is a user-space driver. Stable Kernel ABI or Kernel Mode Security. You cannot have both. User-mode programs, and this includes user-mode drivers, that are security issues can be protectively wrapped because they run at a lower ring level. The theory behind a micro-kernel is valid. Linux and Windows are both hybrid-kernel OSs, with drivers in both user space and kernel space. Yes, user-space drivers under Windows can also be built with any compiler you like. For some strange reason developers are highly willing to release user-mode drivers for Windows, yet turn around and refuse to release user-space drivers for Linux and OS X. (User mode and user space are the same thing; Microsoft just gives it a different name from everyone else.)

    That Exploit Guy and DrLoser on this topic neither of you in fact have a leg to stand on. I have brought in the example of the valve runtime this full kills the userspace application arguement. Solution exists we just need more todo it. Now the user-space drivers we need more hardware makers to release them. If you don’t want to display your driver source code Linux does not force you to but you will be in user space. What is the main reason why closed source driver makers don’t like being forced on-to the user space code. Debugging and disassemble tools work very well against user space code. If someone was releasing a driver for Minix or Hurd….(some of the high secure certified OS’s as well) it would have to be user-space code those OS solutions don’t allow kernel space code at all for drivers.

That Exploit Guy and DrLoser, please stop bringing up these two dead-horse arguments. There are more than enough examples between Android and Valve to disprove both.

  12. Finalzone says:

Linux as a desktop system is coming along nicely with the advent of systemd, Wayland and PulseAudio. It was about time for Linux to get some much-needed core parts and leave the legacy behind. systemd was made to fully exploit kernel features while still retaining compatibility with the old sysvinit. Wayland can run X-based applications via XWayland.

  13. DrLoser says:

    Nothing, except it’s not possible without ~a million developers agreeing to that. When have a million people agreed on anything? So, this idea is not feasible until developers produce the perfect universe of libraries and leave them alone forever.

    I take it, Robert, that you admit that you are completely wrong on this point.

    Go on, admit it. It will do you good.

  14. DrLoser says:

    I can do no better than to recommend you read “That Exploit Guy’s” complete refutation of your points, Robert. (Ignore the insults. That’s what I do when some shabby little pretend google-kiddie like oiaohm insults me. I recommend this attitude: it is invigorating.)

    To add emphasis to TEG’s points, even though emphasis is quite clearly unnecessary:

    1) User space and kernel space are obviously different things. Or, to quote your latest position:

    Linus has raked all kinds of people over the coals for breaking “user-space”. He has no control over what writers of libraries and applications do but he is a benevolent tyrant when it comes to Linux not messing with people.

    Except for “messing with people” via ABI instability in the kernel. That, obviously, is Sacred and Profane and we should congratulate him for that attitude.

    2) It is the Best of All Possible Worlds, or, to quote you, Robert:

    Linus Torvalds and Greg K-H are professional programmers at the top of their game.

    Interesting. How do you know? What other “professional programmers” have you compared them against? Is there a lifetime curve on this, as with, say mathematicians? (Commonly supposed to be past their best at the age of 30 … which is a long time back for both Linus and Greggie.)

    You don’t have any objective measure at all to back up this preposterous statement, do you? All you are saying here is: “These two people I have never met work on the Linux kernel and associated libraries. ME LIKE THEM!”

    The product is not shoddy.

    Before I go further into this, Robert, let me compare Linux with a domestic water supply.

    Your (completely unsupported) case is that everybody, the whole wide world over, requires Linux as something that is fundamental to their life.

    Fair enough. I would possibly claim that “a computing device with an OS” is closer to the mark, but let’s just assume this means Linux.

    Your inference (on my very generous assumptions) that Linux is not shoddy is a completely false inference. Let me go back to the domestic water supply thing:

    Before the Broad Street cholera outbreak in 1854, everybody on earth used a shoddy domestic water supply. Because, they had to.

    I submit that it is possible to be both ubiquitous and still shoddy. (Which is your theoretical case against M$.) I am turning your anti-M$ bias — well-informed, ill-informed, anecdotal, unexamined, thoroughly well-examined and up to date, who cares? The same logical argument persists — against you.

    … with more than 1000 million satisfied users and high ratings for code-quality

    The high ratings for code quality come from Coverity, I believe. And about 90% of those 1000 million “satisfied users” are actually people who own an Android phone.

    Which is fine. But it’s a separate market.

    And also, what gives you the right to proclaim that 1000 million customers are “satisfied?”

    Did you ask each and every one of them individually? Of course you didn’t.

    It must be a mirage, then. It’s clearly not the case that I have ever seen a customer review of an Android phone that gives it less than four stars. And obviously that dissatisfaction would have nothing at all to do with the OS and the features and the reliability. It’s almost universally the colour scheme that gets them.

    PAH!

  15. That Exploit Guy says:

    Linus has raked all kinds of people over the coals for breaking “user-space”

    As you said, “user space”. Greg Kroah-Hartman has even issued a manifesto stating there will never be a stable ABI or API for anything that runs within the kernel. In fact, it pretty much demands that you release your driver code under GPLv2 and submit it for incorporation into the kernel source tree. Think about this: as a device manufacturer, you are forced to relinquish control of half of your every product (the driver) and effectively allow Torvalds et al. to dictate how long it will remain supported. Your brand name will also be tarnished by every screw-up they make (and blame you for) during the course of their “maintaining” your driver. This is a terrible business proposition however you look at it.

    It’s shoddy software with built-in re-re-reboots.

    Didn’t your parents teach you to be honest (or did they teach you to be the lying sack of crap that you apparently are)? The press release explicitly states that the 2-hour reboot timer applies to the preview release of Windows 8.1 only. Heck, even the message you (deliberately) omitted stated the following:

    Your license to use this evaluation version of Windows will expire soon. When it expires, your PC will restart every two hours. Get the latest version of Windows to avoid these interruptions. (emphasis mine)

    By the way, my Vista box has been running non-stop for 2947 hours (~4 months). “Re-re-reboot” my backside.

  16. DrLoser ranted, “1) Given the explicit refusal of Linus to implement ABI compatibility in the kernel.
    2) Given the explicit claim by Linus that Linux user space should, in fact, respect ABI compatibility (not API compatibility)”

    Those “givens” are not given at all. Linus has raked all kinds of people over the coals for breaking “user-space”. He has no control over what writers of libraries and applications do but he is a benevolent tyrant when it comes to Linux not messing with people.

    e.g. LKML: “Mauro, SHUT THE $!@!# UP!
    It’s a bug alright – in the kernel. How long have you been a maintainer? And you *still* haven’t learnt the first rule of kernel maintenance?
    If a change results in user programs breaking, it’s a bug in the kernel. We never EVER blame the user programs. How hard can this be to understand?”

    DrLoser also wrote, “Don’t you think that you have spent the last ten years defending a shoddy bit of IT that is nothing more than a plaything for Linus Torvalds and Greg K-H and a handful of others?”

    There’s no reason for me to defend FLOSS and GNU/Linux. It stands on its own merits. I am attacking that other OS and its “partners”. Linus Torvalds and Greg K-H are professional programmers at the top of their game. The product is not shoddy with more than 1000 million satisfied users and high ratings for code-quality. I’m using Greg’s releases on all my PCs. I just switched The Little Woman’s PC to Debian Jessie which uses 3.14 too. That dist-upgrade was very smooth with just a few annoying questions about accepting a new configuration or keeping the old. Everything worked immediately on her ancient hardware. That’s not shoddy. With that other OS I would be finding drivers not available for “8.x” and adding a malware-scanner and re-re-rebooting endlessly. That’s shoddy.

    Just seen from M$, “If you are still running Windows 8.1 Preview, you’ll see the following notification every time you sign in. After January 15, 2014, your PC will also restart every 2 hours and you will lose any unsaved data.”

    It’s shoddy software with built-in re-re-reboots. HAHAHA! ROFL! GASP! Thank you for making my day, M$!

  17. DrLoser says:

    Further to that little venture into set theory, Robert (and I apologise to Dougman for bringing advanced concepts like set theory into the conversation. Just keep banging the rocks together, Dougie):

    1) Given the explicit refusal of Linus to implement ABI compatibility in the kernel.
    2) Given the explicit claim by Linus that Linux user space should, in fact, respect ABI compatibility (not API compatibility)
    3) Given that even API compatibility has never once been on the table in user land.
    4) Given the very obvious fact that, even if it were to be mandated, there’s not a single downstream Distro that would pay any attention to it
    5) And given the fact that, for any Distro Functional Set {ω}, it will only take a single “upgrade” of that set {ω} to completely break API compatibility, either backwards or forwards …

    Don’t you think that you have spent the last ten years defending a shoddy bit of IT that is nothing more than a plaything for Linus Torvalds and Greg K-H and a handful of others?

    No? Well, that’s the way I see it. There are side-effects, of course. It’s been tremendously successful at cannibalising Unix. And I’m not even sure that Google anticipated the success of Android. (Any more than I’m sure that Nokia anticipated the success of Symbian, for what that’s worth.)

    But you’ve still been taken for a ride on a nasty shoddy bit of IT regression (originally a regression from Minix: latterly a regression from either BSD or Solaris, take your pick).

    You really think these people care about you? Because … they don’t. You don’t shuck up the money … they don’t care.

  18. DrLoser says:

    Nothing, except it’s not possible without ~a million developers agreeing to that. When have a million people agreed on anything? So, this idea is not feasible until developers produce the perfect universe of libraries and leave them alone forever.

    An interesting and intelligent guess.

    But completely wrong.

    Here’s the canonical form of Forward Compatibility, Robert. You have a set of functions, call it {X, Y, Z}. (This particular set has a cardinality of 3, but you’re welcome to pick your own set.) The important thing about this set is not that the set itself is immutable — otherwise you would never be able to add more functionality — but that each individual member of the set is immutable.

    Which is to say: if X, Y, and Z exist in version 1.x, with signatures X’, Y’, and Z’, and implicit functional contracts X”, Y”, and Z” … and X, Y, and Z exist in version 2.x (derivatives left to the student) … then you have forward compatibility.

    You might be surprised to learn, Robert, that this does not take “~a million developers.” It doesn’t even take “a commercial organisation.”

    All it takes is a minimal amount of common sense, a degree of caring about the customer, and a lack of clumsy laziness.

    For further details on this blatantly obvious fact of IT life, I refer you to Jamie Zawinski and his problems with Open[E]GL, related to porting his Dali Clock application to the Mac. JWZ is a God in these matters, and I am but a mere mortal.

    But if you show enough genuine interest, I can dig out the details for you.

  19. DrLoser says:

    And then again, of course, it’s always possible to find an amateur moron with no standing whatsoever, which is to say oiaohm. And even if oiaohm staggered towards any sort of standing whatsoever (which he has demonstrated, repeatedly, that he has not), he’s more than capable of throwing it all away again:

    You are looking at 80 percent+ of the Linux libraries with stable ABI standards.

    With stable ABIs, oiaohm, you are looking at the IT equivalent of virginity.

    It’s not possible to be an 80% virgin.

  20. DrLoser says:

    M$ even has rules against backwards compatibility, like that secure boot rule, eh? That means “8” can’t run on old hardware.

    You are, of course, a professional scientist of good standing, Robert. Therefore I am sure you have tested this theory out on appropriate hardware … appropriate hardware, in this case, being absolutely any Intel x86 laptop or desktop or server you happen to have lying around, or perhaps a machine you can borrow from a neighbour.

    Your extensive testing may very well have been based upon a flawed premise of some sort, or possibly slightly wonky hardware.

    I’ve peer-reviewed your entirely reasonable hypothesis on a six year old HP 1000 notebook (it came with Suse 10.2 SLDS installed), and you know what? The little darling installed and ran Windows 8 with absolutely no problems whatsoever.

    Perhaps Hewlett Packard got onto the UEFI gravy train about three or four years earlier than everybody else? I mean, there’s no conceivable reason why M$ managed to accidentally shoe-horn this backward compatibility in otherwise, is there?

  21. oiaohm says:

    DrLoser, the party lying here is you:
    “Forwards compatibility is fairly much always true under Linux.”

    My statement is correct: “fairly much always true” is the state of it. You are looking at 80 percent or more of the Linux libraries having stable ABI standards; the remaining fraction can be a little trouble. There are still some applications from 2004 on my testing Debian box, old applications that have never required chroot wrapping.

    “Fairly much always true” is all you can claim about Windows as well. Remember how many Windows applications you have to run in XP compatibility mode on Windows 7 Pro, or in Windows’s equivalent of a chroot, to make them work. The reality is that the chroot card gets played under Windows too, just behind a nicer interface.

    DrLoser, you have basically never done a proper comparison of what Linux provides, how Windows applications ship, and what Windows provides; otherwise you would see that the difference between Linux and Windows in application support comes down purely to application packaging. Valve did that comparison.

  22. oiaohm says:

    DrLoser, Valve does not use chroot. Valve uses the loader. The Steam interface program can run a per-application loader, thereby controlling the libraries the application sees.

    http://wiki.gentoo.org/wiki/Steam#Steam_runtime Yes, the Steam program exploits the fact that you can push library paths onto the applications it has installed. (And yes, you can tell it not to.)

    --inhibit-cache      do not use /etc/ld.so.cache
    --library-path PATH  use PATH instead of the environment variable LD_LIBRARY_PATH

    http://www.xonotic.org/ exploits this.
    These are not the only ways to override paths; there is also the RPATH option baked into the binary itself.

    Forward compatibility (older OS, newer application) does not exist even on Windows without developers bundling libraries. Everything needed to provide forwards and backwards compatibility for applications already exists on Linux.

    Backwards compatibility (older applications on a newer OS) exists through library standards and shipped libraries. It also requires some library bundling for libraries that don’t follow the standards; yes, libraries that don’t follow standards exist on Windows too. The RPATH option in a Linux binary can be used to provide a fixed path to the runtime libraries.

    The Linux filesystem standard has the /opt directory, where any closed-source vendor can officially place a runtime. Have you noticed the directories ending in “Common” under “Program Files” on Windows, from major application vendors? What are those directories? Runtimes, to cope with Windows.

    Like it or not, Valve has achieved forwards and backwards compatibility on Linux. How did they do it? They treated Linux the way you treat Windows: problem solved. All the Linux world needs is for closed-source vendors to treat Linux identically to Windows when it comes to application packaging.

    DrLoser, Docker setups run hundreds of different Linux runtimes from different time frames on the same kernel without any issues at all. Even PulseAudio, as bad as it sounds: the first version of the PulseAudio interface library works with the latest PulseAudio server, and the same goes for X11. That is the point: the issues are not that large.

    If a Linux application is built using the loader frameworks, you will find it works perfectly fine across many distributions.

    The chroot solution is a workaround for developers who don’t put in the effort, as Valve has, to use a distribution-neutral runtime setup. Valve controls the updating of its own runtime, so its applications are not broken when distributions move theirs. Yes, Valve’s problem is 100 percent solved.

  23. DrLoser wrote, “What’s wrong with building actual, proper, forward compatibility into the stuff in the first place?”

    Nothing, except it’s not possible without ~a million developers agreeing to that. When have a million people agreed on anything? So, this idea is not feasible until developers produce the perfect universe of libraries and leave them alone forever. It’s different with that other OS. There’s a single orchestrator calling the tune, but M$ has different problems, like the guys in charge being salesmen, constantly overruling good developers with crazy ideas about monopoly and stifling competition. M$ doesn’t have backwards compatibility, for instance, whereas because the source is out there, GNU/Linux does. I had a 15-year-old version of RedHat running on a machine at the last place I worked because nothing else would. M$ even has rules against backwards compatibility, like that secure boot rule, eh? That means “8” can’t run on old hardware.

  24. DrLoser says:

    Amongst the usual welter of gibberish from oiaohm, I spy the following complete and utter lie:

    kurkosdr and DrLoser get this wrong all the time. Stable ABI is forward compatibility only. Meaning old applications will run on newer. Forwards compatibility is fairly much always true under Linux.

    I suspect that the “fairly much always” gives the lie away as a lie. And also neither Kurkos nor I specified [g|eg|.]libc as the sole problem here.

    But it is a problem, isn’t it? Do you seriously believe that (absent any other problems at all) a Linux runtime will function properly if it is built against 2.10 (just after Windows 7 launched) as opposed to 2.20 (the most recent)?

    Only a complete nutter would suggest that this is the case. Oh look, we have a complete nutter making that case … Welcome, oiaohm!

    And the solution? Use chroot!

    Why is the proposed solution always “use chroot,” as in the Valve “solution?”

    What’s wrong with building actual, proper, forward compatibility into the stuff in the first place?

    Ah, I know. That would require extensive testing, wouldn’t it? Not just shoddy poking around and hoping that the wider world won’t notice.

  25. oiaohm wrote about http://snapshot.debian.org/.

    Wow! I know I’m alive when I still can learn something about plants, trees and Debian here…
    “snapshot.debian.org
    The snapshot archive is a wayback machine that allows access to old packages based on dates and version numbers. It consists of all past and current packages the Debian archive provides.
    The ability to install packages and view source code from any given date can be very helpful to developers and users. It provides a valuable resource for tracking down when regressions were introduced, or for providing a specific environment that a particular application may require to run. The snapshot archive is accessible like any normal apt repository, allowing it to be easily used by all.
    The Debian Project would like to thank the Wellcome Trust Sanger Institute and LeaseWeb Netherlands B.V. for providing hardware and hosting. We would also like to thank the Electrical and Computer Engineering department at the University of British Columbia, Canada and Nordic Gaming for providing hardware/hosting and hardware, respectively, in the past.”

    The world can and does make its own software without M$ and “partners”. Some code. Some check everything. Others package software and others donate resources for the cause. I like that. Made my day.

    ” Snapshot used to run on two machines hosted at and provided by the Wellcome Trust Sanger Institute and by the Electrical and Computer Engineering department at the University of British Columbia, Canada. A few months ago, the machine at UBC, named stabile.debian.org, started to die. Since it was approaching its storage capacity limits anyway, we began looking for a new second home for snapshot, and LeaseWeb offered! Providing snapshot from two different places (now Sanger and LeaseWeb) allows us to survive temporary and not-so-temporary issues that affect any single site.
    Currently, snapshot consists of 24 terabytes of data in about 15 million files, and it appears to be growing at a rate of approximately 5 terabytes a year (or about 10 megabytes per minute).”

    When oiaohm wrote 24TB, I was skeptical because my own partial mirrors are just a few GB, but there it is from the source. I think this qualifies Debian as “Big Data”, eh?

  26. DrLoser wrote, “It’s $5 or you’re out of the game”.

    Poor Google. They’re losers according to DrLoser.

  27. oiaohm says:

    http://snapshot.debian.org/
    The big problem in the Linux world is that commercial software houses fail to understand how small they are. That is what causes the huge problems.

    Debian, an open-source project, has build and QA servers. Note what I said: servers. How much did Debian have to pay for those servers? Zero dollars. Someone places and pays for an order to HP, IBM, Dell…, then decides not to take delivery; once the refund period has expired, the hardware goes to open source free of charge. Huge, powerful data centres are built this way, so the distributions have huge amounts of processing power. Running a full QA test every 24 hours is no problem for Debian; they have the CPU power for it. A closed-source software maker QA-testing its application against Debian, on the other hand, does not have those resources. The same applies to Red Hat and so on.

    Major Linux distributions have bigger and better server resources than any major commercial software house, including Microsoft. So why do distributions break closed-source applications? The truth is that, to them, a closed-source software company is what an ant is to a human. If you are a closed-source company, the best way not to be run over is to provide your own runtime. In other words, follow Valve’s example.

    Something kurkosdr does not understand is that Linux distributions have a limitation: they rely on mirror sites, and mirror sites worry about the amount of disc space a distribution will take up. A full Debian mirror measures over 20 terabytes, so of course very few servers want to host it. The reason it is so huge is the snapshots, which allow old binaries on newer distributions and the reverse.

    Please take note of the snapshots site. Snapshots mean that if you know the exact date when an application last worked, you can build a chroot/Docker install of the Debian of that exact date to run the application.
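    A minimal sketch of that idea follows. snapshot.debian.org really does key its archives by timestamps of the form YYYYMMDDTHHMMSSZ; the particular date, suite name and application path here are assumptions for illustration, and the debootstrap step needs root and network access:

```shell
# Hypothetical sketch: pin an old application to the Debian archive of the
# date it last worked, via snapshot.debian.org.
DATE="20100501T000000Z"   # assumed "known good" date for the application
SNAP="http://snapshot.debian.org/archive/debian/$DATE/"

# With root and network access, bootstrap that frozen archive into a
# chroot and run the old binary inside it (suite name is illustrative):
#   sudo debootstrap lenny ./lenny-chroot "$SNAP"
#   sudo chroot ./lenny-chroot /opt/old-application

echo "$SNAP"
```

    Because the snapshot archive behaves like any normal apt repository, the same URL can also be dropped into the chroot’s sources.list to keep it frozen at that date.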

    This is where the backwards- and forwards-compatibility argument normally falls apart, if we are talking about Debian. On Debian, your compatibility mode is done by using chroot; the lack of tools to set this up easily is your only real issue. Yes, you can freeze in time and run old binaries on Debian, no problem. Yet how many closed-source binaries state which dated version of Debian they were tested against?

    Not all Linux distributions are created equal. Ubuntu and Fedora, for instance, do not have Debian’s snapshots-of-the-distribution framework. That is what annoys me. Debian users like Robert and me don’t have the same issues as Ubuntu, Fedora or Red Hat users; the differences between distributions are quite large. I have no problem running older binaries on Debian: it’s just chroot plus the correct snapshot.

  28. oiaohm says:

    DrLoser, the Wine project uses the same definition as you do; a “platinum” rating is the objective of Wine. But unlike you, the Wine project is well aware that Windows applications have some very stupid issues and don’t always run out of the box. The path length an application supports under Windows varies: 128 characters for some very bad coders, 250–260 when using some sections of the Windows ABI, and a few thousand when using other sections of it.

    DrLoser, glibc does not use any syscall to the Linux kernel that is not on the stable list. Even klibc, part of kernel.org, only uses stable syscalls.

    kurkosdr and DrLoser get this wrong all the time. A stable ABI gives you forward compatibility only, meaning old applications will run on newer systems. Forward compatibility is fairly much always true under Linux; yes, loader tweaks are required at times. A stable ABI does not prevent new functions from being added to it, and that is where the problem comes in when running against older systems.

    Some old games from Loki Games, circa 2000, you run today with a shell script and a bundle of libraries; the shell script instructs the loader to load different paths.

    The reality is that .desktop files could be used to provide these overrides (currently they are not).

    kurkosdr, there is absolutely no need to statically link. Run /lib/ld-linux.so.2 and notice its options:
    --inhibit-cache      do not use /etc/ld.so.cache
    --library-path PATH  use PATH instead of the environment variable LD_LIBRARY_PATH

    These two options fairly much mean you can install your own runtime wherever you like. Moreover, ld-linux.so.2 is itself a static binary with no other dependencies, so yes, you can ship a program under Linux with your own libc.

    There are some games for Linux that exploit exactly this feature. That is why they ship as a tar.gz file that you extract; you click on a .desktop or .sh file and they work.

    If you are selling applications and don’t want to ship a runtime, you can list on Steam.

    The point is that nothing kurkosdr and DrLoser are talking about has to be an issue for commercial development. Valve takes the simple way out: they define a runtime, all the applications in the Steam store for Linux run on that runtime, and the Steam runtime is distribution-neutral.

    If you don’t want to debug your application every time a distribution changes, ship your own runtime. How do you avoid having to debug your application every time Windows receives an update? You ship your own runtime. Treat Linux like Windows when packaging up applications and the problems shrink dramatically.

    Valve has solved the QA problem DrLoser is talking about, so stop talking about it, DrLoser, and start telling others to follow Valve’s lead. Valve themselves were quite shocked: they defined a runtime, and distributions added conformance tests to make sure what Valve defined stays unbroken. The FOSS world needs that kind of guidance. It is something Adobe never grasped: during the audio mess, Linux developers told Adobe over and over to pick one sound system and just use it, and conformance would follow. The requirements of applications affect how distributions are made.

  29. DrLoser says:

    Speaking of that $5, I’d seriously suggest that you fire up a virtual XP or something (I think you can find the bits and pieces for free) and stump up $40 for Scrivener, the non-Beta version.

    It’s $40 well spent. You will have fun with it.

  30. DrLoser says:

    If I polished it up it could be commercial software and I might not write it to make $ from the software. I could sell diet-books or cemetery plots or whatever and the software is some inducement to encourage customers to business.

    Please read what your correspondents write, Robert. This corresponds precisely to my first “exception” point.

    Now, first of all, this doesn’t work for everybody, or even a large proportion of “commercial software providers.” It works for my company, because the “Unique Selling Point” is that our software designs roofs that only feature our nail plates. That’s a small, small part of the roof, but it enables us to, effectively, sell something embedded in the software.

    Same thing with Candy Crush or World of Mincemeat or whatever. It’s embedded in the software.

    I don’t wish to discourage you from selling diet books or cemetery plots on the back of your free software, Robert, but even you would admit that, if you are one of say ten thousand dieticians or morticians before you write the software and give it away for $0, then you are still one of say ten thousand dieticians or morticians after you do so.

    Perhaps you gained publicity. Fine and good. But that doesn’t make what you wrote “commercial software.” It just makes it “marketing collateral.” A wholly different animal.

    Oh, and remember that the whole idea behind me offering a definition of “commercial software,” which I think stands up rather well to your bog-standard fall-back of quoting Websters or whatever that was, was a refutation of your, to me entirely absurd and indefensible, claim that providers of “commercial software” should give it away for free on the Linux desktop.

    Because, y’know, even if you are correct in your assumptions and I am asinine in mine … mine happen to be much, much closer to the way that people who produce “commercial software” think.

    It’s $5 or you’re out of the game, Robert. You can complain all you like, but you need to stump up that $5 or do without the “commercial software.”

  31. DrLoser says:

    We are talking about legacy applications mostly. The current stuff has current libraries readily available.

    As Kurkos points out, the average Joe in the street defines “an application that doesn’t work when I hit the big red button saying go” as broken.

    To the vast majority of the population of the Earth, broken is what it is. Not only do they not want to build it from scratch, or even apt-get the dependencies … they don’t even want to trawl down that awful long list of VLC versions that you kindly provided.

    If it doesn’t work when you press the big red button marked “Go,” then it’s broken.

    Or, in your terms, a legacy application.

    How long is it before “the current stuff” becomes “a legacy application?” Now, to be fair, Kurkos, you, and I know “how long,” as of the present state of Linux distros. It’s basically the life of an LTS version, which is basically two years. After which, to a customer expecting to press the “big red button…” it’s broken.

    Not to mention the obvious fact that a lot of “current stuff” is broken in the first place, because the task of doing proper QA on any Linux desktop application inside two years is hopeless. Eventually the beta testers, oops, users will form a giant helpful loving peaceful community and un-break the thing …

    … and then it becomes unbroken but according to your definition legacy software, and the Wheel of Misfortune spins once more.

    I can quite see how people can make money out of this nonsense. But it isn’t by selling “commercial software.”

  32. DrLoser wrote, “you provide “commercial software” in the expectation of getting direct monetary gain back, with no significant extra effort in training, etc, etc.”

    Nope. You provide software for any number of good reasons. I’m currently writing a little meal planner using a database from the USDA, MySQL and Pascal. If I polished it up it could be commercial software, and I might not write it to make $ from the software. I could sell diet-books or cemetery plots or whatever, with the software as an inducement to encourage customers to do business. The idea is that none of the meal-planning software does exactly what I want, so I write my own. I could do that whether or not I am in business. If I’m in business, it is by definition commercial software.
    Commercial \Com*mer”cial\, a. [Cf. F. commercial.]
    Of or pertaining to commerce; carrying on or occupied with commerce or trade; mercantile; as, commercial advantages; commercial relations. "Princely commercial houses."
    --Macaulay.
    [1913 Webster]

  33. DrLoser says:

    My working definition of “commercial software,” you will recall, Robert, is that the software in question (indeed, absolutely anything) becomes “commercial” the moment somebody pays you even a notional $5 for it.

    It doesn’t have to be “successful commercial software.” You might go bankrupt selling it at $5 a unit. You might never sell more than one unit. You might even become insolvent because you have to service the debt on the $10 million it cost you to build that $5 McGuffin … which is sunk cost, so it doesn’t affect the argument.

    I’m claiming that if you can sell a software package (loosely defined as anything from “Hello World!” upwards) for $5, you have “commercial software” on your hands. If you can’t, then (with two classes of exceptions I will come to), you do not.

    Note that this definition of “commercial software” is value-free. I’m not saying it’s good. I’m not saying it’s bad. I’m just suggesting that an actual definition helps.

    Now, to turn to your four supposed counters:

    M$ would gladly take $1million instead of $2million if it meant keeping another company away from GNU/Linux.

    This fits my definition. $1 million is easily a larger sum than $5. As a matter of fact, if you’d quoted $5 instead of $1 million, it would still fit my definition, wouldn’t it?

    Why did you bother to bring this up, Robert? It’s completely irrelevant.

    That single developer may have written that application to demonstrate his prowess to some company for which he wishes to work. The company might give him a big “signing bonus” and share options and a fine annual salary instead of a licensing fee.

    By “big,” I assume you mean “more than $5.” Which fits my definition of “commercial software.” In this instance, you are taking it upon yourself to define the market, in this case a single company. Which is fine. I didn’t define the size of the market: it’s up to the producer of “commercial software” to do that. One sale might be enough. It’s still a sale.

    Why did you bother to bring this up, Robert? It’s completely irrelevant.

    There’s nothing more commercial than Linux yet, as far as I know Linus and friends do not sell licences. They sell interoperability, hardware independence, training and membership in the club.

    Finally, Robert, we can agree that this is a relevant point. One that, I submit, defeats your purpose. Linus (and a sinister bunch of unnamed “friends”) provide “interoperability, hardware independence, and membership in the club,” but they don’t sell it as a commercial proposition, do they? Not for $5. Not for 5¢.

    Consequently they are not selling “commercial software,” by my definition.

    Now, obviously somebody makes money out of it, and here we come to your category of “training” (and support and presumably documentation and obviously hosting).

    But that is not money being made by a seller of “commercial software.” A good thing, a bad thing, make your own judgement, but do not distort the basis of my argument, please.

    Now, the two large exceptions to my definition of “commercial software:”

    1) The company I work for gives its software away for free. Yes, that’s right! You can build a wooden roof, or an infinite number of wooden roofs, of whatever dimensions you like using our software, and we will charge you $0!
    How do we do it? Well, we’re not the Bank of Change, which sometimes appears to be your approach to monetizing Linux. We manufacture nail plates. The more roofs you build with our software, the more nail plates we sell.

    2) Applications like “Candy Crush” are given away for free. (Or for a notional fee, say 99¢, which presumably covers the cost of putting them in an App Store.) Again, these apps are not working on the “Bank of Change” principle. They’re working on the principle that there are tens of thousands of idiots, whoops, interested consumers who will pony up real cash for “lives” or “virtual samurai swords” or whatever.

    But, regardless, you provide “commercial software” in the expectation of getting direct monetary gain back, with no significant extra effort in training, etc, etc.

    So, DrLoser denies reality while claiming his dark corner of the universe is the whole thing…

    One of us is sitting in a dark corner of the universe, Robert.

    But it isn’t me.

  34. ram says:

    I agree, and that is usually what I do. Big media apps are on a big machine and it all sits in memory. Other machines are just X-terminals (to use the archaic phrase) to them.

  35. ram wrote, ” The solution, implemented by projects such as Cinelerra, is to incorporate the actual source from projects it is dependent on and doing a static build. With the amount of memory in today’s machines it is not a bad approach.”

    Yes. Shared libraries do save a lot of RAM, but past the application level there is so much diversity that waste happens anyway. E.g. I try to avoid certain KDE/GNOME applications because I chose XFCE4, but many good applications link to all that stuff I didn’t want… some applications pull in 20–30 libraries from KDE/GNOME. Still, with 4 GB RAM and 512 GB storage, I haven’t quite run out of space. I would likely be tight if every application were statically linked, so that has to be an exceptional solution, not the standard route. Statically linked applications are also really sluggish to load because several hundred MB may have to be pulled in from storage rather than just a few. With GNU/Linux I think the best way to do it is to run the statically linked applications from terminal servers and use them via X, so the application is usually in RAM 24×7.

  36. Deaf Spy wrote, “Yeah, we know. YouDon’tNeedThat(tm).”

    I and thousands of teachers and students with whom I’ve worked, and my family. It’s more than just pogson. Distros work for lots of people; that’s why they exist and persist. IBM found that around 80% of usage cases for PCs were in the easy-to-migrate category for just this reason: many folks use only a few applications. Outside of business and large organizations, about half of PC-owners are mostly browsing. When the PC was a new concept I kept wondering why ordinary folks even needed them. That was in the dial-up era… When fast Internet became prevalent, the PC took off to the extent that Gates changed everything about the OS. If M$ hadn’t, GNU/Linux would have been king.

  37. kurkosdr wrote, “The virtual machine solution is the most cumbersome you could think of. You have two OSes to manage and upgrade.”

    Actually not. In GNU/Linux, with X, one doesn’t need a display or X server on the system running the application, only on the client device, so the virtual machine needs only a minimal OS, the application and its dependencies. Most applications don’t depend on X being local. For a single application the overhead is just 100–200 MB of RAM extra. Since it’s a virtual machine, the security implications of not updating the software for some applications might be acceptable because there are other layers of security. We are talking mostly about legacy applications; the current stuff has current libraries readily available.
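    The thin-VM arrangement above leans on X’s network transparency, which a couple of OpenSSH commands can sketch. The host name `mediabox` and the application chosen are hypothetical; only the last line actually runs anything:

```shell
# X is network-transparent: the application (an X "client") can run in a VM
# or on a server while the display (the X "server") sits on the user's device.
# With OpenSSH, forwarding the connection is a single flag:
#
#   ssh -X user@mediabox vlc   # hypothetical: vlc runs on mediabox, draws locally
#
# -X merely enables ForwardX11 in the client configuration, which -G prints
# after resolving, without making a connection:
ssh -G -X localhost | grep -i forwardx11
```

    Since the VM’s guest OS never needs its own X server, video drivers or desktop environment, the footprint stays close to the application plus its libraries.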

    kurkosdr wrote, of building vlc from source-code with static links, “a cumbersome process not explained anywhere in the manual.”

    I guess they’ve done the build-for-almost-everyone route:

    Official Downloads of VLC media player
    Windows
    Get VLC for Windows
    Mac OS X
    Get VLC for Mac OS X
    Sources
    You can also directly get the source code.
    GNU/Linux
    Get VLC for Debian GNU/Linux
    Get VLC for Ubuntu
    Get VLC for Mint
    Get VLC for openSUSE
    Get VLC for Gentoo Linux
    Get VLC for Fedora
    Get VLC for Arch Linux
    Get VLC for Slackware Linux
    Get VLC for Mandriva Linux
    Get VLC for ALT Linux
    Get VLC for Red Hat Enterprise Linux
    Other Systems
    Get VLC for FreeBSD
    Get VLC for NetBSD
    Get VLC for OpenBSD
    Get VLC for Solaris
    Get VLC for Android
    Get VLC for iOS
    Get VLC for QNX
    Get VLC for Syllable
    Get VLC for OS/2”

    but they actually have instructions in the source tar-ball:
    The file INSTALL says to ask configure for options; scanning its output for “static” gives:
    “./configure --help | grep static
    --enable-static[=PKGS]    build static libraries [default=no]
    --with-mad-tree=PATH      mad tree for static linking
    --with-faad-tree=PATH     faad tree for static linking
    --with-a52-tree=PATH      a52dec tree for static linking
    --enable-x26410b          H264 10-bit encoding support with static libx264 (default disabled)
    --with-x26410b-tree=PATH  H264 10-bit encoding module with libx264 (static linking)
    --with-x264-tree=PATH     x264 tree for static linking”

    You will need the source-trees of all those libraries and must give the paths if they are not in some default location. I haven’t tried building it statically but I have done the default build to get some feature …

  38. kurkosdr says:

    @Pog

    The virtual machine solution is the most cumbersome you could think of. You have two OSes to manage and upgrade. You have to set the virtual machine up, and most distro manuals don’t explain that compat issues with apps can be solved with a VM and how to do that. It’s something only businesses should use for that one precious app, it’s not something for home users.

    Statically linking all unstable dependencies into the app is an okay solution. Too bad nobody in linuxland does it, because it’s not a “pure” solution. Even apps like VLC, which statically link dependencies in the Windows version, don’t do so for the Linux version. You have to compile the app yourself. Again, a cumbersome process not explained anywhere in the manual.

    Meanwhile in Windows, here are the instructions: If an app is compatible with Vista or above, it will work.

    And you can stay in your old version for many years, because new desktop apps will run on it.

  39. Deaf Spy says:

    Most ordinary folks just browsing

    Yeah, we know. YouDon’tNeedThat(tm).

  40. ram says:

    It would be nice if the major libraries had more stable APIs. This is particularly a problem with media and graphics applications. The solution, implemented by projects such as Cinelerra, is to incorporate the actual source of the projects it depends on and do a static build. With the amount of memory in today’s machines it is not a bad approach.

  41. Agent Smith wrote, “I’m a fan and supporter of Free Software, but we must be honest here: it’s not an universal panacea.”

    Most ordinary folks, just browsing and such, can do with 100% FLOSS, eliminating all kinds of problems that M$ and “partners” inflict: EULAs, re-re-reboots, malware, stickers, “proof of licensing”, etc. With FLOSS, if you have the software, you have the licence and you can run it any way you want. That’s a panacea for most of us who just want to be free to do our thing.

  42. Agent Smith says:

    Well, anything Adobe makes, except PDF. Even Flash is commercial software, and a BLOB Adobe won’t let anyone compile. I’m a fan and supporter of Free Software, but we must be honest here: it’s not a universal panacea. Some things commercial software does better. OpenOffice is Apache-licensed, Chromium is GPL, I guess, VLC is GPL. Even Android apps are, in their majority, not Free Software; only the apps in F-Droid have different licences. And Android is free only in the price tag; it’s ad-supported or charged for. So, no, let’s not fool ourselves here.

  43. kurkosdr wrote, “The major problem with every Linux distro is not so much compatibility with windows, but the fact said distro is not compatible with itself, aka previous versions of itself.”

    Again, that’s not a huge problem for the end-user as long as the distro and/or the developer make the code run on that distro. Backwards compatibility is not an issue if one keeps the application in a virtual machine, say, and slides that virtual machine around to every distro/machine as needed. That’s fluffy, but for important applications it would certainly work and may well be the way the application would run these days anyway. Lots of organizations run one virtual machine per user, and that’s not much different from running one virtual machine per application with X. In large organizations, lots of terminal servers each run a single application. There are efficiencies that way beyond keeping the code/distribution small.

    It’s not a real problem in that one can find a solution, because the source code and even old packages are available: just retain every library an application requires in addition to the application itself.

  44. DrLoser wrote, ” The moment you know you have “commercial software” is when a single neighbour (could be 18,000 miles away, we live in a globalised world) pays you some specified fee, say $5, for that software.”

    Nope. That is one business-model of an infinite number of business-models.

    e.g. Even M$ might give a licence for Application X as an inducement to some folks to roll out “8.1” over XP, especially if 10K copies are involved. M$ would gladly take $1million instead of $2million if it meant keeping another company away from GNU/Linux. What do they care? From M$’s point of view, it’s “free money”. They don’t have to do any work for it after the first few $billion roll in.

    e.g. That single developer may have written that application to demonstrate his prowess to some company for which he wishes to work. The company might give him a big “signing bonus” and share options and a fine annual salary instead of a licensing fee.
    They could support X’s further development and distribute X for purposes other than selling licences, like running their own or clients’ businesses.

    e.g. There’s nothing more commercial than Linux yet, as far as I know Linus and friends do not sell licences. They sell interoperability, hardware independence, training and membership in the club.

    So, DrLoser denies reality while claiming his dark corner of the universe is the whole thing…

  45. kurkosdr says:

    “And “commercial software” is not something that the FLOSS culture welcomes”

    I like how FOSSies like to boast about RedHat “selling” free software.

    No they don’t sell free software, they give the software for free and sell “ancillary services”, because the software is available for everyone to download and recompile.

    But what happens when someone else sells the same ancillary services for half the price and none of your R&D costs?

  46. DrLoser says:

    Define commercial software.

    OK, I’ll try. I don’t want to put words into Agent Smith’s mouth, and he may be driving at a different argument, but from my own point of view, here goes.

    We’ll do this by induction, shall we?

    Here I am. I am a one-person company. It doesn’t actually matter what legal standing I have for the purpose of this argument. I am one, and I produce “Software Application X.”

    I now wish to distribute “Software Package X.” If I were you, Robert, I would do so for free. Problem solved! This is therefore not “commercial software.” You, as Robert Pogson Creator Of X, might indeed make money by supporting that software, by selling documentation of that software, or even (horrors!) by allowing other people to post grotty little advertising banners over Software Package X.

    But it wouldn’t be “commercial software.” The moment you know you have “commercial software” is when a single neighbour (could be 18,000 miles away, we live in a globalised world) pays you some specified fee, say $5, for that software.

    One seller, one buyer, one monetary transaction. That’s all it takes for something to be “commercial software.”

    By induction, you can multiply up the sellers, the buyers, and the monetary transactions. You are still dealing with “commercial software.”

    And “commercial software” is not something that the FLOSS culture welcomes, or indeed has room for.

  47. DrLoser says:

    I have built a few applications from source code and it is a problem how diverse the library-space is: years of constant upgrades with incompatible versions of dozens of libraries.

    And there goes the “Everything I need, I get via apt-get” argument, Robert.

    Which applications, btw? Were they domain-specific, small market applications? Or, say, MySQL or Office or GiMP?

    I’d say there’s an interesting discussion to be had here.

    At some point, the ABIs should be frozen.

    It was my understanding that Linus mandates “stable ABIs” outside the kernel, Robert. Inside the kernel (and for anything that directly accesses the kernel, i.e. drivers and glibc and other low-level libraries) anything goes.

    I don’t believe that ABI instability affects Linux user-land applications, however … possibly via a requirement for a specific glibc, which is a second-order problem.

    Perhaps you can shed some light on my apparently erroneous belief that ABI instability has nothing to do with the shoddy and pointless nature of most Linux desktop applications, Robert?

    (You’ve done a magnificent job on the site revamp, btw. I passed the first page through the W3C validator, and it came up with only three — inconsequential — errors.)

    (Just goes to show what you can do when you have the freedom to examine, modify, etc …!)

  48. kurkosdr says:

    So, Linus finally admits that backwards compatibility (aka stable ABIs but most significantly APIs) is important.

    We “haters” have been telling you that since day 1: the major problem with every Linux distro is not so much compatibility with Windows, but the fact that said distro is not compatible with itself, aka previous versions of itself. Compare and contrast with Windows, where all Vista apps and drivers run on the latest version of Windows. Marketing desktop Linux to the non-geek populace risks high return rates from broken upgrades (*cough* dellbuntu debacle *cough*).

    And as you say, FOSS software with very few devs and users suffers from the problem too, not just proprietary software.

    And then there is the problem of new apps requiring you to have the latest stable/LTS. While all latest desktop windows apps run on Vista and even XP.

    Don’t expect things to get better. The X.org and PulseAudio guys don’t consider this to be a problem. “It’s your fault for not using only FOSS, and if it’s FOSS, you should be able to fix the compat breakages yourself”.

  49. Agent_Smith wrote, “no commercial software vendor / creator will put source code in the hands of the distros. That won’t happen. “

    Let’s see: OpenOffice.org? Check! Firefox? Check! Chromium browser? Check! Hey! Those don’t fit the pattern…

    Define commercial software. VLC, which has ~100 million users, is certainly commercial software in that it is a commodity widely used in IT, yet the source code is available to distros, so what the Heck is Agent_Smith going on about??? He must be thinking of M$ and “partners”, you know, those guys who want the world to keep account of how many copies exist and to be paid for each one. They are so old-school. That’s not, largely, how things are done today. e.g. see Android/Linux. The world can and does make its own software and doesn’t need to pay per copy to get it.

  50. Agent_Smith says:

    The whole problem is a perspective issue: GNU/Linux was never created to be a commercial success. I explain: one software creator makes a program, let’s call it Audashow, and puts the source code on SourceForge, Freshmeat, whatever. Then each distro downloads the code, packages it and makes it available to its users in its repositories. You see, there’s no central repository; the packages are compiled by each distro. The difference is huge, since no commercial software vendor/creator will put source code in the hands of the distros. That won’t happen. So the problem is: to be a commercial hit, GNU/Linux will have to be a different thing, something else than the community work, love and passion it always has been.
    That, or the commercial software vendors will team up with the distros, to distribute their commercial packages. But I don’t see that happening easily.
    Well, those were my 2 cents.

Leave a Reply