Ubuntu GNU/Linux Becoming Like That Other OS

“People who “hate” Unity, I cannot take seriously. It is very strange that there are so many “haters” posting on forums nowadays. It seems to me that these people are shills or trolls, or they have personal problems. If a distro does not work for me for whatever reason, I try to solve it or I move to something else. I see no reason to develop hate. If you react so emotionally to setbacks instead of solving them, that is your personal problem.
And by the way, any system can get slow; often a clean install does wonders. Then make a backup; that is what I do with all systems, Windows included. Keeps me happy :-)”
The comment above appeared in a discussion of the events of GNU/Linux in 2014. My response:

It’s the same old thing. An operating system gains reasonable popularity and it becomes godlike. It must not be criticized, or the critic is declared mentally incompetent. That’s just wrong. If users become dependent on an OS and the developers of the OS go off on some tangent the users don’t like, that’s the developers’ problem, not the users’. I long ago dropped Ubuntu because it didn’t work for me, breaking configurations with updates. I once had all my terminal servers drop out because the display manager would not run. My configurations were ignored. I went to Debian, where users get much more respect. The policy that one package should not mess with the configuration of another protects users’ investments in their systems. Ubuntu thought it was fine that ~100 seats should be disabled when I installed a new set of icons, for pity’s sake. For that, they overruled /etc/gdm.conf…

“…implemented an option to allow app menus to be placed back in app window frames, Ubuntu developers got to work adding other long-delayed features to Unity. Among them a new lock-screen and an option to minimise apps to the Launcher by clicking on their icons.”

I notice Unity is becoming more like the system it displaced years ago… I wonder if it’s the rebellion by users that’s finally awakened developers to the reality that users matter.

See 12 Months, 12 Highlights: This was Ubuntu in 2014.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

18 Responses to Ubuntu GNU/Linux Becoming Like That Other OS

  1. DrLoser says:

You can configure your desktop x86 machine over and over again with what appear to be identical options, yet the client will not perform on the x86 thin client. So you have to go to the effort of installing a full build environment on the thin client, or have a PC with the same CPU.

    And since Robert isn’t going anywhere near Atoms, and since I would assume he has at least one “baby brother” to Beast with essentially the same architecture, Fifi:

    This is completely irrelevant, isn’t it?

    Quit the Gish Galloping. The fact is, Robert could quite easily spin kernels up on a secondary machine if he so chose.

  2. oiaohm says:

DrZealot
“Completely irrelevant, Fifi. One of the nice things about the gcc suite is that you can target a build on a completely different machine. Indeed, on a completely different architecture.”
Again you are commenting on something you don’t understand.

That is completely wrong about gcc. Yes, you can target a build of generic code at a different machine or architecture; gcc’s native mode, however, means something completely nasty.

You find this out when building a replacement rdesktop for thin clients. You can configure your desktop x86 machine over and over again with what appear to be identical options, yet the client will not perform on the x86 thin client. So you have to go to the effort of installing a full build environment on the thin client, or have a PC with the same CPU.

The gcc optimizer includes extensions from Intel: where there are two or more ways to convert code to asm, it can benchmark them on the current CPU and use the faster one for the CPU the compiler is sitting on. Yes, there is a reason why gcc at times takes downright forever to build something natively, compared with setting a generic mode for the same CPU you have in your computer.

Gcc supports both generic targets and targeting the exact physical CPU it is running on, including optimizing around the exact defects in that CPU. The Intel compiler supports the same option.

In most cases the difference between exact-matched and generic code is around 1 percent. But some of the Intel Atoms have instructions that take about eight times longer than normal, while the CPU contains other instructions that perform the same work and are not slow.

DrZealot, why are Intel chips such a problem? Microcode. Intel microcode is designed to work around failed sections of silicon by running a diagnostic when the CPU starts up and blacklisting bad areas. So you may have what appear to be 100 absolutely identical Intel chips, yet one behaves majorly differently because it is working around some form of silicon defect; that difference can be 10 percent slower. Intel CPUs do not put out any diagnostic message to say the microcode has found a defect and is running abnormally; the only clue that a CPU is doing this is that it does not benchmark correctly.

Yes, known-defective-but-working CPUs are cheaper as well. Guess what ends up in a lot of thin clients.

The only way to be sure you are optimizing for an Intel chip is to run the compiler on it.

You can think of Intel CPUs like driving through a city. You can plan out the best route generically, but the road conditions on the day you drive may differ from what you expected, so what you think is the best route turns out to be the slowest. Optimizing for the exact CPU is like getting the current road conditions before planning the route.

  3. DrLoser says:

If you are building kernels, you are normally building for performance.

    Completely irrelevant, Fifi. One of the nice things about the gcc suite is that you can target a build on a completely different machine. Indeed, on a completely different architecture.

    Performance doesn’t enter into it.

    If, for some unfathomable hobbyist reason, I wanted to build every single point release of the Linux kernel out there — or, slightly more usefully, the major ones with a Cartesian Join of say four or five different “key” settings, including things like SELINUX, APPARMOR, IOSCHED and XEN — then I would do so on a secondary machine; not on my main server.

    I haven’t tried it, but you could probably do this on an array of RasPi machines. That would actually be quite a cool thing to do. (You’d have to lash up a SAN-style disk matrix on the cheap, though … which would also be a cool thing to do.)

4. oiaohm wrote, “1. Does your CPU suck with generic code? 2. How important is performance to you?”

In my case it’s neither. I wanted some features that haven’t made it into my distro, Debian Jessie, so far. systemd is eroding that advantage, since its dependency chain keeps me from using virt-manager, but virtualbricks works just fine. There isn’t much difference between getting long-term support for a kernel from your distro and getting it from one of the Linux developers, except the number of testers. So far Greg Kroah-Hartman hasn’t given me any pain. I’m using 3.17.7 for now, but I will take the next kernel he picks for long-term support as my kernel. It will have all the features I want, and I can custom-build it for my Beast or whatever machine eventually replaces Beast. Greg releases an update about every two weeks, without any particular schedule, just when needed. It’s no problem at all to use Beast’s power occasionally.

  5. ram says:

oiaohm said: “It’s like some of the Intel Atoms: they sucked at running Windows and generic Linux, yet after rebuilding the kernel to match, they started moving quite reasonably.”

Yes, especially if one used the Intel compiler, creating an “Intel/Linux”. Intel Atoms also cluster well, so one can achieve almost any level of computing power one desires. Their efficiency in terms of power consumption is also quite good. One can argue about which architectures and chips are the most power-efficient, but the Intel Atom is certainly among the top three.

  6. oiaohm says:

    DrLoser
    In fact, why not just spin up new kernel builds on a different machine?

    You could break the record for Linux Server uptime with Beast if you did that!
If you are building kernels, you are normally building for performance.

Unfortunately, gcc is partly evil, and so are Intel chips.
    http://en.chys.info/2010/04/what-exactly-marchnative-means/

The difference between a Linux kernel built for a generic x86 CPU type and one matched to the exact x86 CPU is an increase in performance of 1 to 10 percent, depending on the chip and what you are doing. That is the same kernel source with exactly the same options, other than -march changed to match the exact CPU.

    This is why source based distributions exist.

There is a reason why someone may want to build every kernel they use from source. If you are on a CPU that gains only 1 percent from exact matching, you are most likely wasting your time; but if your CPU happens to be one of those that hate generic x86 code and you are getting 10 percent performance gains, then hell yes, it is worth it.

So the question of whether you should build your kernel from source is two-fold: 1. Does your CPU suck with generic code? 2. How important is performance to you?

It’s like some of the Intel Atoms: they sucked at running Windows and generic Linux, yet after rebuilding the kernel to match, they started moving quite reasonably.
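The two builds being compared here differ in a single knob, the kernel’s x86 “Processor family” choice. As a sketch, using symbol names from the mainline x86 Kconfig (CONFIG_GENERIC_CPU and CONFIG_MATOM), the two .config fragments would look like:

```
# Generic build (runs on any x86-64):
CONFIG_GENERIC_CPU=y
# CONFIG_MATOM is not set

# Atom-matched build (the case described above):
# CONFIG_GENERIC_CPU is not set
CONFIG_MATOM=y
```

Everything else in the two configs stays identical; the processor-family symbol is what feeds the matching -march flag into the kernel build.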

  7. DrLoser says:

    Eh… I hate Unity too. However, the outcome was Cinnamon which is just awesome.

    Have you tried the same argument with your dates, Dougie?

    “I used to hate you, but now you’re just awesome.”

    Well, it’s worth a try, I suppose. Far more likely to get you to third base than admitting that you sell snake oil for a living.

  8. DrLoser says:

    By the way, I wouldn’t call it “constant” rebooting, but rather, “variable” rebooting, since this is all about loading a new kernel…

A desideratum of great consequence to the billions of unwashed out here who frankly don’t give a damn, luvr.

    I’d call it pissing into the wind, but then again I’ve never bothered to compare the compulsion to rebuild every single Linux kernel under the sun against the possibility of either a urinary tract infection, or the wind suddenly turning against me.

  9. DrLoser says:

    In fact, why not just spin up new kernel builds on a different machine?

    You could break the record for Linux Server uptime with Beast if you did that!

  10. luvr says:

    By the way, I wouldn’t call it “constant” rebooting, but rather, “variable” rebooting, since this is all about loading a new kernel… 🙂

  11. DrLoser says:

    Hmmm. Thinking about that.

    Other than the sheer fun of it, what do you get out of constantly building point releases of the Linux kernel?

    It seems like a very odd hobby to me. Between point releases, do you research what the possible benefits are, and whether they match your needs for the next ten days?

  12. DrLoser says:

    That’s incorrect. I have it on several systems and as long as it stays out of my way, I have no issues with systemd.

Apart from the many and various issues that you have spent the last two or three months complaining about, Robert. It’s invasive, it’s bloated, it interferes with my control of init scripts, and it makes GNU/Linux/Debian look more and more like Windows.

    Did I miss anything?

Anyhow, it’s nice to know that you’ve shed your prejudices and are prepared to accept systemd in all its glory. Other than on the Beast, of course.

It’s just that I reboot Beast frequently because I build up-to-date kernels, and I notice that the system boots to a usable desktop in double the time sysvinit took because it starts all services before starting the display manager; that was Debian’s choice, not systemd’s.

    Ignoring the possibility of re-arranging the dependencies, which we have explored earlier and which merits further examination, I fail to see how this is a significant problem.

    Let’s say you spin a new kernel up every ten days. Wait, let’s say you spin a new kernel up every five days. Wait, let’s say you spin a new kernel up every day.

    I forget the precise lag that you quoted before you got an interactive terminal up, but I believe it was in the sub-120 seconds range.

    Two minutes out of your day, for the privilege of spinning up a new kernel every day, is hardly an imposition, Robert. A cup of coffee, a visit to the bathroom (we are both past 50 years of age), a bit of love and tenderness to the dog, cat, goldfish, whatever pet you have … there are many things you can do with two minutes.

    Worrying about this sort of minor irritant will just drive you mad, to no good purpose.

  13. Deaf Spy wrote, “Constant re-re-reboots, anyone?”

    These are reboots to load a new kernel, not because the software has failed in some way. I could even eliminate the reboot but choose to do it the old way just out of habit.
    uptime
    14:56:44 up 10 days, 3:17, 6 users, load average: 0.27, 0.34, 0.26

I wouldn’t call that re-re-rebooting. I even waited quite a while to do that reboot after rebuilding the kernel, because The Little Woman also runs stuff on Beast. I rebuilt that kernel on or about 2014-Dec-16. I enjoy the low-pressure environment that is GNU/Linux. I don’t have to reboot Beast every month because some twit in Redmond, WA decided to fix something he messed up a decade ago. Generally, I can reboot when I want and when it’s convenient.

  14. Deaf Spy says:

It’s just that I reboot Beast frequently because I build up-to-date kernels
    Constant re-re-reboots, anyone?

  15. DrLoser wrote, “systemd, which we can both agree you don’t get on with particularly well”

That’s incorrect. I have it on several systems and, as long as it stays out of my way, I have no issues with systemd. It’s just that I reboot Beast frequently because I build up-to-date kernels, and I notice that the system boots to a usable desktop in double the time sysvinit took because it starts all services before starting the display manager; that was Debian’s choice, not systemd’s. I’ve had very few problems with Debian over the years, compared with weekly problems with that other OS. There’s no comparison in the reliability of the service Debian provides.

  16. DrLoser says:

    It’s the same old thing. An operating system gains reasonable popularity and it becomes godlike. It must not be criticized or the critic is declared mentally incompetent. That’s just wrong.

    Interesting, Robert. Before systemd, which we can both agree you don’t get on with particularly well, when was the last time that you criticized Debian?

    Still, cult members presumably enjoy themselves in some way or another, I suppose.

  17. Bob Parker says:

I detest any interface that generates motion when the mouse pointer accidentally lands in an area. That includes Unity, Mac OS X, and any web page with a menu bar near the top that is not delayed by half a second or so. As a desktop-only user I find Xubuntu meets my needs just fine. The Ubuntu Software Center is total garbage, but it is absolutely not needed when apt-get does everything required.

  18. dougman says:

    Eh… I hate Unity too. However, the outcome was Cinnamon which is just awesome.
