Linus Mellows In His Old Age…

“And because I’m dragging it out for another week, I’m going to be *very* bitter if anybody sends me pull requests this late in the game that aren’t for major issues. If you send me small irrelevant stuff that doesn’t fix major issues (oopses, security, things like that), I’m going to curse at you and ignore your pull request. So don’t do it.

The only things I want to see are fixes that people care deeply about. If it’s not critical, or you don’t have an actual problem report from an actual user, just put it in the queue under the christmas tree, and let it go for 3.8.

(Ok, while writing this I got another pull request that made me go "We don’t really need this". I’ll pull that, because technically it came in before I’d given people this warning, but …)
Linus”

See LKML: Linus Torvalds: Linux 3.7-rc8.

So Linux 3.7 is almost ready and I just built 3.6.9…

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

18 Responses to Linus Mellows In His Old Age…

  1. kozmcrae wrote of that other OS going FLOSS, “I would wait until they clean out all the dead weight, which would take some time.”

    Under no circumstance would I give M$ any more business as long as the same evil people run it. No apology or change in business strategy would encourage me to give them a second chance. They were not born evil. They chose to be evil and did irreparable harm. Beyond that, the tech is artificially bloated and complicated, making it entirely unsuitable for use in IT. No matter how beautiful the code could be made, a platform designed for the ease of malware writers is not software we should use. I am appalled that some distros strive to “be like M$”. That’s crazy. M$ becoming like Debian would also be insane.

  2. kozmcrae says:

    I think a far more interesting question would be: Would you use Windows if it was open source?

    Not right away. I would wait until they clean out all the dead weight, which would take some time. Then I would try it. Some things just have to die. M$ is one.

  3. eug wrote, “Would you use Linux if it went closed source?”

    Distros as we know them would disappear without the source code. I could not use the latest Linux kernel without the source code. It’s a silly question. If kernel.org started distributing only binaries, Linux might not boot on some hardware and the kernel would be unnecessarily bulky.

  4. eug says:

    The most important hypothetical question you’ll see all day: Would you use Linux if it went closed source?

    http://www.networkworld.com/community/node/81950

  5. d. says:

    Well that’s all good for someone who has to maintain a system or small network in a workplace or school or whatever.

    But for a home user, the use cases are different. For example, I do graphics, and the latest 2.8 version of GIMP isn’t in the repos of Ubuntu 12.04. I don’t want to change to a non-LTS version just to get it from a repository, because I can just as easily compile it from source, then package and install it with checkinstall.

    Maybe if I were using Debian, I could ask the people who maintain the repositories to package the software that I want. Maybe they would even do it. But why would I go through all that trouble when it’s much simpler for me to install it myself?

  6. d. wrote, “If I can just add stuff to the repos myself, then what’s the difference between getting stuff outside the repos and downloading it from the repos (other than, you know, that other people get to use the repos after me). And if I can’t add stuff to the repos myself, then from my perspective, the repos are a top-down model.”

    Debian developers follow a code which ensures that software licences are respected and that the package will work with a Debian GNU/Linux system. That’s worth a lot.

    It is easy to find a FLOSS software package outside the repository that won’t work with a current Debian GNU/Linux system because of different versions of libraries etc. The Debian developers do that zero-level testing, package-compatibility work and package building, which would be 90% of the job if users did it themselves. Rather than looking at it as top-down, think of it as bottom-up, with the work of one Debian developer allowing the work of dozens of software projects to be smoothly used by millions. It’s a very efficient system. The user who accepts this minor restriction has a great deal less to worry about in maintaining systems. IMHO, the work of administering a bunch of PCs is cut down about 80% using Debian’s APT packager. Less malware, slowing down and re-re-rebooting is the icing on the cake.

  7. d. says:

    “I have never heard anyone describe Debian as “monolithic”. They are organized chaos from around the globe. That’s why the repository is so huge. People with an interest in this or that add appropriate packages. ”

    Yes but that’s not the point. If I can just add stuff to the repos myself, then what’s the difference between getting stuff outside the repos and downloading it from the repos (other than, you know, that other people get to use the repos after me). And if I can’t add stuff to the repos myself, then from my perspective, the repos are a top-down model.

    I guess I’m not explaining my stance very well here… what I mean is, if there’s some software that I want to use and I can’t find it in the repositories of my distro, I think there should be a way for me to easily install it anyway. This is mostly already true with third-party PPA repositories on Ubuntu: you just add the PPA to the sources and you can use Synaptic or Software Center to install it. Or you can download a .deb package from a website and install it locally.

    It only becomes more troublesome if the software isn’t packaged or is only in .rpm or something. Even then it’s mostly not a problem, at least for me.

  8. Bill wrote, “An XSS JavaScript attack could download a small trojan to the home directory and add a command to ~/.bashrc to launch it, all without ever needing an administrator password or root privileges.”

    That’s highly dependent on the browser; Chrome would not do that, for instance. If it were that easy, there would be no need for all those “buffer-overflow” and “off-by-one” attacks to get anything to execute. Also, GNU/Linux requires the file to be made executable, and I have never seen a browser in GNU/Linux save a download as executable. If a chmod were inserted in ~/.bashrc this could work, but an ordinary user could easily detect the problem and fix it. Some use different accounts for accessing the web to prevent such problems from affecting locally created documents. Some browse the web from a live CD…

    I have surfed a lot with GNU/Linux in the last 12 years and never encountered any such attack. If it should ever be a problem, I know several preventative measures, such as checking files for modification, working in a virtual machine, or keeping executables on a read-only file-system.

  9. Bill says:

    I just switched over to GNU/Linux, from Windows, this year. I must say, I’ve been missing out! Such a great operating system, and no registry system! I honestly feel like I’m in OS heaven after using Windows for so long.

    As for defragmenting: I read that the only way ext3 or ext4 will ever become heavily fragmented is if a hard disk becomes nearly full. Once nearly full, files start to fragment because the filesystem can’t place them where it wants to.

    Viruses are also a possibility. They’re possible on any device or operating system. Less so on GNU/Linux, but still fairly easy to catch if you throw caution to the wind. I use Firefox with the NoScript add-on and disable Java plug-ins. An XSS JavaScript attack could download a small trojan to the home directory and add a command to ~/.bashrc to launch it, all without ever needing an administrator password or root privileges.

  10. d. wrote, “I just don’t like the idea that there’s one monolithic entity that controls what software should or shouldn’t be installed.”

    I have never heard anyone describe Debian as “monolithic”. They are organized chaos from around the globe. That’s why the repository is so huge. People with an interest in this or that add appropriate packages. To the extent that Debian developers are a random sample of humanity, they collectively meet mankind’s needs.

  11. oiaohm says:

    Defragging does at times become required on Linux. Ext4 added online defragmentation, and you do have the e4defrag tool.

    Minor differences in how writes are handled on Linux versus Windows make a big difference to the rate of fragmentation. Linux historically fragments more slowly; in general usage, so slowly that it’s not normally noticeable.

    The shake defragmenter exploits the trick Linux filesystems use to cure fragmentation: you open a file and tell the filesystem to save it; the filesystem sees the file is fragmented, finds a space big enough to write it unfragmented, and writes it there. Bingo, magic disappearing fragmentation.

    Windows Vista onwards supports firing up the defrag program in the background when there is nothing to do. That is not the same as the Linux approach of resisting fragmentation in the first place.

    With the online defragmenters for ext4 (e4defrag) and XFS, you do have the option of firing up a defrag tool every so often to tidy things up. Because the filesystem spots chances to rewrite files unfragmented on its own, it’s not much of a requirement.

    Linux and other operating systems just operate in a more fragmentation-resistant way. That does not mean there is never a need for a defrag tool; it’s just that, for most users, by the time they need one they probably need a new hard drive anyhow, so the problem is fixed by the transfer.

    That was the goal of Linux’s fragmentation resistance: if fragmentation only becomes an issue after 5 to 7 years, you don’t normally need a defrag tool, because by then you are putting in a new hard drive and copying to the new drive fixes it.

  12. d. says:

    “There’s very little a newbie might want that isn’t in there. ”

    Hm, maybe. But I just don’t like the idea that there’s one monolithic entity that controls what software should or shouldn’t be installed. It feels too much like the Apple or Win8 model, where big daddy knows best. They know what you need; if it isn’t there, then you don’t need it…

    And a thing to remember is, a newbie is only a newbie until he’s not a newbie anymore. I think people should be encouraged to explore and learn, not limited to a narrow model of computing. To me that’s the whole spirit of FOSS: that you’re free to look under the hood, tinker around, learn how things work, and maybe contribute to projects yourself.

    Maybe it’s just me, but I found it very easy to learn how to compile software from source. It doesn’t take a genius to run configure, make and make install, which is sufficient in 90% of cases (OK, after installing some dependencies, but the configure script usually tells you what you’ll need, or you can read that in the instructions). Then you can run checkinstall, which packages the software you just compiled and installs the package through the actual package manager, so you can easily get rid of it afterwards if necessary.

    As for dependency hell, I have to say it seems like an overplayed argument to me. I install stuff from here and there and I’ve never encountered it. Maybe I’ve just been lucky, but it seems to apply only in marginal situations.

    And even if it is a problem, I think the solution should not be to “just use the repositories and nothing else”. That just seems too limiting to me. There should be a way for the user to install third-party software without worrying about dependencies. Perhaps a package manager should automatically detect the conflict and install the conflicting libraries as static packages that are only used by the particular program. Or something like that.

  13. d. wrote, ” I don’t think it’s reasonable to require users to depend only on the software their distribution has deemed suitable.”

    For newbies, it definitely is proper to stick within a repository. Otherwise, dependency hell can occur and one is much more likely to pick up malware.

    This is one reason I recommend Debian GNU/Linux: they have such a huge repository. There’s very little a newbie might want that isn’t in there. The only packages I have downloaded outside their repository in years of use were upstream versions of some buggy packages, the Google Chrome browser, LibreOffice and the Linux kernel, so that I had the latest refinements sooner.

    Certainly, if a newbie is a specialist of some kind, all bets are off, but for the usual person wanting to play a few games, enjoy multimedia and browse the web, there just isn’t a good reason to leave the repository unless switching distros.

  14. d. says:

    “One major conditioned response to overcome is downloading outside of the repository. ”

    Actually, I disagree. I download stuff from wherever and compile my own software if I can’t find it in the repos. Haven’t had any problems so far. I don’t think it’s reasonable to require users to depend only on the software their distribution has deemed suitable.

    About defragging: I was just on a certain forum where someone was asking about defragging programs, and someone suggested switching to Linux or Mac OS. Immediately some people jumped up and were all like “why wouldn’t you need to defrag on those systems”… like they’ve been so conditioned to having to do it on Windows that they think it’s an inherent part of any operating system.

  15. kozmcrae says:

    Defragging. I forgot about that one.

    d. wrote:

    “It takes time to break the conditioning.”

    One major conditioned response to overcome is downloading outside of the repository. That’s a big one. People just want to download willy-nilly all over the Internet. If they can get it to work at all, it will eventually turn their Linux desktop into a slag heap. A good part of using Linux is about discipline.

    So far, I think the discipline gets downloaded along with the software, since I’ve yet to hear of any accounts of large-scale borking.

  16. d. says:

    It’s like, people have been conditioned for so long to simply accept band-aid solutions like “defragging” or “anti-virus” as something inevitable, like paying taxes. Things you just have to do if you want to use a computer.

    Then if you tell them you don’t have to do those things on Linux, they think it must be a trick, you must not know what you’re talking about. Of course you need anti-virus! You don’t want to get viruses from the Internet, do you?

    It takes time to break the conditioning. The funny thing is, some anti-virus companies are now releasing products for Linux. They’re trying to keep themselves relevant by convincing people their products are necessary. After all, if people move away from Windows and realize they don’t need anti-virus anymore… then where will they get their revenue?

  17. dougman says:

    Two things I see with new people switching to Linux are constant questions about “defragging” and “anti-virus” solutions.

    I say, “Why do you need to buy additional software to make your PURCHASED software function correctly?”

    Linux does not suffer the ills of Windows.

  18. kozmcrae says:

    The world got extremely lucky with Linux. Its development under a benevolent dictatorship is the most efficient possible. It’s giving us the best OS we could hope for.

    Unfortunately, there are those who have been brainwashed to believe that constant care, daily virus scans, reboots, re-reboots, essential auxiliary software and delicate registry edits are a normal part of operating-system maintenance. Those poor fools have been indoctrinated by Microsoft. Will they ever learn? Only if they ever wish to be happy.
