LibreOffice 3.6.2 Released

“Berlin, October 4, 2012 – The Document Foundation (TDF) announces LibreOffice 3.6.2, for Windows, MacOS and Linux, solving bugs and regressions and further improving the stability of the program for corporate deployments. The best free office suite ever is quickly becoming the de facto standard for migrations to free office suites, thanks to the quickly growing feature set and the improved interoperability with proprietary software.

The growing number of LibreOffice adoptions by private and public enterprises is a demonstration of the improvements brought to the legacy code by TDF, thanks to over 500 developers who are focusing on stability and quality (in addition to new exciting features).

The last public administration to migrate has been the city of Limerick, Ireland’s third largest city, where LibreOffice is now used on all 450 desktops in use at the city’s six main locations including the three public libraries, the fire department, the municipal museum and the City Gallery of Art.”

See The Document Foundation announces LibreOffice 3.6.2 « The Document Foundation Blog.

I noticed Gartner is proclaiming FLOSS office suites obsolete while non-FLOSS suites are still good for 2-5 years for higher education… with the move to the cloud. Yet they claim they are not anti-FLOSS: “We consider OSS a business strategy like anything else. In fact, we’ve been particularly vocal about how we feel that cloud is driving OSS adoption across a broad spectrum of solutions, and advocates that an IT organization’s adoption of cloud is a great time to consider replacing proprietary tech with OSS.” Good for them… The rest of us will use what works simply and easily. The cloud is great for global collaboration but scarcely useful for a smallish organization just writing memos and posters. Last I checked, e-mail still works, cloud or not.

Eventually, Gartner will get it right. They claim not to take paid/commissioned research projects. What other FLOSS do they consider obsolete? Mail.

About Robert Pogson

I am a retired teacher in Canada. I taught the subject areas in which I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

76 Responses to LibreOffice 3.6.2 Released

  1. dougman says:

    You read my mind; that was the “other side” of my thinking: the obvious Windows maladies that supposedly do not exist and are really the fault of non-technical users.

    Oh, it’s never M$’s fault… NOPE!

  2. dougman wrote, “users will be getting updates every few days.”

    Won’t users love that? Reboots every Tuesday. Expect more mayhem in “the registry” and more unbootable PCs running that other OS. It’s all good. Whatever motivates users to switch to FLOSS is good.

  3. dougman says:

    Looks like M$ is again catching up to Linux: now they are discussing ‘constant updates’. Sounds a lot like Debian.

    http://www.computerworld.com/s/article/9232280/Microsoft_move_hints_at_the_death_of_Windows_service_packs

    So users will be getting updates every few days.

  4. oiaohm says:

    That Exploit Guy, I hope you are not studying at that university.
    –http://pages.cs.wisc.edu/~solomon/cs537/html/memory.html#core_compaction–
    This is historic crud. Real-world, current-day OS kernels operating in protected mode do it very differently.

    Current-day OS kernels have virtual address tables at their disposal; even kernel space uses virtual address tables. This spares the memory manager doing compaction from having to track down pointers at random locations. Applications and all kernel drivers are contained by the virtual memory tables, so update the virtual-memory-table pointers and you have updated all the pointers required.
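
    A minimal sketch of that point (made-up page-table values, not any real kernel’s code): when a compactor moves a physical frame, only the page-table entry changes, and the virtual address every application pointer holds stays valid.

    # Toy model: virtual page -> physical frame. Pointers hold VIRTUAL addresses.
    PAGE = 4096
    page_table = {0: 7, 1: 3, 2: 9}

    def translate(vaddr):
        # A hardware page-table walk, reduced to one dict lookup.
        return page_table[vaddr // PAGE] * PAGE + vaddr % PAGE

    ptr = 1 * PAGE + 42              # a "pointer" some application holds
    before = translate(ptr)
    page_table[1] = 4                # compaction moved frame 3's data into frame 4
    after = translate(ptr)
    print(ptr, before, after)        # ptr itself never had to be found or patched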

    That Exploit Guy
    –but instead of doing all that in the middle of the night with a defragmentation tool, you do it every time you write something to a disk. Also, as pointed out in one of the comments, the method does not solve the problem where a group of files may need to be kept close to each other, e.g. components for a video project.–

    Here is a problem: this shows TEG’s lack of experience and knowledge of real-world conditions. A defragmentation tool run in the middle of the night will not ensure that the files needed for a video project stay close to each other, or in a rapidly readable state, while you do operations through the day either.

    What ext file systems gain from doing it at write time: first, the file is normally already cached, so it is already in RAM. Since it is not fragmented in memory, you can normally do big block DMA operations to dump it to disc, provided you can find another location on disc to drop it unfragmented.

    Writing fragments all over the place is slow and consumes more IO operations. Yes, it is faster to defrag a file that is fragmented than to write it fragmented: fewer IO operations to write a non-fragmented file. Lack of write-time defragging is one of the reasons why NTFS is slower as well.

    It is also fewer IO operations to read a non-fragmented file. The inodes in ext contain the new file size and the new start point on disc anyhow, so those write operations are the same, fragmented or not.
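
    For anyone who wants to check this on their own machine, here is a small helper (assuming a Linux box with the e2fsprogs filefrag tool installed) that counts how many extents a file occupies; one extent means the file sits in one continuous run on disc.

    import subprocess, sys

    def extent_count(path):
        # filefrag prints e.g. "/var/log/syslog: 3 extents found"
        out = subprocess.run(["filefrag", path], capture_output=True, text=True, check=True)
        return int(out.stdout.rsplit(":", 1)[1].split()[0])

    if __name__ == "__main__":
        for f in sys.argv[1:]:
            print(f, extent_count(f))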

    Even on an SSD, a file that is not fragmented along its length requires fewer IO operations to write to or read from disc.

    Files spread all over the disc in continuous blocks, so that each can be read in the minimum number of IO operations per file, cause zero performance impact on an SSD. But compare a file you can read in 1 IO operation with a same-sized file that takes 5 IO operations: of course the one that takes 5 is slower.

    On spinning media the problem is harder, since it also comes down to where the other heads are. If you have 4 files written so that the tracks they are on line up with the drive’s 4 heads, the drive does not have to move the arm to read them, and they will read off faster. The problem is that the sector locations the OS sees and the real physical sector locations inside the drive no longer have any relationship to each other, so optimising placement for the disc is now impossible. All you can do is optimise the number of IO operations per file. Before you say you can run a program and detect where is fast, TEG, think about the spare sectors that replace dead sectors: even a read optimised to minimise IO operations to the drive might still be slow, because the drive firmware has remapped dead sectors and the head has to move to reach the replacements.

    Placement optimisation is dead on most spinning media. IO optimisation is all you can do on that type of storage media.

    That Exploit Guy is a college student who basically knows jack about real-world conditions. Systems can only perform so many IO operations; each extra one hinders performance.

    –where the Flash memory blocks are abstracted away from the operating system itself.–

    Watch some of the videos from Linux conferences by the people who make those abstraction systems. You will find that the abstraction layer in fact attempts to detect the filesystem on disc so it can tell which sections of the disc are really in use and which it can clean up. The issue is that Windows does not support the SSD TRIM command for every way flash can be connected, so a flash drive on USB resorts to detecting NTFS and FAT (so you had better write them the way Microsoft does, or the flash goes nuts). Linux filesystems are known to send TRIM commands, so the firmware doesn’t embed drivers for reading those.

    So yes, there is a requirement for the file system to work cooperatively with the SSD. Block-alignment support is something ext and the older Linux filesystems don’t have yet; neither does Microsoft.

  5. oiaohm says:

    That Exploit Guy, the problem is that the registry’s b-tree is balanced as if the deleted entries were still valid, because it is a dirty b-tree.

    –B-tree Rule 5: For any nonleaf node: (a) An element at index i is greater than all the elements in subtree number i of the node, and (b) An element at index i is less than all the elements in subtree number i + 1 of the node.

    …

    B-tree Rule 6: Every leaf in a B-tree has the same depth.

    You don’t need to break either rule to head for disaster if you are not properly removing deleted entries. My example obeys those rules on insert; it just does nothing on delete beyond marking entries deleted.

    Build the example I told you to build and watch what happens. Note that “marked as deleted” does not mean inserts magically clean entries up. It means the tree is balanced as if deleted entries were still worthwhile entries.

    A Windows Registry hive is a form of dirty b-tree.

    A dirty b-tree is balanced as if entries marked deleted, which inserts have not yet replaced, were valid, possibly active entries. So the tree is larger than it needs to be for now, but it might be exactly the right size for future usage. The performance gain from not cleaning the tree is good until the b-tree gets too dirty. How does it get too dirty? When inserts are not replacing the items that have been deleted, the tree starts folding out, creating more and more levels while there are tonnes of deleted items in it.

    Where in the rules does it say that a b-tree has to be clean of dead/deleted/useless entries? That Exploit Guy will find, if he reads the book he is quoting, that it does not say that.

    Most coding books don’t cover the dirty b-tree and its advantages.

    That Exploit Guy
    –you cannot predict the future–
    This is exactly why clean and dirty b-trees exist.

    Keeping a b-tree perfectly clean of dead leaves might mean you have to delete and recreate structures more often than you really should. Say a leaf ends up containing no current entries, so you remove it. The problem is: what if the next insert wants that leaf straight back? You have just wasted an O(log n)-plus de-allocation doing the delete, only to have to reverse it, so the insert costs more time.

    With a dirty b-tree, a leaf whose entries are all deleted will remain around to be reused by an insert, at least for a time.

    This is what happens when you have someone college-trained, with no real-world experience, optimising b-trees. They are not taught about dirty b-trees at college, so they don’t believe they exist. The college-trained incompetent also always believes b-trees are clean, so he mishandles a dirty b-tree by not understanding the importance of compaction operations, fails to read and understand what a b-tree really is, and doesn’t know the speed advantages of using dirty b-trees. Used correctly, dirty b-trees consume less CPU time over their work cycles than clean b-trees. Used incorrectly, a dirty b-tree will end up costing you: basically, being a huge idiot at large.

    Some databases use a form of dirty b-tree with garbage collection. Something marked deleted is left for a while for an insert to replace it; if it is not replaced, then the removal is processed. Basically: why do an O(log n) operation when you don’t have to?
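
    A toy sketch of that mark-then-collect idea (one sorted node with tombstones; illustrative only, not Microsoft’s hive code): deletes are O(1) marks, the node carries its dead weight until a compaction pass, and an insert can reclaim a tombstoned slot without restructuring anything.

    import bisect

    class DirtyNode:
        def __init__(self):
            self.keys = []          # sorted keys, live and dead alike
            self.dead = set()       # tombstones: deleted but not yet removed

        def insert(self, key):
            if key in self.dead:
                self.dead.discard(key)          # reuse the slot: no restructuring
            else:
                bisect.insort(self.keys, key)   # normal sorted insert

        def delete(self, key):
            self.dead.add(key)      # O(1): just mark it, keep the structure

        def search(self, key):
            i = bisect.bisect_left(self.keys, key)
            return i < len(self.keys) and self.keys[i] == key and key not in self.dead

        def compact(self):
            # The periodic cleanup a dirty tree needs sooner or later.
            self.keys = [k for k in self.keys if k not in self.dead]
            self.dead.clear()

    node = DirtyNode()
    for k in range(1000):
        node.insert(k)
    for k in range(0, 1000, 2):
        node.delete(k)
    print(len(node.keys), node.search(3), node.search(4))   # 1000 True False
    node.compact()
    print(len(node.keys))                                   # 500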

    The delete process of a b-tree can get very complex with optimised b-trees. It is all about handling what happens in the future using the least CPU time, and most importantly not doing operations, within a short span of time, that you will have to reverse, where avoidable.

    Microsoft used to provide a tool called RegClean that used to do the compaction. The problem is Microsoft added detection of useless keys to it, that stuffed it up, and they got rid of the tool.

    Oldman, run a hive backup on your R&D machine; I have a sneaking feeling the hive will shrink and result in better performance for you.

  6. dougman says:

    Re: I suppose that’s possible if a CPU running GNU/Linux puts less draw on the PSU.

    Hmmmm, I never considered that. I’d have to use an oscilloscope to check that.

    Which leads me to wonder: how much more energy does Windows consume than Linux?

    I think it’s fairly significant, which would lead to a study, to be followed by a press release, citing that Linux is far more green and LEED-ready than M$ Windows.

  7. Chris Weig says:

    Chris Wiggies reminds me of Clarence the clown, both fixated with feces.

    Yes, of course, everyone who utters the word toilet is “fixated” with feces. LOL. Since you take such an unusual interest in this matter, it seems to me that you harbor secret dreams you’d like to act out. Don’t be shy, dougman. There are women whom you can pay for such services.

  8. JR says:

    @ That Exploit Guy

    Many thanks for the reply.

    As far as your comment…”Would you much rather show this to Robert Pogson?”

    I assumed he would see it and read it anyway!

  9. That Exploit Guy says:

    @JR

    linuxaria:

    ‘With Linux’s ext3/4 these are actually designed to prevent fragmentation, they’re given a size by ratio…’

    As I said, this is generally a pointless avoidance method, as there is simply no effective way to estimate how much a file will grow in size, i.e. you cannot predict the future.

    ‘… and each time the file grows beyond that space it’s moved somewhere else and given a larger “buffer zone”.’

    This is just compaction done in a bizarre way, and it comes with all the problems associated with compaction in general – I/O penalty for reallocation, pointer/metadata updates, etc. – but instead of doing all that in the middle of the night with a defragmentation tool, you do it every time you write something to a disk. Also, as pointed out in one of the comments, the method does not solve the problem where a group of files may need to be kept close to each other, e.g. components for a video project.

    ‘With a windows defrag they happen per accident and the size can be anything from one allocation unit (usually 4096 bytes) to many gigs. There’s no real “plan” behind it, and it doesn’t matter how many times the file’s grown or what its size is. It’s even possible to have absolutely no space between files, so you end up with a FAT like scenario.’

    1) Assuming facts not in evidence.
    2) See above.

    webupd8.org:

    Would you much rather show this to Robert Pogson?

    rtcmagazine.com:

    Write amplification has fundamentally nothing to do with fragmentation in the file system but with the wear-leveling algorithm [1], where the Flash memory blocks are abstracted away from the operating system itself. NTFS is strictly not designed to manage SSDs directly, and there is no reason to expect that mere, routine defragmentation is adequate to solve the problem in any way.

    [1] See here for the difference between dynamic wear-leveling algorithms and static wear-leveling algorithms and discussion on certain design issues.

  10. dougman wrote, “Regarding faulty capacitors, Linux will sometimes install on systems with bad caps whereas Windows won’t”.

    I suppose that’s possible if a CPU running GNU/Linux puts less draw on the PSU, but I would not count on it. I have exploded more capacitors than I should have, and none of them “came back”. An electrolytic capacitor often fails by gradually losing capacity, but too often they explode or short. Nothing much will fix that except a new motherboard or a soldering job.

    My best ever was an attempt to increase the accuracy/stability of a power supply for the magnet of a cyclotron. We increased the gain but went too far. The thing began to oscillate, and the charging and discharging of the 100KW PSU heated the capacitors rather quickly… My boss was close to them and I was half a second too slow to hit the panic button… Fortunately they were “aimed” upwards.

    Another good one happened in the middle of a lesson. I was leaning on an ATX PC when its PSU exploded… Instantly I changed the lesson to swapping PSUs and recording the smell for future diagnostic events. What are the odds? One in many millions, I suspect.

  11. JR says:

    @ Chris Weig

    Your comment refers ……..

    Linux will even install on a toilet that’s clogged. I’ve done this a few times to get the toilet to drain again.

    You seem pretty well on top of toilet technology; any idea what OS or embedded software runs this toilet?

    http://www.kohler.ca/pr/pressrelease.jsp?aid=1194487057489

  12. dougman says:

    Chris Wiggies reminds me of Clarence the clown, both fixated with feces.

  13. Chris Weig says:

    Regarding faulty capacitors, Linux will sometimes install on systems with bad caps whereas Windows won’t; I’ve done this a few times to get people back online.

    Linux will even install on a toilet that’s clogged. I’ve done this a few times to get the toilet to drain again.

  14. dougman says:

    Regarding faulty capacitors, Linux will sometimes install on systems with bad caps whereas Windows won’t; I’ve done this a few times to get people back online.

    So yes, one can do a BAM install with Linux. 🙂

    The drive you mentioned is relatively new, perhaps dated, but so what? It was certified for use with this laptop and runs like a champ.

    The laptop, a 2008 Inspiron 1525, was given to me last year by a user who was going to throw it out anyway; it had Vista on it, and the OS literally hosed the drive.

    Ordered new drive, 10-minute install, updated packages – another 5-10 minutes and it’s like brand new.

    The ex-owner was amazed at how fast it was now, and actually offered to buy it back, but I said no. 🙂

    See, the thing with Linux is, once you show people the difference, they are awestruck, to say the least. People have gotten less and less concerned about what OS they are using these days, and Android is evidence of that in itself, but they DO care when Windows doesn’t work. 🙁

  15. oldman says:

    “Unless of course you forgot the g before the b”

    Yep, it’s a typo, JR – it should be 900Gb. I was rounding the numbers.

  16. JR says:

    @ Oldman

    Your comment refers…..

    “And I misspoke – I actually only have 1800Gb useable on my system of which just under 900b is available.”

    Maths was never my strong suit, and unless you are taking the mickey, something does not add up.

    A format of a 2TB drive should give you about 1820GB, OK, 1800GB, and if you only have 900b available your OS should be complaining.

    Unless of course you forgot the g before the b

  17. That Exploit Guy says:

    @dougman

    ‘Busted capacitors can be resoldered, done it a few times. Dells were problematic in this regard, just make sure the polarity is right.’

    Backtracking already? I thought all you had to do was run Linux on it and then, BAM, problem solved!

    Also, no, do I look like I am bothered enough to try to save a motherboard that is both broken and out of warranty? If you want it, you should be able to find it in a tip somewhere.

    ‘Your AGP card sounds like bad video drivers; you could increase your case cooling or tack a small fan onto the GPU.’

    I pulled that out of a friend’s old computer, and unless you want me to mail it to you (though you’ll need to foot the postage bill), it’ll simply get thrown out along with my big ol’ box of warez, where the aforementioned DIMM and HDDs are stashed at the moment. I don’t even own a machine with an AGP slot any more.

    ‘Ahhh!… I figured it out: you’re a junk collector or a pawn-shop guy.’

    That really depends on who’s still using a 3-year-old piece of junk Seagate HDD and boasting about it here…

  18. oldman says:

    LOL, 2TB Boot Drive, how bloated is your Windows? Why do you need a 2000GB boot drive?

    I only have 10GB on my /boot partition and it’s at 58% filled, so that’s 6GB for the Linux OS. Windows 7 calls for 16 GB of available hard disk space (32-bit) or 20 GB (64-bit).

    With Linux I can do all the same, but at a third of the requirement in drive space. Makes you wonder what sort of bloated goodness M$ has stored under their hood.

    Well doug, you see it works something like this.

    550Gb of virtual machine images for various multi-system, mixed-OS prototypes that I run.

    160Gb of template VMs (Windows mostly; I build my Linux VMs on the fly using Red Hat Kickstart).

    120Gb for the 3 additional bootable copies of different versions of Windows and Windows Server that I am working with at this time.

    About 30GB in ISOs of various products and OS images.

    And about another 300Gb in personal stuff: scripts, databases, data, etc.

    And I misspoke – I actually only have 1800Gb useable on my system of which just under 900b is available.

    Does this answer your question of why I need a 2TB disk drive?

    Now, I realize that this is unusual.

  19. dougman wrote, “Occasionally, Windows might not read your user profile correctly—for example, if your antivirus software is scanning your computer while you try to log on. Before you create a new user profile, try restarting your computer and logging on with your user account again.”

    There’s a true multi-user/multi-tasking OS in action… (SARCASM!!!) That’s exactly what I did several times per week: delete the crap and then have the user log in again. That other OS is a make-work project for system administrators, and they feel their jobs are threatened if a better OS comes to town…

  20. dougman says:

    Busted capacitors can be resoldered, done it a few times. Dells were problematic in this regard, just make sure the polarity is right.

    Faulty DIMMs will throw any OS for a loop. 🙂 You can use memtest86+ to check for problems.

    Your AGP card sounds like bad video drivers; you could increase your case cooling or tack a small fan onto the GPU.

    Ahhh!… I figured it out: you’re a junk collector or a pawn-shop guy.

  21. That Exploit Guy says:

    @dougman

    ‘LOL, 2TB Boot Drive, how bloated is your Windows? Why do you need a 2000GB boot drive?’

    Because:

    1) It’s cheap?
    2) It’s big?
    3) It’s faster than whatever old piece of junk you are hanging onto? (Speaking of which, I have got a bunch of old HDDs I haven’t chucked out yet. You want them?)

    Besides, do you even know what he’s got on that “boot drive”?

    My laptop alone has three virtual disk files that occupy a total of 150GB of space.

    ‘Overlooking all of what you just said: why should M$ users be subjected to corrupted profiles to begin with?’

    Maybe because they use Seagate HDDs? (Oh, and don’t you like the fact that I pulled out 2 broken ones, exactly 250GB each, from my Linux box a few months ago? I bet you can totally rescue them with your magic Linux power… Oh, wait.)

  22. That Exploit Guy says:

    @dougman

    ‘Summary: Your computer is probably not broken. It’s your operating system that is causing you problems.’

    I should have kept my old Socket A motherboard and sent it to you instead of throwing it in the bin, then. It would be helluva fun watching you wonder why your magical Linux distro wasn’t sprinkling pixie dust on those busted capacitors.

    Hang on, I think I’ve still got a DIMM with faulty bits in an archive box that is guaranteed to make whatever Linux you have spontaneously reboot at random moments!

    Don’t like that? How about an AGP video card with no fans out of the factory that overheats within an hour doing not much at all?

    Don’t like that, either? I am pretty sure I’ve still got some other similar relics here and there, pulled out of a bunch of old computers. You name it, I have probably got it. I’ll even post a picture or two of these pieces of junk here if you want. How’s that?

  23. dougman says:

    Exploitz, it hurtsz! Excuse for what?? I don’t suffer the ills of M$. My paying customers unfortunately do, and they gladly pay me to fix their problems.

    Overlooking all of what you just said: why should M$ users be subjected to corrupted profiles to begin with?? I say it’s sloppy code that unfortunately will never be fixed.

  24. dougman says:

    LOL, 2TB Boot Drive, how bloated is your Windows? Why do you need a 2000GB boot drive?

    I only have 10GB on my /boot partition and it’s at 58% filled, so that’s 6GB for the Linux OS. Windows 7 calls for 16 GB of available hard disk space (32-bit) or 20 GB (64-bit).

    With Linux I can do all the same, but at a third of the requirement in drive space. Makes you wonder what sort of bloated goodness M$ has stored under their hood.

    Running command: “sudo parted -l”

    Model: ATA ST9250410AS (scsi)
    Disk /dev/sda: 250GB
    Sector size (logical/physical): 512B/512B

    Number  Start   End     Size    Type      File system     Flags
     1      1049kB  10.1GB  10.1GB  primary   ext4            boot
     2      10.1GB  250GB   235GB   extended
     5      10.1GB  246GB   231GB   logical   ext4
     6      246GB   250GB   3999MB  logical   linux-swap(v1)

  25. That Exploit Guy says:

    @dougman

    ‘Was this helpful? Efff no..’

    With some rudimentary Google-fu, this is what I found. Of course, the fact that it’s freeware certainly doesn’t sit well with someone who wants to make money out of “stupid” users.

    Or how about making a backup of ntuser.dat? Heck, that sounds easy enough to do even in a DOS script, doesn’t it?

    Or how about System Restore for a single-user system? Don’t tell me you don’t even know how to push buttons now!

    With just a fraction of all the effort you put into tweaking those 1001 .conf files, compiling the kernel or generally mucking around with sweet nothing, you can come up with at least half a dozen solutions to fix or mitigate a user-profile corruption problem. I spent about 30 seconds coming up with each of the suggestions above, so what exactly is your excuse, dougman, for not being able to do the same?
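
    For instance, a minimal sketch of the backup idea, in Python rather than a DOS script (the paths are examples). One caveat worth noting: NTUSER.DAT is held locked while its profile is loaded, so a copy like this has to run from another account against a profile that is not logged on.

    import shutil, time
    from pathlib import Path

    def backup_profile_hive(profile_dir, backup_dir):
        src = Path(profile_dir) / "NTUSER.DAT"
        dst = Path(backup_dir) / ("NTUSER.DAT." + time.strftime("%Y%m%d-%H%M%S"))
        Path(backup_dir).mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)   # raises PermissionError if the hive is in use
        return dst

    # e.g. backup_profile_hive(r"C:\Users\someuser", r"D:\hive-backups")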

  26. dougman says:

    Exploited Dudez,

    So you don’t have computers to manage, eh? Then you must not be in the IT field either; thanks for letting us know that, and how pointless and meaningless your opinions on this blog are.

    You’re a bumbling oaf for spilling your coffee; no more drinking or eating around the computer for you! 🙂

    Stories regarding M$’s sloppy software offerings are NOT pointless; again, thank you for your insightful opinion, but truth be told, why should every facet of a story be conveyed when it’s common practice?

    Summary: Your computer is probably not broken. It’s your operating system that is causing you problems. If you are here, then you may have experienced problems with Microsoft Windows. That’s common; Microsoft Windows has a lot of problems. Remember, your computer hardware does NOT get infected; your Windows operating system does. Huge difference!

    It would be analogous to an automobile whose engine software is acting up while the mechanical part of the engine is sound.

  27. oldman says:

    The largest registry file, C:\Windows\System32\Config\SOFTWARE, in the Windows install that contains all the applications I use on a daily basis (including MSO) has a size of 59.5 MB. That’s barely a thing for a modern hard disk drive [1].

    And my R&D system, with (probably) scads more test/dev stuff than TEG runs, tops out at a whopping 90.5Mb. Again, nothing for my 2TB boot drive.

  28. That Exploit Guy says:

    @Robert Pogson

    ‘I was working at a place in the Arctic where NTUSER.DAT was regularly corrupted, like a few times per week, for a system of 100 XP machines.’

    Look, I can also say that at one point I installed XP on my PC and it just wouldn’t start any more. That is, of course, unless I tell you the part, between installing XP and starting up the machine, where I accidentally spilled some coffee into the chassis.

    See how pointless this kind of story is when there is no way to get all the facts?

  29. dougman says:

    LOL… NTUSER.DAT and corrupted profiles are still the latest innovation from M$. You would think that M$ could get this right with all the billions of dollars they are taking in; perhaps we should cap corporate profits. 🙂

    http://www.youtube.com/watch?v=07fTsF5BiSM

    http://windows.microsoft.com/is-IS/windows-vista/fix-a-corrupted-user-profile

    This was the official M$ way to solve the problem: “Oh well, just create another profile, pffttt.”

    If you tried to log on to Windows and received an error message telling you that your user profile might be corrupted, you can try to repair it. You will need to create a new profile, and then copy the files from the existing profile to the new one.

    Occasionally, Windows might not read your user profile correctly—for example, if your antivirus software is scanning your computer while you try to log on. Before you create a new user profile, try restarting your computer and logging on with your user account again.

    Was this helpful? Efff no..

  30. TEG wrote to oiaohm, “until you can prove ‘windows generates about 50 junk registry entries per hour of operation’ or ‘NTUSER.DAT that have got their b-tree to 100,000+ deep’ with evidence, please buzz off.”

    I was working at a place in the Arctic where NTUSER.DAT was regularly corrupted, like a few times per week, for a system of 100 XP machines. Very little malware got in because it was a very strongly defended network. Apart from forgotten passwords and Patch Tuesday work, it was my main workload for IT. Why should anyone need to know of the existence of NTUSER.DAT if that other OS just worked?

  31. That Exploit Guy says:

    @Robert Pogson

    No more “B-tree is a binary tree blah-blah”? It’s good to see you finally know when to quit, isn’t it?

    ‘B-trees are not perfect or there would not be so much work on improving them…’

    The largest registry file, C:\Windows\System32\Config\SOFTWARE, in the Windows install that contains all the applications I use on a daily basis (including MSO) has a size of 59.5 MB. That’s barely a thing for a modern hard disk drive [1].

    Also, you know what most modern file systems use for organising indexes, right?

    ‘Google gives me 15 million hits just for “regedit”. Why is regedit even necessary?’

    Why is vi or emacs + /etc necessary, then, if Linux is all about “install and forget” or any such nonsense?

    [1] Note that the default cluster size for NTFS is 4KB. This means that, even completely disregarding whatever caching or optimisation Windows might do for the registry files or the file system itself, the most disgustingly average 3.5″ thing (~0.20 MB/s random read) on the market can still pump out at least two hundred clusters per second at random. Think how many registry entries you can fit into a cluster, let alone two hundred of them.

  32. JR says:

    @ That Exploit Guy

    What happened to LibreOffice? It seems to have got lost!

    Anyway, thanks for the Computer Science lesson on b-trees; not that I fully understood it.

    Your comment refers ….. “Seriously, even my 2-year-old niece could come up with a better faux rebuttal than this.”

    I think that is a bit of a stretch, anyway back to business…

    My question to you ….

    ‘I take it you feel the same way about defraggers?’

    was not meant as a trick question.

    Be that as it may, I see it has elicited comment about Linux not needing to be defragged, to which you replied that it does.

    It appears there are certain factors that come into play before defragging becomes necessary on a Linux file system, as opposed to a Windows file system.

    Free space left on the disk also has an effect on fragmentation.
    Perhaps it is better to use a max of 80% of the disk.

    Anyway,

    Here are some sites that may be worth a look:

    I would appreciate your comments on them.

    http://linuxaria.com/article/does-linux-need-defrag?lang=en
    http://www.howtogeek.com/115229/htg-explains-why-linux-doesnt-need-defragmenting/
    http://www.webupd8.org/2009/10/defragmenting-linux-ext3-filesystems.html

    Your comment re SSD drives: …“The short answer is ‘no’, but if you are talking about SSDs, then defragmenting is certainly a pointless exercise.”

    This may be of relevance:

    http://www.rtcmagazine.com/articles/view/101053

    http://www.softwaretalk.info/does-low-disk-space-affect-ssd-performance.htm

  33. That Exploit Guy says:

    @oiaohm

    ‘A proper B-tree implementation has a cleaning-up of the tree after deletion. Right?? From your belief.’

    From the same book, p. 519:

    B-tree Rule 5: For any nonleaf node: (a) An element at index i is greater than all the elements in subtree number i of the node, and (b) An element at index i is less than all the elements in subtree number i + 1 of the node.

    B-tree Rule 6: Every leaf in a B-tree has the same depth.

    See what that means here? In a B-tree, the elements in each branch are predictable. This means if you pluck away the element at index i in a given node along with the subtree under it, you will not lose anything smaller than the value at index i or anything larger than the value at index i + 1. And since every leaf in a B-tree has the same depth, the tree will remain balanced, naturally.

    Also, until you can prove “windows generates about 50 junk registry entries per hour of operation” or “NTUSER.DAT that have got their b-tree to 100,000+ deep” with evidence, please buzz off.

  34. B-trees are not perfect, or there would not be so much work on improving them…

    The current issue seems to be that in a multi-user/multi-tasking system the complexity of the B-tree process causes problems. One way or another, the labour at each node is increased, and the whole cost of searching, for instance, is the sum of switching levels and the work at each node examined, not just switching levels. A linear search involves having just one level… and we know that’s not optimal. You can find cases where the b-tree also clogs up.

    Compare O(log2(n)) (nodes examined) × 2 (work at each node) with O(logm(n)) × 2N (a linear scan of a node holding N elements). Pick large enough values and you can make the right-hand expression larger than the left. This is compounded by having to do I/O or fetch from another cache: if you have larger nodes, you have more I/O per node stored elsewhere. There’s no magic to optimizing search, and B-trees are not the best answer to the problem in every case. With hundreds of millions of installations of that other OS, there is a registry somewhere that’s messed up. I read about a lot of editing of the registry. That’s real, not my imagination.
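
    A rough back-of-the-envelope check of that comparison (illustrative numbers only): total search work is depth times per-node work. With a linear scan inside each node, a wide tree loses badly as nodes grow; with a binary search inside each node, it stays competitive while touching far fewer nodes.

    import math

    n = 10_000_000                        # entries in the tree
    print("plain binary tree:", round(math.log2(n), 1), "comparisons")
    for m in (4, 64, 1024):               # elements per B-tree node
        depth = math.log(n, m)            # nodes examined, root to leaf
        linear = depth * m                # linear scan at each node
        binary = depth * math.log2(m)     # binary search at each node
        print(f"m={m:4d}: depth={depth:4.1f}  linear-in-node={linear:7.1f}  binary-in-node={binary:5.1f}")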

    Google gives me 15 million hits just for “regedit”. Why is regedit even necessary?

  35. oiaohm says:

    TEG, what do you get when you have a b-tree that is not correcting for the fact that entries have been deleted? A Windows Registry hive.

    –Face it – you simply don’t known how to implement a B-tree, or else you wouldn’t make a statement this obviously ill-informed and stupid.–

    A proper B-tree implementation has a cleaning-up of the tree after deletion. Right?? From your belief.

    Sorry to burst your bubble, TEG.
    Searching a b-tree and inserting into a b-tree have to be done a particular way for it to be a b-tree.

    A b-tree using a different delete strategy, or, like Windows hives, almost no delete strategy at all, is still a b-tree.

    B-trees fail in predictable ways. This is why you should implement a sane deletion strategy of some form: to prevent the predictable failures of a b-tree. If you don’t implement a sane delete strategy, your B-tree will fail, maybe years down the track. Yes: it is when, not if.

    You will find yourself talking about delete strategies with any b-tree. Delete strategies are like filing-cabinet management strategies: don’t have them and the complete thing fails on you.

    A lot of disc-stored b-trees tolerate being a dirty tree for performance reasons, and even some RAM-based ones tolerate being dirty for short time frames for performance reasons. If the cleanup doesn’t eventually happen, the b-tree will go badly wrong.

    B-tree delete: best O(log n), worst O(log n). Something is wrong here: Windows registry deletes take a magical almost-zero time no matter how large the registry is. They should take O(log n) if you do cleaning on delete.

    But remember, the O(log n) delete is only for the most common B-tree delete strategies. You are not required to use either of those strategies for what you are working on to be a B-tree.

    Doing cleaning on delete will at times result in having to rewrite the complete b-tree to disc again due to the structure change. So in RAM the delete costs O(log n); add disc storage and it costs way more than this.

    I was clear: particularly the delete where you don’t evaluate. To evaluate and do the alterations to clean up costs O(log n), plus a possible full rewrite to disc.

    There is a reason why there are 3 correct options here when implementing a b-tree: 1, do cleaning on delete; 2, do cleaning before delete. Both 1 and 2 take the risk of having to rewrite the tree to disc frequently.

    3, regenerate every so often and live with the temporarily bad state of the b-tree. Option 3 results in fewer disc writes and mostly better performance, or at least not a performance hit large enough to worry about.

    The problem is Microsoft takes option 4 with registry hives: simply don’t delete from the b-tree properly at all, don’t set up automatic regeneration every so often, and attempt to reuse the deleted sections of the b-tree without altering the structure. This saves even more on disc writes in the short term but leads to the runaway problem.

    TEG, by the old books the NT registry is a b-tree, since the definition treats cleaning up the tree (using sane deletion strategies) as optional. Running a b-tree on disc has issues, and attempting to work around those issues brings prices to pay.

    Can you beat the so-called –B-tree delete best O(log n)–? Yes, you can, because if you have many deleted entries next to each other in the same leaf, you don’t have to do (number of deleted entries) × O(log n) for the delete cleaning; you can get away with almost 1 × O(log n).

    TEG, I have implemented b-trees optimised for the limitations of spinning media.

    The Windows Registry hive is an attempt at an optimised b-tree that is not done right. The delete strategy of the Windows Registry hive is wrong, so it does not remain a high-performing b-tree.

    TEG, this is why, when you came here claiming “it’s a b-tree, so everything is right”, you were so wrong. Which delete strategy a b-tree uses is critical to how stable and effective it will be over the long term.

    TEG, make yourself a true bare-basic b-tree and implement a reuse-deleted insert method (this is almost exactly the per-spec b-tree, close enough to be valid) and a tag-as-deleted delete method (skipping the common proper clean-delete strategy completely), and watch how fast it screws up. Surprisingly slowly at first: watch how much CPU time you save, but it is slowly creeping to hell. This is the Windows Registry hive. The issue is that the Windows Registry hive was designed when CPUs and discs were a lot slower, and you had more programs with their own private .ini files not using the registry. So the savings back then were sane; today the same method is not sane.

    TEG, it is about time you ate some humble pie. You have your knowledge about b-trees wrong.

  36. That Exploit Guy says:

    I shouldn’t have skipped “Rule 3” and “Rule 5”, should I?

    B-tree Rule 3: the elements of each node are stored in a partially filled array, sorted from the smallest element (at index 0) to the largest element (at the final used position of the array).

    Notice from way, way, way back:

    ‘At each node, we do O(log m) work to choose branch.’

    Because each node is kept sorted during element insertion, we use binary search to choose the branch, hence O(log m) search time at each node.

    Linear search my rear end.
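
    A toy illustration of that per-node step (not taken from any particular B-tree implementation): the branch is chosen by a binary search over the node’s sorted element array, so the per-node work is O(log m), not O(m).

    import bisect

    def choose_subtree(node_keys, target):
        # node_keys is one node's sorted element array; the result is the
        # index of the subtree to descend into (one more subtree than elements).
        return bisect.bisect_right(node_keys, target)

    node = [10, 20, 30, 40]          # m = 4 elements, hence 5 subtrees
    print(choose_subtree(node, 25))  # 2: descend between elements 20 and 30
    print(choose_subtree(node, 5))   # 0: leftmost subtree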

  37. That Exploit Guy says:

    @Robert Pogson

    ‘Why, yes it is, a binary tree with some frills, like doing linear searches at each level.’

    You are out of your depth.

    From the same reference in my last comment, p. 445:

    A binary tree is a finite set of nodes. The set might be empty (no nodes, which is called the empty tree). But if the set is not empty, it follows these rules:

    1. There is one special node, called the root

    2. Each node can be associated with up to two other different nodes, called its left child and its right child. If a node c is the child of another node p, then we say that “p is c‘s parent.”

    3. Each node, except the root, has exactly one parent; the root has no parent

    4. If you start at a node and move to the node’s parent (provided there is one), and then move again to that node’s parent, and keep moving upward to each node’s parent, you will eventually reach the root.

    From the same reference, pp. 518-519:

    Every B-tree depends on a positive constant integer called MINIMUM. The purpose of the constant is to determine how many elements are held in a single node, as shown in the first two B-tree rules:

    B-tree Rule 1: The root can have as few as one element (or even no element if it also has no children); every other node has at least MINIMUM elements.

    B-tree Rule 2: The maximum number of elements in a node is twice the value of MINIMUM.

    Although MINIMUM may be as small as 1, in practice much larger values are used – perhaps several hundred or even a couple thousand.

    The Subtrees below a B-Tree Node. The number of subtrees below a node depends on how many elements are in the node. Here is the rule:

    B-tree Rule 4: The number of subtrees below a nonleaf node is always one more than the number of elements in the node.

    Whom am I supposed to trust? A random blogger with some purported 40 years of experience in IT, or a book written by a reputable academic from the University of Colorado?

    Don’t be ridiculous.

  38. TEG wrote, “B-tree is not a binary tree”.

    Why, yes it is, a binary tree with some frills, like doing linear searches at each level. If you allow multiple elements at each node, the level search may still be log(n), but in reality it’s log(n) plus a short linear search, which can get as bad as you want for large numbers of elements per node. Still, that other OS slows down. We’ve all seen it with our own eyes. It slows down for multiple reasons, and one is definitely some problem with the registry, one of M$’s worst ideas ever. The registry may have made sense in the old days with one app at a time running, but it makes no sense at all with 100 processes sucking on it. Face it: the registry is a bottleneck, one of several.

  39. That Exploit Guy says:

    @Robert Pogson

    ‘TEG and oiaohm slagged each other about binary trees. The fact is that a balanced binary tree is nicely bounded by a logarithmic scale, but an unbalanced tree can be nearly linear in its search.’

    Nah-uh! A B-tree is not a binary tree. Rather, it’s a self-balancing tree structure that you may consider an expansion of the concept, but since a B-tree does not obey the fundamental rule that each node of a binary tree may have only two children – left and right [1] – it simply cannot be classified as a binary tree.

    Maybe both you and oiaohm should hope for better luck with your Google-searching and Wikipedia-browsing next time.

    [1] Main, M., Data Structures & Other Objects Using Java, 3rd edn., Addison-Wesley, Boston, p. 445

  40. TEG and oiaohm slagged each other about binary trees. The fact is that a balanced binary tree is nicely bounded by a logarithmic scale, but an unbalanced tree can be nearly linear in its search. That is, 100K steps of “go left and check a bunch of things”, compared to a random “go right”, “go left” until the item is found a few layers in. In bad cases a binary tree can be more work to search than a linear array.

    e.g. Make a binary tree of stuff that’s already sorted somehow…
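
    A quick sketch of that degenerate case: feeding already-sorted keys into a plain, unbalanced binary search tree makes every insert go right, so the “tree” collapses into a linked list and searching it becomes linear.

    import random

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def depth(root):
        return 0 if root is None else 1 + max(depth(root.left), depth(root.right))

    keys = list(range(500))
    t_sorted = t_random = None
    for k in keys:
        t_sorted = insert(t_sorted, k)
    for k in random.sample(keys, len(keys)):
        t_random = insert(t_random, k)
    print(depth(t_sorted), depth(t_random))   # ~500 vs roughly 20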

  41. That Exploit Guy says:

    ‘Cleaning has todo with storing a b-tree to media.’

    Care to show me a code snippet as to why that is the case, then? Let me guess – you’ll just say something along the lines of:

    “My code is protected by NDA.”

    “It’s too long for a blog post.”

    “It’s somewhere. Find it yourself.”

    Then you’ll go back to spewing more garbage again. Face it – you simply don’t know how to implement a B-tree, or else you wouldn’t make a statement this obviously ill-informed and stupid.

    ‘I have seen Windows registry hives, particularly NTUSER.DAT, that have got their b-tree to 100,000+ deep. Of course the user takes ages to log in, and it magically improves when you remove the NTUSER.DAT. Each lookup is 6 times longer than a sane search.’

    You know, I have seen the sun growing 100 000 lightyears larger in diameter and spawning 50 smaller suns every hour. I have no evidence to prove that it’s happening, and no one else sees that it’s indeed happening, but it’s all true because I SAY SO.

    Do you honestly expect anyone to believe in this kind of patent nonsense?

    Buzz off.

  42. oiaohm says:

    That Exploit Guy, the Windows registry hive does not obey proper b-tree behaviour.

    Cleaning has to do with storing a b-tree to media.

    –It’s simply a way to organise data so that the search time for an entry is minimised.–

    A simple way to organise data, yes. That is like saying a filing cabinet is a simple way to organise data: if you don’t keep it clean, it is a disaster.

    Really, watch that video again and notice it is only talking about in-memory usage where you are starting clean.

    That Exploit Guy, a b-tree, once used in a media-storage sense where you are not starting it clean, with constant ongoing modifications over time, fails in predictable ways.

    B*-trees and B+-trees maintain balance better. HTree, which is not covered in that video, exists specially for the ext file systems; HTree on Linux has restricted depth, so it cannot go nuts.

    That Exploit Guy, from Wikipedia (and any good course notes will have the equivalent):
    –A B-tree is kept balanced by requiring that all leaf nodes be at the same depth. This depth will increase slowly as elements are added to the tree, but an increase in the overall depth is infrequent, and results in all leaf nodes being one more node further away from the root.–

    The issue here is that the registry is storing a b-tree on disc. Notice something important: does a b-tree’s depth naturally decrease as elements are removed? No. So any add that triggers the depth to increase is an ongoing nightmare, since the depth does not auto-reduce. The cause of the Windows registry search getting insanely slow is this error.

    The answer: a b-tree does not naturally reduce without some form of compaction or cleaning operation being performed. Cleaning of some form, basically.

    That Exploit Guy, some search-tree systems are self-cleaning; some are not.

    That Exploit Guy
    –Oh, sure! You mean Log_(100,000)x(10,000,000)x(10,000,000)x(10,000,000)x(10,000,000) or something like that?–

    No, I don’t. The issue is the b-tree’s nature, if not cleaned, of slowly but progressively increasing in depth. Some of the NT registry’s storage behaviour causes this to happen even faster.

    I have seen Windows registry hives, particularly NTUSER.DAT, that have got their b-tree to 100,000+ deep. Of course the user takes ages to log in, and it magically improves when you remove the NTUSER.DAT. Each lookup is 6 times longer than a sane search.

    To put the B-tree into filing-cabinet terms: as a section of the filing cabinet got filled, people kept going out and buying more filing cabinets, instead of waking up to the fact that the existing filing cabinets were 90 percent empty. So the filing cabinets in use ended up holding 1 file each, instead of the files being placed in the one drawer where they would all fit. This is exactly how a b-tree fails: keep adding entries to a b-tree while deleting entries, and at some point it gets lost.

    When you are saving that to disk, every time it gets lost, the cost adds on.

    Add-delete not properly managed on a b-tree sees you in complete disaster: particularly a delete where you don’t evaluate “do I really need the b-tree depth I have; can I now reduce it, yes/no?”. The stock-standard computer-science b-tree does not require you to clean up after yourself.

  43. That Exploit Guy says:

    @oiaohm

    ‘That is, if it is a properly working B-tree that has cleaning.’

    Isn’t that another prime example of oiaohm looking up Wikipedia and not understanding what’s written there?

    “Cleaning” has nothing to do with B-trees. A B-tree is what we call a “search tree” in computer science. It’s simply a way to organise data so that the search time for an entry is minimised. Here, watch this instead of jamming your foot farther into your own mouth. It was hilarious to watch you desperately trying to pretend to know this kind of basic thing the first time around, but now it’s just getting repetitive and painful to watch.

    ‘Log_(100,000)x(10,000,000)’

    Oh, sure! You mean Log_(100,000)x(10,000,000)x(10,000,000)x(10,000,000)x(10,000,000) or something like that?

    Seriously, even my 2-year-old niece could come up with a better faux rebuttal than this.

    ‘A B-tree that is being changed will need to be rewritten every so often to clean out the dead junk and allocate space, so new junk can be inserted without causing a new node to be formed and so risking increased depth.’

    An interesting thing about Australian bush flies is that they like landing on people’s faces whenever the opportunity presents itself. Australian summers are usually very hot and dry, and in order for a bush fly to survive, it must constantly look for water sources – even the perspiration on the human face – to replenish its fluids. Thus, the term “Aussie salute” is humorously coined to refer to the repetitive action of brushing flies off one’s own face.

    Oiaohm, will you accept my “Aussie salute”?

  44. dougman wrote, “I’ve never heard of anybody regularly defragmenting ext3 or ext4 file systems, and there are no tools to do so that I know of.”

    There’s usually no advantage whatsoever except in corner cases, like nearly running out of space and then deleting some large files, or a bunch of files being written and rewritten at almost but not quite the same size. This results in files scattered in segments all over the drive, requiring extra seeks for reading or writing. GNU has lots of tools to fix this, usually depending on a backup system:

    • tar czf backup_file_somewhere_else.tgz some_directory_mounted_on_some_file_system; rm -fr some_directory_mounted_on_some_file_system/*; tar xzf backup_file_somewhere_else.tgz
    • e2defrag exists for ext2 file-systems, which can be made from ext3 by removing the journal.
    • cp -r somedir anotherdir_on_a_new_fs_elsewhere

    I’ve been using GNU/Linux for more than a decade and have only defragged a few times, mostly as part of moving a file-system to a new hard drive or storage.

  45. oiaohm says:

    ext3 is careful about the placement of new entries to prevent tree growth. ext4 adds an online defrag that works in the background to clean up any b-tree issues that do happen.

    That Exploit Guy, the Linux ext filesystems do something Windows NTFS does not, which explains their resistance.

    It is why this works: http://www.vleu.net/shake/

    When you write a file, even back to an existing file, if it is fragmented and there is a location to move it to where it will not be fragmented, ext takes the chance to do so. NTFS just puts the data in the old fragmented storage. So once a file fragments on NTFS it stays fragmented until a defrag tool finds it, but on the ext file systems a fragmented file might cease to be fragmented through natural operations.

    NTFS does not perform re-evaluation when existing files have new contents written; this is why NTFS fragments faster. ext4 adds an online defrag for those fragmented files that don’t get operations performed on them to trigger a natural cleanup by re-evaluation. shake is another way to trigger the re-evaluation for the older ext2 and ext3.

    ok-defrag is gone, by the way, That Exploit Guy; it was proven to hinder, not help. Why take a file system offline when you can run shake in the background and get basically the same result?

    Minor errors cause major problems.

  46. oiaohm says:

    That Exploit Guy
    –Really? Let’s say you are absolutely correct in this regard, why not go and try solving log_{10}(100,000,000) on a calculator? I have a nagging feeling that even counting the zeros in the number is a bit above your level.–

    That is, if it is a properly working B-tree that has cleaning. The formula you are using for a b-tree does not match the MS registry implementation.

    Your M is way smaller than what a Windows registry hive slowly but surely turns itself into.

    The Windows registry ends up, due to its defect, looking like Log_(100,000)x(10,000,000) or worse. At this point it would be faster to do a raw search from one end of the registry to the other than to use the b-tree.

    It fragments from the writing and delete process: the Windows registry adds more and more leaves to the tree, trying to fill internal free space in the hive (a big, bad mistake for lookup performance). Some of it is the b-tree; some is trying to keep the registry file small.

    Worst case, in the Windows registry you can end up with as many layers as currently active entries.

    A b-tree out of order is breaking down into smaller and smaller numbers of entries per leaf, heading in the direction of 1 entry per leaf, and from there in the direction of all entries in the top node going down an insanely deep tree to get to one entry without anything else in that tree.

    http://en.wikipedia.org/wiki/B-tree – it’s stated in Wikipedia and every other book about the problems with b-trees: –Insertions and deletions cause trouble–

    A B-tree that is being changed will need to be rewritten every so often to clean out the dead junk and allocate space, so new junk can be inserted without causing a new node to be formed and so risking increased depth.

    The problem with the Windows registry is that it attempts to avoid expanding the file. The result is that it fills in spaces that should be left for future writes with small leaves. The Windows registry basically has the same problem as the FAT file system.

    A b-tree is a great idea unless you plan to be deleting from and writing to it like a madman.

    The B+ tree was invented specifically to solve the issues with the b-tree.

    That Exploit Guy, basically, if you knew your b(+/-) trees you would know that when something says b-tree, it is best for read-only storage; if something says b+tree, it is for read/write storage.

    The Windows registry fails exactly how you would expect a b-tree to fail. If you are using a b-tree for read/write, you should have a compaction and tidy-up process designed in. The space-saving scheme that a b-tree normally does not contain, which MS added to the registry, makes matters worse for search performance by adding more leaves.

    So Windows has a bug: the lack of a registry-remake program by default.

    The Windows slowdown does partly trace back to the registry.

  47. That Exploit Guy says:

    @dougman

    Also, if you want to do an SJVN-style copy-and-paste job, at least try and have the courtesy to link to the source.

  48. That Exploit Guy says:

    @dougman

    ‘Once the latest exploits on Windows are patched, the Exploit Guys release the latest 0-day, and you, as the software lessee, suffer waiting 30+ days till that new exploit is patched; then the cycle repeats itself over and over. Windows 8 and METROFAIL will be more of the same.’

    The last emergency update for Windows was released on a Sunday, not a “Patch Tuesday”. Check your facts.

    It’s kind of cute of you to try to sling mud in my direction, though.

    ‘Oldman thinks people do not value their time, which is incorrect. Given a choice of a <6min install time vs. a 90+min install time, people would choose LibreOffice hands down, rather than having to wait for the latter to finish its bloated installation process.’

    As many people here have already asked, what exactly is the point of installing and uninstalling the same piece of software over and over? This appears to be nothing more than a problem based on a hypothetical scenario manufactured purely to make the problem exist in the first place, and I don’t see any effort from you to make it relevant to any actual usage, either.

    ‘Fragmentation is/has/was never been a problem in Linux’

    So Mr. MIT wasted his graduate years learning nothing of substance, assuming, of course, that he actually went to MIT at all.

    To save me explaining the obvious, may I recommend you read this instead?

    Of course, there is also this forum post where one Ubuntu developer flatly points out that Linux filesystems not needing defragmentation is “one of the most common myths of Linux”, and that, I think, is quite worth taking note of.

    ‘It makes you wonder why Microsoft doesn’t make NTFS work the same way’

    Traditionally, NTFS reserves about 12.5% of the total volume size for the MFT alone by default. If you think whoever designed it didn’t think about fragmentation, then think again.

    If I am not mistaken, Windows employs, or has at least at one point employed, various methods to avoid fragmentation. Note that some of these methods, which include reserving free space at the end of each file, are also used extensively by the ext3 and ext4 filesystems. Finding the best spot to write files? NTFS does that, too! The problem is that each of these methods has its own drawbacks by design. Think about it: what if the file grows beyond the free space reserved for it, or there is no room to put a new segment of the file close to the existing ones? It’s not really hard to see why such methods are over-hyped features rather than genuinely effective ways to minimise the impact of fragmentation in the long run.
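    A deliberately tiny Python toy shows the drawback (my own sketch with made-up sizes, not how NTFS actually lays data out): one file gets a few blocks reserved after its tail, a neighbour moves in right behind the reservation, and further growth has to jump elsewhere, producing a second extent.

    RESERVE = 8                                  # free blocks held after a file's tail

    def place(disk, n, start=0):
        # return the first run of n free blocks at or after `start`
        run = 0
        for i in range(start, len(disk)):
            run = run + 1 if disk[i] is None else 0
            if run == n:
                return i - n + 1
        raise RuntimeError("disk full")

    disk = [None] * 64                           # a 64-block toy disk
    a = place(disk, 4 + RESERVE)                 # file A: 4 blocks plus reserved tail
    for i in range(4):
        disk[a + i] = "A"
    b = place(disk, 16, a + 4 + RESERVE)         # file B lands right after the reservation
    for i in range(16):
        disk[b + i] = "B"
    for _ in range(12):                          # A grows by 12 blocks: 8 fit the reserve,
        disk[place(disk, 1, a)] = "A"            # the last 4 have to jump past B
    blocks = [i for i, owner in enumerate(disk) if owner == "A"]
    extents = 1 + sum(1 for x, y in zip(blocks, blocks[1:]) if y != x + 1)
    print("file A blocks:", blocks)              # 0-11, then 28-31
    print("file A extents:", extents)            # 2: fragmented despite the reserve

    A bigger reserve only postpones the same outcome while wasting space in the meantime, which is the trade-off I am talking about.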

  49. kozmcrae says:

    Chris Weig wrote:

    “The real question is: why does the Cult of FLOSS vehemently defend FLOSS against proprietary software solely on the basis of it being FLOSS, and not on the basis of quality and features?”

    It shouldn’t have to count as a feature that an OS can hold its own against malware, but in Microsoft’s case it would be a feature, a missing feature. That’s why I initially switched over to FLOSS: Windows XP couldn’t keep from getting owned even with the help of several anti-malware packages. Switch to GNU/Linux, problem solved. I’m still primarily a FLOSS user. Microsoft hasn’t given me any compelling reason to use their software. Not even close.

    So now that I’ve said that I personally will not leave FLOSS for Microsoft, it’s time for you to attack me personally. Isn’t that how the Cult of Microsoft conduct themselves in such a case as this? Yes, I believe it is.

  50. dougman says:

    I get updates on my Linux box daily, pulled from a mirror hosted at a local college. That’s a far better model than having to wait once a month for Patch Tuesday.

    Once the latest exploits on Windows are patched, the Exploit Guys release the latest 0-day, and you, the software lessee, suffer waiting 30+ days until that new exploit is patched; then the cycle repeats over and over. Windows 8 and METROFAIL will be more of the same.

    Oldman thinks people do not value their time, which is incorrect. Given a choice of a <6 min install vs. a 90+ min install, people would choose LibreOffice hands down rather than waiting for the latter to finish its bloated installation process.

    Regarding updating LibreOffice, here is the release schedule: http://wiki.documentfoundation.org/ReleasePlan

    Typing a few commands in a terminal once a month to update the package is nothing; the entire process takes 5-6 minutes at most, NOT an hour and a half like the current M$ offerings.

    Now that I think about it, M$’s monthly update cycle is ungodly long too! Why is that? No wonder they want you to update your PC while you’re shutting down. To add fuel to the fire, it oftentimes requires multiple reboots for everything to install!

    Fragmentation has never been a problem in Linux; Windows, however, will always have that problem. Yes, it is true that solid-state drives make up for that failing of Windows, but the question remains: why does it suffer fragmentation to begin with?

    Windows 7 now automatically runs a defrag on its NTFS file system, unlike Windows XP, which never did. This is a good idea on Microsoft’s part, rather than letting things pile up and forcing the user to defrag while waiting minutes or even hours as it churns away.

    This got me thinking back to when I read about other file systems, most notably the ext3 and ext4 filesystems on GNU/Linux (the standard choices now), which never need defragmenting. Yes, that’s correct: they do not need to be defragmented. In fact, I’ve never heard of anybody regularly defragmenting ext3 or ext4 file systems, and there are no tools to do so that I know of. The Linux System Administrator Guide states:

    “Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can’t be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system.”

    It makes you wonder why Microsoft doesn’t make NTFS work the same way, allocating blocks close together in the first place and thereby eliminating the need to defragment, like ext3 and ext4 do. It’s a mystery, but at least Microsoft has put a band-aid on the fragmentation issue of NTFS. Now, if we can only get Microsoft to implement a built-in solution for cleaning up the mess of temp files that Windows and applications leave all over the place ….
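    Here is a toy allocator simulation that makes the difference visible. It is my own Python sketch with made-up parameters; the real ext3/ext4 and NTFS allocators are far more sophisticated. It compares “grab the lowest-numbered free block” with “grab the free block nearest the file’s last block” under random file churn, then counts how many extents (runs of consecutive blocks) the surviving files occupy.

    import random

    DISK = 4000                                # blocks on the toy disk

    def simulate(policy, steps=4000, seed=42):
        rng = random.Random(seed)
        free = set(range(DISK))
        files = {}                             # file id -> blocks in append order
        next_id = 0
        for _ in range(steps):
            r = rng.random()
            if files and r < 0.3:              # delete a whole file
                free.update(files.pop(rng.choice(list(files))))
            elif files and r < 0.7 and free:   # append one block to a file
                fid = rng.choice(list(files))
                files[fid].append(policy(free, files[fid][-1]))
            elif free:                         # create a new one-block file
                files[next_id] = [policy(free, rng.randrange(DISK))]
                next_id += 1
        extents = sum(1 + sum(1 for x, y in zip(b, b[1:]) if y != x + 1)
                      for b in files.values())
        return extents / max(len(files), 1)

    def lowest_free(free, hint):               # ignores where the file already lives
        b = min(free)
        free.remove(b)
        return b

    def nearest_free(free, hint):              # allocate close to the file's tail
        b = min(free, key=lambda x: abs(x - hint))
        free.remove(b)
        return b

    print("avg extents per file, lowest-free:  %.1f" % simulate(lowest_free))
    print("avg extents per file, nearest-free: %.1f" % simulate(nearest_free))

    The nearest-free policy keeps each file in a handful of extents, while the lowest-free policy shreds files all over the disk. The caveat is that on a nearly full disk there may be no nearby free block left to take, and no allocation policy can conjure one.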

    NTFS has been revised several times, and while some improvements have been made, no attempt has been made to make it better at housekeeping. Maybe that’s low on Microsoft’s priority list. The ext file systems on GNU/Linux, by contrast, let users upgrade the file system in place and get the benefits directly. I did find it amusing that one of the “features” added to NTFS around the time of Windows Vista was symbolic links. Unix/Linux has had those for decades, so it’s almost as if Microsoft is playing catch-up with NTFS.

  51. That Exploit Guy says:

    @JR

    ‘I take it you feel the same way about defraggers.?’

    The short answer is “no”, but if you are talking about SSDs, then defragmenting is certainly a pointless exercise.

  52. That Exploit Guy says:

    @oiaohm

    ‘That is fine in theory, but the Windows registry is not a perfectly working b-tree. It does not clean itself of dead entries effectively.

    ‘The slowdown takes time to show up because Windows generates about 50 junk registry entries per hour of operation.’

    Really? Let’s say you are absolutely correct in this regard: why not go and try solving log_{10}(100,000,000) on a calculator? At 50 junk entries an hour, it would take two million hours of uptime, over two centuries, to pile up that many entries. I have a nagging feeling that even counting the zeros in the number is a bit above your level.

  53. JR says:

    @ Chris Weig

    Just as a matter of interest, how many FLOSS believers do you know?

  54. Chris Weig says:

    Chris Weig was mocking dougman because he could install and uninstall LibreOffice in a matter of minutes.

    No, I was mocking him because he — like almost any other FLOSS believer I know — paints a picture in which uninstalling and reinstalling a software seems to be a daily activity.

    BTW, Windows Update (or rather Microsoft Update) also updates Microsoft Office. No need to uninstall and reinstall for minor changes and bugfixes.

  55. JR says:

    @ That Exploit Guy

    Chris Weig was mocking dougman because he could install and uninstall LibreOffice in a matter of minutes.

    Contrary to what you believe I was not trying to teach him the benefits or as you so rightly pointed out the non-benefits of using a registry cleaner.

    I take it you feel the same way about defraggers?

  56. oiaohm says:

    Mongrol, it’s general operation. Windows is constantly altering registry keys, and the way it alters them is smart on one hand and stupid on the other.

    http://technet.microsoft.com/en-us/library/cc750583.aspx

    Index values, last-update timestamps, last-lookup times… the list of background-service crud that makes up the roughly 50 entries per hour goes on, Mongrol; there is a reason for each of them. Each of these eventually has to be deleted. Each delete can leave a blank space in the registry, causing it to bloat, and each can knock the b-tree in the file further out of order. When does Windows compact the file to clean this up? Basically never.

    The solution is not registry cleaners; it’s to break out an ERD-style repair disk, back up the registry and restore it, thereby triggering compaction.

    Paul Robichaux is a third party, That Exploit Guy, and one who does not understand the issues with the registry. Wrong person to quote.

  57. Mongrol says:

    “The slowdown takes time to show up because Windows generates about 50 junk registry entries per hour of operation.”

    http://www.youtube.com/watch?v=rmLAj9iIfQk

  58. oiaohm says:

    That Exploit Guy, note that the 50 per hour is without really using the machine, so any user interaction makes matters worse. This is why the slowdown is so random.

  59. oiaohm says:

    That Exploit Guy
    –A b-tree has a worst-case search cost of log_{m}(n), where m is the minimum branching factor of the tree and n is the number of entries in it[1]. To put it in layman’s terms, a search in a b-tree touches only a tiny fraction of the entries present, not the entirety of them.–
    That is fine in theory, but the Windows registry is not a perfectly working b-tree. It does not clean itself of dead entries effectively.

    Yes, you can notice a performance difference after dumping the complete Windows registry and rebuilding the hives, mostly because that cleans the dead entries out of the registry.

    –at least hundreds of thousands of unused entries will be needed in order to make any noticeable impact on performance.–

    The slowdown takes time to show up because Windows generates about 50 junk registry entries per hour of operation.

  60. That Exploit Guy says:

    @JR

    ‘I don’t know how technically minded you are, but if you feel so inclined, take Revo Uninstaller and uninstall Office 2007. When you get to the part where it searches for registry entries related to Office 2007 to remove, let it search and see how many entries it finds.’

    According to p. 24 of this book, Windows Registry data are organised in a b-tree. A b-tree has a worst-case search cost of log_{m}(n), where m is the minimum branching factor of the tree and n is the number of entries in it[1]. To put it in layman’s terms, a search in a b-tree touches only a tiny fraction of the entries present, not the entirety of them.

    Now, apply this principle to left-over entries in the Windows Registry and you will notice that, due to the nature of the logarithm (log(a) << a for large a), at least hundreds of thousands of unused entries will be needed in order to make any noticeable impact on performance. All that stuff people selling registry cleaners tell you about improving the “performance” of your Windows install with their products? It’s mostly just a sales pitch to convince you to buy things you don’t need.
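    To see the logarithm at work, here is a quick Python check, using the worst case of a plain binary search as a stand-in for a b-tree lookup (the registry’s real fan-out only makes these numbers smaller):

    import math

    # worst-case comparisons for a balanced search structure of n entries
    for n in (1000, 100000, 100000000):
        print("%11d entries -> about %d comparisons"
              % (n, math.ceil(math.log2(n))))

    Going from a hundred thousand entries to a hundred million, a thousand-fold increase, moves the worst case from about 17 comparisons to about 27. A few thousand stray entries are lost in the noise.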

    Now, of course, you have probably encountered scary-looking warnings along the lines of “missing COM object library” from some registry cleaners. Is that something to worry about? Well, as someone who has used Norton Utilities before (and stopped using it years ago), I can tell you that those warnings usually just point to stray entries left over in HKEY_CLASSES_ROOT, referring to files that were removed along with the applications using them. When the registry cleaner scans the registry and finds these stray entries, there is usually no way for it to tell whether the files in question are still used by any application at all. That is why the modus operandi of a registry cleaner is to give you a big, flashy warning regardless of how the stray entries ended up there in the first place. In most cases, it’s nothing to worry about. Some people go as far as calling registry cleaners outright snake oil for that reason. I digress, but even I stopped using such things years ago, so there’s that.
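    For the curious, the check behind those warnings is roughly the following. This is a minimal, read-only sketch using Python’s standard winreg module (Windows-only; it deletes nothing, and the file-existence test is deliberately crude, which is also why real cleaners err on the side of big scary warnings):

    import os
    import winreg

    stray = total = 0
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID") as clsid:
        i = 0
        while True:
            try:
                name = winreg.EnumKey(clsid, i)
            except OSError:                        # no more subkeys
                break
            i += 1
            try:
                with winreg.OpenKey(clsid, name + r"\InprocServer32") as k:
                    path, _ = winreg.QueryValueEx(k, "")   # default value holds the DLL path
            except OSError:
                continue                           # not an in-proc COM server
            if not isinstance(path, str):
                continue
            total += 1
            path = os.path.expandvars(path).strip('"')
            if path and not os.path.isfile(path):
                stray += 1                         # what a cleaner would flag
    print(stray, "of", total, "in-proc COM entries point at missing files")

    Note that it cannot tell a genuinely orphaned entry from a DLL that is simply found through the system search path, and neither can the cleaner, which is the whole point.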

    [1] Often, log_{m}(n) is simplified to log(n), since changing the base of a logarithm only changes the result by a constant factor. See here.

  61. Chris Weig wrote, “The real question is: why does the Cult of FLOSS vehemently defend FLOSS against proprietary software solely on the basis of it being FLOSS, and not on the basis of quality and features?”

    I don’t defend FLOSS on that basis. The world can make great software including FLOSS. FLOSS is widely used and its openness adds to the qualities one should like about it. You can look under the hood.

    vlc gets over 100 million downloads. OpenOffice.org and LibreOffice get the same order of magnitude of use. Linux is used on more than a billion computers. That’s a lot of people choosing FLOSS, not having it forced on them as non-FREE software needs/wants/chooses to inflict itself on retail shelves. FLOSS doesn’t need defending. It’s the right way to do IT.

  62. JR says:

    @ Chris Weig

    With reference to your comment:

    “OMG, you’re a genius, dougman! You can follow written instructions! I had the hardest time believing you could actually read.

    I suggest that you print out the instructions on a T-shirt and wear it proudly. Much better, too, than the usual FLOSS nerd stuff, like: “I’m a virgin and want to get laid.”

    I don’t know how technically minded you are, but if you feel so inclined, take Revo Uninstaller and uninstall Office 2007. When you get to the part where it searches for registry entries related to Office 2007 to remove, let it search and see how many entries it finds.

    Then while you are waiting for revo to find all the registry entries you can spend some time on your next comment.

    Perhaps then it may have more substance, your comment that is.

  63. Chris Weig says:

    So why is it that you denigrate so badly the software that you use so freely? You use both kinds of software, proprietary and FLOSS, but only piss on FLOSS. Why is that?

    Wrong question. The real question is: why does the Cult of FLOSS vehemently defend FLOSS against proprietary software solely on the basis of it being FLOSS, and not on the basis of quality and features?

    FLOSS for you is always “good enough”, even if it is worse than a closed-source equivalent. That always goes together nicely with your motto, “You don’t need that”, trotted out whenever you’re pressed on why FLOSS can’t do what closed-source software X can do.

  64. Chris Weig says:

    To summarize, why does M$ Office take so long to install and uninstall?

    To summarize, is it part of your daily routine to uninstall and reinstall LibreOffice? Wait, I guess it is.

  65. oldman wrote, “Caring in an age of 4TB hard drives is equally nonsense.”

    Since we left 1-40 GB hard drives behind, storage is not critical, but it should still not be wasted. My point was that it is silly to pay for anything that will not be used. Only a fool buys more than he needs in order to waste more food, energy, whatever. I am a fool for the gigantic house in which I live, but “the little woman” chose it and I don’t win arguments with her.

    When I was a youngster, I had an allowance. In those days, 25 cents a week could buy several Cokes™, admissions to movies, etc. One time I decided I would splurge on a milkshake at the neighbourhood pharmacy, which had a fast-food joint built in. I ordered a large strawberry milkshake. Never was so much suffering dealt to a youngster by his own choice. I could not finish half of it. I learned my lesson that day: obtain what you need, not more. Too much of anything is not good for you. The same goes for software requiring excessive time for training, or time wasted finding features buried in the bloat, or extra effort keeping up with the bugs.

  66. kozmcrae says:

    @ldman wrote:

    “Caring in an age of 4TB hard drives is equally nonsense.”

    @ldman says he chooses his applications based on quality alone. He’s a liar. Here he is pushing bloated Microsoft Office by saying hard drives are big enough, as if hard drive space were all that mattered.

    So why is it that you denigrate so badly the software that you use so freely? You use both kinds of software, proprietary and FLOSS, but only piss on FLOSS. Why is that?

    Let me tell you all what @ldman has to say about that. “Shithead you will get your answer when I am ready.”

    All it takes is one sentence. But @ldman is not man enough to tell us why he pretends to be software agnostic.

  67. oldman says:

    “To summarize, why does M$ Office take so long to install and uninstall?”

    Who cares, fool!

    “A measure of the bloat: M$’s pro version of the office suite is a full CD (~700MB). LibreOffice is about 200MB. I doubt many people use all the bloat in LO, let alone a tenth of what is in M$’s office suite. Paying M$ for stuff we don’t need is nonsense.”

    Caring in an age of 4TB hard drives is equally nonsense.

  68. dougman wrote, “why does M$ Office take so long to install and uninstall?”

    It’s probably a number of factors: huge bloat X encryption to secure the cash cow X layers of security to protect the cash cow, with very little in it for end users. I have long counted the value of M$’s stuff as negative (i.e., less than $0) because of all the crap that goes along with it.

    A measure of the bloat: M$’s pro version of the office suite is a full CD (~700MB). LibreOffice is about 200MB. I doubt many people use all the bloat in LO, let alone a tenth of what is in M$’s office suite. Paying M$ for stuff we don’t need is nonsense.

  69. dougman says:

    Standard modus operandi of a troll: Wiggies deflects from the questions posted and decides to go off on a tangent discussing instructions, reading, T-shirts and virginity.

    To summarize, why does M$ Office take so long to install and uninstall?

  70. Chris Weig says:

    OMG, you’re a genius, dougman! You can follow written instructions! I had the hardest time believing you could actually read.

    I suggest that you print out the instructions on a T-shirt and wear it proudly. Much better, too, than the usual FLOSS nerd stuff, like: “I’m a virgin and want to get laid.”

  71. oiaohm says:

    dougman, it’s not all bloat. A LibreOffice install on Windows is also slower because, on Windows, all of the following are slower than on Linux: memory allocation, process creation and disk operations.

    Computers are not magic.

  72. dougman says:

    Only took 2 mins to download, 2 mins to remove and reinstall the updated version.

    axel http://download.documentfoundation.org/libreoffice/stable/3.6.2/deb/x86/LibO_3.6.2_Linux_x86_install-deb_en-US.tar.gz

    Downloaded 154.1 megabytes in 2:21 seconds. (1604.40 KB/s)

    tar zxf LibO_3.6.2_Linux_x86_install-deb_en-US.tar.gz

    Open a terminal then type sudo apt-get remove libreoffice*

    Building dependency tree
    Reading state information… Done
    Note, selecting libreoffice-bundled for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-impress for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-stdlibs for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-math for regex ‘libreoffice*’
    Note, selecting libreoffice-desktop-integration for regex ‘libreoffice*’
    Note, selecting libreoffice-debian-menus instead of libreoffice-desktop-integration
    Note, selecting libreoffice-debian-menus for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-dict-en for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-dict-es for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-en-us for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-dict-fr for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-base for regex ‘libreoffice*’
    Note, selecting libreoffice-unbundled for regex ‘libreoffice*’
    Note, selecting libreoffice-debian-menus instead of libreoffice-unbundled
    Note, selecting libreoffice3.6-calc for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-writer for regex ‘libreoffice*’
    Note, selecting libreoffice-java-common for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-draw for regex ‘libreoffice*’
    Note, selecting libreoffice for regex ‘libreoffice*’
    Note, selecting libreoffice3.6-ure for regex ‘libreoffice*’
    Note, selecting libreoffice3.6 for regex ‘libreoffice*’
    The following packages were automatically installed and are no longer required:
    pinentry-gtk2 gnupg-agent
    Use ‘apt-get autoremove’ to remove them.
    The following packages will be REMOVED:
    libobasis3.6-base libobasis3.6-binfilter libobasis3.6-calc libobasis3.6-core01 libobasis3.6-core02 libobasis3.6-core03 libobasis3.6-core04 libobasis3.6-core05 libobasis3.6-core06 libobasis3.6-core07
    libobasis3.6-draw libobasis3.6-en-us libobasis3.6-en-us-base libobasis3.6-en-us-calc libobasis3.6-en-us-math libobasis3.6-en-us-res libobasis3.6-en-us-writer
    libobasis3.6-extension-beanshell-script-provider libobasis3.6-extension-javascript-script-provider libobasis3.6-extension-mediawiki-publisher libobasis3.6-extension-nlpsolver
    libobasis3.6-extension-pdf-import libobasis3.6-extension-presentation-minimizer libobasis3.6-extension-presenter-screen libobasis3.6-extension-python-script-provider libobasis3.6-extension-report-builder
    libobasis3.6-gnome-integration libobasis3.6-graphicfilter libobasis3.6-images libobasis3.6-impress libobasis3.6-javafilter libobasis3.6-kde-integration libobasis3.6-math libobasis3.6-ogltrans
    libobasis3.6-onlineupdate libobasis3.6-ooofonts libobasis3.6-ooolinguistic libobasis3.6-postgresql-sdbc libobasis3.6-pyuno libobasis3.6-writer libobasis3.6-xsltfilter libreoffice-debian-menus
    libreoffice3.6 libreoffice3.6-base libreoffice3.6-calc libreoffice3.6-dict-en libreoffice3.6-dict-es libreoffice3.6-dict-fr libreoffice3.6-draw libreoffice3.6-en-us libreoffice3.6-impress
    libreoffice3.6-math libreoffice3.6-stdlibs libreoffice3.6-ure libreoffice3.6-writer
    0 upgraded, 0 newly installed, 55 to remove and 1 not upgraded.
    After this operation, 499MB disk space will be freed.
    Do you want to continue [Y/n]? Y

    120 seconds later, run sudo dpkg -i *.deb from a terminal in each of the two unpacked folders and you’re done. How hard is that?

    Why does M$ Office take like 45 min to uninstall, then an hour to reinstall? BLOAT

    Why does M$ Office seem to fail midway, leaving you to wait while it uninstalls its own failed install? Over-priced junk.

  73. dougman says:

    I tell businesses that they can save $200 per computer by using LibreOffice. For a 10-seat office that’s $2000 that could be spent elsewhere, say on a nice Xerox printer instead.

    Honestly, why should anyone shell out money to create text files and spreadsheets?

    One of the cool things about LibreOffice is that you can export to PDF without paying extra for Adobe software, and the suite can also export to MediaWiki markup, so you can share technical data with your co-workers.
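    The PDF export even scripts nicely. A minimal sketch, assuming soffice from a recent LibreOffice is on your PATH; memo.odt is just a made-up input file:

    import subprocess

    # LibreOffice's headless converter writes memo.pdf into ./out
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", "out", "memo.odt"],
        check=True,        # raise if the conversion fails
    )

    Handy for batch-converting a whole folder of documents before sending them out.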
