Google Tries To Kill EXT* File-systems For ChromeOS

Someone over at Google has decided to drop support for EXT* file-systems in favour of M$’s stuff… “Chromium OS is for consumer devices which should not need support for mounting external ext4 storage. In principle, we should drop unnecessary features. There was a case that an unnecessary feature was used for a security exploit.” This has ticked off GNU/Linux users, including developers, who want to move files between other GNU/Linux operating systems and ChromeOS.

The only justifications seem to be:

  • EXT* support is unnecessary and increases security surface without justification – That’s hard to swallow, since EXT* has had very few problems that I know of and is definitely widely used by many people, including developers. If consumers aren’t aware of EXT*, this move won’t impress them much. Forcing developers to change file-systems seems like the unnecessary thing here. I’ve had to add FAT support to my custom kernels because folks wanted to use USB devices from that other OS and “partners”. I found that annoying and unnecessary. If I had a Chromebook, one of the first things I would change would be the limitation to FAT, either by tweaking ChromeOS or replacing it. Would replacing ChromeOS make Google happier?
  • EXT* support is extra work – Come on. The kernel boys and girls do the maintenance, and the file-system is backwards-compatible with previous versions… What extra work? Oh, you want to rename the mount-point? Give me a break. Who does that? Give them a “file-system busy” message.

Does Google want to give a fair segment of users a reason not to use ChromeOS? What were they thinking? I’ve read endless criticism of ChromeOS on TV, in this blog, and elsewhere on the web, and no one has complained that it supported EXT* file-systems, not even M$… If you want to increase the “attack-surface”, drop the file-system… 8-(

See Issue 315401 – chromium – Drop support for ext2/3/4 from Files.app / cros-disks.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

63 Responses to Google Tries To Kill EXT* File-systems For ChromeOS

  1. DrLoser says:

    Time to put up, That Exploit Guy: give me the link to the AT90 wear-leveling code. It was not in the PDF you quoted and it’s not in any of the extra parts. You say you are not buying it; prove yourself.

    Well, this should be interesting. oiaohm vs TEG on a technical issue involving a relevant and accurate cite. I can’t begin to imagine which of the two knows more about the subject.

    My money’s on TEG.

    Oh, and just to join in the fun, I suspect that the link will involve the six characters “AVR116.” I know nothing at all about this subject, so I’m definitely going out on a limb here.

    I expect to be proved wrong … but not by oiaohm.

  2. oiaohm says:

    That Exploit Guy, it does not change the fact that both FAT and NTFS are mostly incompatible with modern flash media. The SD standard specifies 4 MB allocation units, and in fact that suits neither FAT nor NTFS. But when you get to general USB flash drives there is no official standard limiting how stupid a device can be.

    In fact I don’t need to cite the survey data, because you have been doing such a great job of quoting manufacturers without reading spec sheets. The 8051 is the dominant choice, and there are tons of references for USB flash drives using it.

    Absolutely, I am so totally buying that… not.
    Time to put up, That Exploit Guy: give me the link to the AT90 wear-leveling code. It was not in the PDF you quoted and it’s not in any of the extra parts. You say you are not buying it; prove yourself.

  3. That Exploit Guy says:

    That Exploit Guy, I should have been more exact: the AT90 reference code lacks wear-leveling code.

    Absolutely, I am so totally buying that… not.

    You really do think people are this stupid, don’t you?

    By the way, AT89USB refers to the programmer for the AT89C5131A-L.

    Absolutely. They call the chip “flying magical unicorn” as well.

    See? I can BS my way through a subject as well.

    They produce survey reports

    Which you never cite, because they don’t actually exist.

  4. oiaohm says:

    That Exploit Guy, tell me which brand of USB key on the shelf contains an AT90USB.

    I could give you a huge list of keys containing ARM, custom RISC or 8051 controllers. I don’t know of one that contains an AT90USB.

  5. oiaohm says:

    That Exploit Guy, I should have been more exact: the AT90 reference code lacks wear-leveling code. It’s not usable in production; it’s only toy code. If you go and look at the AT89C5131A-L reference code, it contains wear-leveling code and in fact a few different wiring design options for hooking it up to the flash. Prototype mucking around is of not much interest when you want to get a product out the door. The AT90 code is fairly much equal to nothing. A flash drive without wear leveling is not going to last well; it might not even last six months.

    I should have said the AT90 comes with no production-usable reference code for a flash drive.

    By the way, AT89USB refers to the programmer for the AT89C5131A-L. Some people refer to chips by the tool they will need to modify them, since there is only one chip you would be using in a flash drive.

    The AT89C5131A-L is fairly much ready to go, using Atmel’s information, for making a flash drive; not the best performance, but it works well enough. The AT90 stuff is not ready.

    There are people out there who disassemble hardware and release what is found in devices. They produce survey reports on what they find.

  6. That Exploit Guy says:

    That Exploit Guy, no, you did not find the AT89USB. It runs at 48 MHz. It is full USB 2.0. You find the AT89USB in some cheap USB 2.0 keys.

    There is no such thing as an “AT89USB” from Atmel. You are welcome to prove otherwise. I am not interested in hunting for Santa Claus.

    Only the AT89C5131A-L is found in USB keys. Sorry, you have never seen any survey data on the hardware contained in USB keys, have you?

    So is this going to be like last time, when you picked a random chip and then pulled some BS about it from thin air, which you now call “survey data”?

    Is your brand going to be on the final product, yes or no? Cheap flash drives are brandless; as long as it appears to work for six months it’s fine. Using the AT89C5131A-L for a flash controller is not that hard if you are not worried about quality.

    The problem with using a generalised microcontroller has nothing to do with “quality”. It’s all about the costs involved in getting a product out of the door.

    The AT90 has no reference code for creating a USB flash drive.

    It has. You would have noticed it all comes with the kit had you not been too busy making up stories instead of reading up.

  7. oiaohm says:

    That Exploit Guy, no, you did not find the AT89USB. It runs at 48 MHz. It is full USB 2.0. You find the AT89USB in some cheap USB 2.0 keys.

    Only the AT89C5131A-L is found in USB keys. Sorry, you have never seen any survey data on the hardware contained in USB keys, have you?

    Of course, for mass production, you want something that gives you the least trouble to use
    Is your brand going to be on the final product, yes or no? Cheap flash drives are brandless; as long as it appears to work for six months it’s fine. Using the AT89C5131A-L for a flash controller is not that hard if you are not worried about quality.

    Silicon Motion’s older ones are 8051 microcontrollers as well; the current breed from them are 32-bit RISC, not ARM or 8051. So some of their controllers are a better-bred option, like Samsung’s. Open up the spec sheet of the SM3257EN from Silicon Motion, That Exploit Guy, and notice it is an 8051. The SM3257EN is the most common Silicon Motion USB key chip out there. Everything from Silicon Motion for flash drives that is not USB 3.0 is an 8051.

    Now go down to the shops and look at the USB keys on the shelves and note how few are USB 3.0. Remember, normally only the USB 3.0 keys have the better controllers in them; anything with USB 2.0 branded on it will normally be an 8051.

    Something generalized can equal cheap. That Exploit Guy, what is going on here is very simple: OEMs/ODMs making flash drives want to be able to order controllers from whoever, so a de facto standard has formed around the 8051. All the lower-priced controllers are fairly much 8051.

    As part of the AT89STK there is reference firmware source for creating a USB flash drive, so they don’t even have to code the thing from scratch.

    The AT90 has no reference code for creating a USB flash drive.

    http://www.phison.com/English/ProductViewEmbedded.asp Please look up the first one I mentioned. That Exploit Guy, they sell themselves as the world’s best; they are also the least secure and use the worst processor selection possible.

    Basically, once you get past skin-deep on USB keys, the microprocessor types used inside are quite limited: ARM, something 100 percent vendor-custom with only vendor compilers, or the 8051. The 8051 is the majority in a large way. If you want a choice of vendors, your firmware will be on 8051 or ARM.

  8. That Exploit Guy says:

    That Exploit Guy, “CPU production license fee”: sorry, there is such a thing, particularly when you are doing mass production. You buy a design and pay a fee to produce it in whatever foundry is most suitable for your assembly.

    Bawhawhawhaw… That’s one of the most ridiculous fairy tales I have ever heard.

    What, do the USB stick manufacturers drill their own crude oil and extract the precursors for the plastic as well? XD

    I misread; I thought you were quoting the correct Atmel CPU for a flash drive: AT89C-series something, yes, sometimes called AT89USB. So a +1 difference… The AT90USB is never found in USB keys because it is too slow.

    The AT89C* family of USB MCUs are “full-speed”, meaning that they are practically all USB 1.1 chips. So is the AT90USB family.

    Of course, for mass production, you want something that gives you the least trouble to use, like an actual flash memory controller rather than something generalised like AT90USB* or AT89C513*.

    Really, That Exploit Guy, you were picking on my dyslexia, having something close enough that I would miss the error and copy it.

    I know people who are dyslexic. Seriously, stop insulting them.

    Also, you are simply basing your lies on what I am feeding you. I can practically give you one bit of irrelevant crap and you will spin miles of yarn out of it without batting an eye, and whenever you get caught out lying, you just abandon the old lies and come up with new ones, as if people were stupid enough to forget what you wrote several posts before. A classic example of this is your wall of gibberish on NDISWAN.

    I don’t know how gullible you think other people are, but I know for sure you are the kind of lowlife dipstick I hate the most. So, in all sincerity, screw you.

  9. oiaohm says:

    That Exploit Guy, “CPU production license fee”: sorry, there is such a thing, particularly when you are doing mass production. You buy a design and pay a fee to produce it in whatever foundry is most suitable for your assembly.

    I misread; I thought you were quoting the correct Atmel CPU for a flash drive: AT89C-series something, yes, sometimes called AT89USB. So a +1 difference. Yes, that difference is all the difference between an AVR part number and an 8051 part number.

    The AT90USB is never found in USB keys because it is too slow. Really, That Exploit Guy, you were picking on my dyslexia, having something close enough that I would miss the error and copy it.

    My error with the CPU should have been simple to spot as a misreading if you had worked with USB keys.

    https://github.com/adamcaudill/Psychson has a copy of a reversed 8051 firmware. Yes, 8051 firmwares for USB flash drives exist, and they are the majority.

  10. That Exploit Guy says:

    That Exploit Guy, the 8051 is chosen for cheap USB devices because they don’t have to pay a CPU production license fee.

    The AT90USB* family uses the 8-bit AVR architecture developed by Atmel. The most well known use of the AVR architecture is the Arduino microcontroller board.

    There is no such thing as a “CPU production license fee”.

    There is also no such thing as “the 8051 microcontroller code for USB keys”.

    I don’t know if you realise, but this constant lying through your teeth and taking other people for fools is simply insulting to those you are talking to. Either stop telling me fibs and start substantiating your argument with sources that are relevant and meaningful, or don’t bother replying to me.

  11. oiaohm says:

    That Exploit Guy, the 8051 is chosen for cheap USB devices because they don’t have to pay a CPU production license fee. The Atmel AT90USB equals having to pay a license for the accelerated functions, and even so it’s an 8051 with all the 8051 issues.

    https://github.com/adamcaudill/Psychson
    I can even give you the brand of the controller: Phison, made/designed in Japan. Phison controllers are on over 60% of all USB keys being produced. Phison has just ramped the clock speed of the 8051 through the roof.

    The Atmel AT90USB supports firmware signing; the Phison does not. Even so, the Atmel AT90USB has only 8 KB of RAM for tracking operational state. So even USB flash drives based on the AT90USB have huge allocation units, because they don’t have enough RAM to do anything else; but at least the AT90USB-based ones are secure from firmware replacement.

    That Exploit Guy, the issue here is that the AT90USB is an 8051 as well, and the 8051 cannot address enough RAM, so they cannot manage what a lot of us would call suitable allocation units for FAT or NTFS on large USB flash drives. Wear leveling requires a table listing in what order the allocation units on the media appear to the interface. Yes, you need to be able to look this up fast, and it has to be in RAM.
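
    To put rough numbers on that, here is a toy Python sketch; the 128 GB capacity and the 4-byte table entry are assumed figures, not from any spec sheet:

        # Toy model: the whole unit-mapping table must fit in controller RAM.
        capacity   = 128 * 1024**3             # assume a 128 GB drive
        ram_bytes  = 8 * 1024                  # the 8 KB of controller RAM above
        entry_size = 4                         # assume 4 bytes per table entry
        max_units  = ram_bytes // entry_size   # 2048 units trackable at once
        print(capacity // max_units // 1024**2)  # 64 (MB per allocation unit)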

    For some reason the 8051 has become the most common controller for flash drives, even though it is really not suitable.

    The AT90USB takes almost the same firmware as a Phison, so it has basically all the same issues, bar the fact that it has signed firmware and so cannot be hijacked to do whatever.

    The really good USB keys for having nice small allocation sizes on large keys (OK, if you call 1 or 2 MB small) are Samsung’s, and they are 32-bit ARM microcontrollers: more RAM to track allocation blocks, so they are able to manage more of them. Even better, Samsung embeds a capacitor, so when you pull the drive out it can finish off its writes before it dies. The problem is that the majority of USB keys are not of this quality, and even the high-quality keys have the wrong allocation unit size for FAT or NTFS. Yes, Samsung is making their own file system because of the issue. Most USB keys are built as cheaply as possible.

    When you get down to controllers that are 8051-based, from any brand, in USB keys, you are looking at 80 percent of them. Only about one in five USB keys produced has something near decent. I am really worried about what the allocation size will look like when we get to 1 TB keys made by the cheap guys. Kingston Digital is using Samsung’s solution of an ARM chip. If the cheap USB flash drive makers stretch the 8051-based chips to 1 TB keys, that is something like a 200+ MB allocation size to disappear in one blink; even ext2/3/4 will not be able to tolerate that. As I said, ext2/3/4 coping is pure luck, and luck does not work forever.

    The physical operation of current-day USB flash drives is not compatible with FAT or NTFS, and there is no sign that it ever will be again. This is why a stop to using FAT or NTFS on USB flash drives has to be considered. The operation of USB flash drives is not getting better as they get larger.

    This is a true round peg in a square hole, and it manages to work, so no one has noticed that it is wrong.

    The current version of the dosFsLib system library in VxWorks does not work as I described, because it was fixed. Also, what I was describing were internal operations the library performed without informing applications built on it. Devices, on the other hand, since they don’t have to state what version of VxWorks is inside, use older versions, some with nasty security issues. The problem is that the 20-sector offset will keep turning up, because a lot of the devices with it have an operational life of about 12 years. Yes, the cheap-shortcut problem: hey, we have this firmware lying on the shelf, it does the job, let’s minorly tweak it so it runs on this board, done.

  12. That Exploit Guy says:

    By the way, if you read the 8051 microcontroller code for USB keys, allocation groups are called blocks or page size.

    The 8051 controller? How oddly specific!

    One would assume that, for a USB mass storage device, a USB-specific microcontroller (e.g. Atmel AT90USB*) with firmware decoding SCSI commands would be a more suitable choice for the purpose. Oh, well, OiaohmKnowsBetter(TM).

    I did not say the reason was sane. That is a number the VxWorks developers pulled out of the air because it suited their needs. Control metadata is where VxWorks has mounted the drive and so on. You also find some VxWorks devices only accept one FAT cluster size.

    VxWorks FAT support is provided by the dosFsLib system library, and it works nothing like you describe.

    Also, “20 blocks”? Are you sure that’s the number you want to go with? 😉

  13. oiaohm says:

    Allocation groups are the problem, That Exploit Guy.

    https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups/Kernel/Projects/FlashCardSurvey
    Good link. By the way, if you read the 8051 microcontroller code for USB keys, allocation groups are called blocks or page size. Sorry, page size and blocks are not universally defined. Flash segments under that code are called units. But I will stick to Linaro’s terms, since that will most likely make it simpler for you; using block and page would have made it simpler for me. That is why the Linaro page starts off with definitions before talking about anything.

    The write size unit is an abstraction that the OS sees. The write size unit is the smallest unit the controller on a flash drive will accept; the data goes into the controller’s RAM to be queued up for writing to flash, not directly onto the flash. This write size unit has no alignment with the number of blocks the controller is writing or erasing on flash at a time. Even the page size is an abstraction created by the controller. So both the write size and the page size in that link are nothing more than virtual data.

    That Exploit Guy, do read the bottom of that link: that is 2013, and things have changed. But even so, there is an allocation assignment of 24 MB. There are 16 MB units there.

    Remember, an erase command is issued on a complete real allocation group at a time. Now, the controller can be caching the copy of an allocation unit in controller RAM. This is the magical disappearing allocation unit. It happens when flash drives get fairly full and there are not many free allocation units. Yes, the spec says they should copy on write. The answer is the controllers don’t; they expect they will be safely removed. Failure to safely remove can equal a complete allocation unit gone, because it was just erased while waiting for the data from the controller’s RAM to be rewritten. It will be somewhere you were just writing to, like updating the FAT tables.

    Note also the fun that devices will report a 4 MB allocation unit size but really be 1.5 or 8 MB. That makes aligning file systems correctly hard to impossible.

    That Exploit Guy, FAT and NTFS are not designed to cope with a complete allocation unit on a flash drive pulling the disappearing act. Ext2 and ext3 can tolerate it if put on the drive properly.

    Writing to random addresses on the medium will result in read-modify-write operation being performed on a whole allocation group, because each write access requires a garbage collection. For instance on a typical SDHC card with 4 MB erase blocks, a workload writing 4 KB file system blocks to completely random locations results in a write amplification factor of 1024.
    This bit here is from your link. That Exploit Guy, allocation units are handled as solid blocks: modify one section of an allocation unit and the complete allocation unit has to be rewritten. So you could say the real sector size of USB flash drives is the allocation group.
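
    Put as arithmetic, using the sizes from that quote, the write amplification factor is just the ratio of the two sizes:

        # Worst case from the quote: random 4 KB writes onto 4 MB groups.
        erase_block = 4 * 1024**2          # 4 MB allocation group
        write_size  = 4 * 1024             # 4 KB file system block
        print(erase_block // write_size)   # 1024, the amplification factor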

    The ideal is a file system that can queue up and align its writes with the allocation units on a flash drive. This is why ext2, ext3 and ext4 kinda work with some option tweaking but are not perfect. exFAT, in fact, is designed to queue up and align its writes with the allocation units of a USB drive. Yes, there are other file systems designed for this as well.

    The reality is that we have to accept at some point that flash drives are nothing like the media of old. Their unit size is just huge. The 4 KB issue on spinning media is really nothing next to the supersized allocation units USB flash drives have.

    Yes, to write to a USB flash drive properly you will be writing whole allocation units at a time wherever possible, to avoid any write amplification. Not writing as per allocation units, you take a speed hit. FAT cannot set its cluster sizes to match USB flash drive allocation units.

    There is alignment and there is alignment.

    Also noted is the FAT optimization in the USB flash drive controller. This means put anything other than FAT on the device and you are in for hell. Repartition it differently and you are in for hell as well. Again this makes your life more complex: is this an optimized drive or is it not? Does a USB flash drive report how its controller is optimizing? The answer is no, it does not. Do you feel lucky?

    If the USB flash drive does not contain FAT optimization, using FAT will in fact take more overhead and wear it faster than using ext2 or ext3. Yes, the problem here is that just because a device is formatted FAT32 does not mean it is FAT-optimized.

    Yes, your alignments are different on a FAT-optimized drive versus a non-FAT-optimized one. Why is this possible? Everything the OS sends to a USB flash drive is fairly much virtual, and it is up to the USB flash drive to decide how it handles it.
    That’s just patently silly. Are you saying that you can’t have your “control meta data at the beginning” (whatever that means) unless you have the FAT reserved sectors number set to the oddly specific value of “20” and not, say, “21” or “19”?
    I did not say the reason was sane. That is a number the VxWorks developers pulled out of the air because it suited their needs. Control metadata is where VxWorks has mounted the drive and so on. You also find some VxWorks devices only accept one FAT cluster size. Devices built copying this model are stuff-what-the-USB-drive-or-user-wants; the device will only accept what it wants. Thankfully only one OS vendor did this, and everyone else having the same idea copied that size. Killing off the use of FAT would at least ensure the devices with this problem get exterminated.

  14. That Exploit Guy says:

    Users do notice this.

    That’s not good enough. In the same way you make your argument, I can assert that people notice unicorns, fairies or other things that obviously don’t exist. As the famous magician James Randi puts it, “You can’t prove a negative.”

    Obviously, I am not going to sit next to a chimney every Christmas Eve for 65 years just to see if Santa Claus doesn’t show up. Of course Santa Claus won’t show up, but the exercise itself won’t help prove logically that Santa Claus doesn’t exist. Therefore, it is up to you, the claimant, to show me that Santa Claus actually exists, or, in this case, that these broken FAT implementations in set-top boxes that upset users exist.

    Of course, this is to put aside the fact that none of this “broken FAT implementations” BS has anything to do with anything.

    The reason is absolutely horrible. This allows you to do a 1-to-1 copy into RAM and shove your control metadata at the beginning.

    That’s just patently silly. Are you saying that you can’t have your “control meta data at the beginning” (whatever that means) unless you have the FAT reserved sectors number set to the oddly specific value of “20” and not, say, “21” or “19”?

    Also, if your copying method involves writing extra “control meta data” to the copy, then it’s not one-to-one.

    Thank you, VxWorks particular editions, then the other embedded OSes that copied it. One of VxWorks’ security exploits came from a person writing data into the first 20 blocks of FAT and VxWorks failing to blank it.

    Absolute BS. I’ll let you spot the obvious flaw in this lie. (It’s actually pretty typical of Peter Dolding.)

    The disappearing-block effect of flash means you do require a full file-system structure scan.

    “Disappearing block”? Do you mean “state gone”? 😉

    This is the difference between old magnetic media and flash: on magnetic you don’t have a complete block go poof unless you have things like RAID controllers in the mix.

    “Blocks” and “pages” are all logical constructs of their respective abstraction layers. The underlying storage medium may go bad (as in the case of bad sectors), but blocks never spontaneously “go poof” or disappear.

    That Exploit Guy, read the link. The 4 MB is not made up.

    That’s an “allocation group”, not a page. See here for the explanation.

  15. oiaohm says:

    That Exploit Guy, people get upset with set-top boxes and other things that they put FAT drives into that don’t work, even though the spec sheet says they should. A lot turn out to be hard-coded. The manual tells you to format the key inside the device; then it works. To the user this works, but they have not noticed that its layout has now been changed in bad ways, no longer correctly aligned, if it was in the first place.

    their factory-FAT-formatted thumb drives weren’t working as expected?
    Users do notice this. Ask how many had to have their set-top box format the device before it worked, and likewise for other devices. They write it off because that is what the manual tells them is required. You will find this is a lot more common than you think. FAT is not as unified as people would hope.

    “for no particular reason, hard-coding the BPB_RsvdSecCnt value to “20”.”
    There is a particular reason. The reason is absolutely horrible. This allows you to do a 1-to-1 copy into RAM and shove your control metadata at the beginning. Thank you, VxWorks particular editions, then the other embedded OSes that copied it. One of VxWorks’ security exploits came from a person writing data into the first 20 blocks of FAT and VxWorks failing to blank it.

    Without the journal, the only other option is to scan through the file system for inconsistencies.
    This is a problem on flash. The disappearing-block effect of flash means you do require a full file-system structure scan. The journal gain on flash should be quite small after power loss. This is the difference between old magnetic media and flash: on magnetic you don’t have a complete block go poof unless you have things like RAID controllers in the mix. A journal helps you a lot on magnetic media; it does not help you that much on flash. In fact, the reason ext2 is used in a lot of devices is that without a journal you reduce the number of writes, extending the flash chip’s life span at the price of a slower start-up after power loss. There are copy-on-write modes in ext2. Ext2 can operate in a very resilient way.

    A page size of 4 MB or even 2 MB for a USB thumb drive is almost unheard of. Seriously, if you want a number that justifies your previous statement of the file allocation table “being in one block”, why not try 8 MB? We all know it’s all made up by you anyway.
    http://lwn.net/Articles/428584/
    That Exploit Guy, read the link. The 4 MB is not made up. The reason the blocks are so huge is that the microcontroller inside your flash drive has to store in its RAM where all the blocks are. 8 MB is not unheard of; in fact on some of the super-new ones 16 MB is not unheard of. This is the problem: if you don’t know this basic fact, there is no way you know what a safely formatted flash drive looks like. 2 MB, 4 MB and 8 MB are the common block sizes for the wear-leveling systems on multi-gigabyte flash drives. 128 KB to 256 KB is true for flash media under 1 GB. The issue is that we are using the same class of microcontroller for a 128 GB flash drive as was used for a 32 MB one. It is not built for that, so to fit the number of blocks in the amount of RAM the controller has, the block size has just been made huge. The block size flash drives use is not expected to get smaller. The FAT idea of primary and backup next to each other just does not work on these new drives.

    http://www.zdnet.com/blog/storage/usb-drive-life-fact-or-fiction/849
    The most troubling is the silent error where the write operation reports success, but a read finds that the data was not written. Trust, but verify.
    This is from your wear-leveling quote, That Exploit Guy. This is the behavior the file system on flash has to cope with. Worse is the fact that you can do a read before power-off that says the data is written when it is only in the flash controller’s RAM; it is not in flash yet. You have all the bad behaviors of a RAID controller with magnetic media when you are using a USB flash drive, but with no option of a power-backed controller to prevent the losses.

    That Exploit Guy, when you know how modern-day flash drives work, the block sizes make using NTFS and FAT insane. They are just not compatible.

  16. That Exploit Guy says:

    TEG tried pinning a quote on me that is not mine.

    Has it ever occurred to you that I simply wasn’t making a reply to you?

    DrLoser, TEG’s point about flash drives misses the most critical point: how the file system puts its data on the drive in relation to the zones of failure.

    Again, typical Peter Dolding word salad.

    Yes, it says 16-bit. In reality it’s 15-bit, because some implementations out there are signed. This is also almost never set right. Worse, some implementations of FAT don’t check reserved sectors and presume it’s always 20.

    At this point, I don’t even know if I should give a damn about replying. More often than not, the conversation just ends up being other people trying to debunk one fictional issue after another about Windows, DOS, FAT, NTFS… that no one except you has ever observed. You see, if anything you said were indeed true, wouldn’t people pick up on the fact that their factory-FAT-formatted thumb drives weren’t working as expected?

    I nevertheless admire your gall to pick up a source, see that it says “usually 0x20”, and then make up some BS about “some implementations”, for no particular reason, hard-coding the BPB_RsvdSecCnt value to “20”.

    “Ummm… Yeah, some implementations hard-code the value to 20. Trust me, I have see them, erm… Like, totally.”

    All the information in the ext superblock for FAT is at the start.

    “ext superblock for FAT”? You mean as opposed to a “JFS superblock for UFS”? 😉

    NTFS has a fairly strongly zoned MFT.

    Are you seriously trying to compare an ext2 superblock to NTFS MFT? They don’t even serve the same purpose, for cryin’ out loud!

    The shotgun method that ext uses makes rebuilding of the drive almost always possible.

    No. The backup copies of the superblock guarantee only that you have a better chance of recovering the inode for the root directory. Everything else is a variable.

    The main reason ext3 has a journal is not to fix power-loss issues but to avoid the huge amount of time it takes to track down the shotgun-scattered backups of the file system data.

    No. The purpose of a file system journal is to allow the file system to redo operations committed before an interruption (e.g. power loss, system crash) and bring itself back to a consistent state. Without the journal, the only other option is to scan through the file system for inconsistencies.

    If an ext2 has lost all state and cannot rebuild, you are talking quite major damage to the device, not just an odd segment reporting blank.

    A snapshot of the file system superblock is hardly a representation of a file system “state”. That would be like me taking a strand of your hair and pretending it’s you.

  17. That Exploit Guy says:

    I am not talking about an inconsistent state.

    No one was talking to you.

    I am talking about state gone.

    “State gone”? Is that some kind of mathematics you have invented on your own?

    The reason state-gone happens with FAT and NTFS is that they are not designed for devices where the real segment size is 2 or 4 MB, which is common in modern-day flash drives.

    A page size of 4 MB or even 2 MB for a USB thumb drive is almost unheard of. Seriously, if you want a number that justifies your previous statement of the file allocation table “being in one block”, why not try 8 MB? We all know it’s all made up by you anyway.

    Heck, ext2 was not designed for this nightmare, but by pure luck its design avoids having all the superblock data and the rest of the file system data in one segment.

    The superblock is called “superblock” because it is supposed to be the one block dedicated for storing certain important file system metadata. In ext2, redundant copies of the superblock are usually present, but only the one in block group 0 is active.

    So: bad sector alignment of this information.

    Again, sector alignment is all about the page size. Have you been paying attention to anything I have said?

    FAT does not include options to place these critical bits wherever you like within reason.

    File systems usually have fixed locations reserved for important metadata. Even ext2 is no exception. Any redundant copies of the metadata are useful only when you are running a diagnostic tool and, say, restore a damaged superblock. They are usually not active during the course of normal operation.

    If one lot of file system data is intact you can go from an inconsistent to a consistent state.

    Yes, with a diagnostic tool, or journal replaying, or anything that identifies and weeds out the inconsistencies.

    OK, you will lose some data, but not all.

    That’s exactly what we try to avoid, isn’t it?

    I am not talking about keeping on working on an inconsistent file system.

    Then, pray tell, just what on earth are you talking about?

    People throw them in the bin because they think they are buggered, when really there is nothing wrong other than pulling the flash drive out at the wrong time causing it to lose the first 2 to 4 MB, with the file system in use not designed to cope with this.

    Again with this fictional “state gone” issue…

    If you were unfortunate enough to have a thumb drive so broken that it caused half of the file allocation table to disappear, then why would you still want to hold on to the drive?

    On top of that, even cheap, off-brand thumb drives have some form of wear-levelling in place, and if it were indeed true that FAT (which records almost nothing except cluster allocation) incurred such a tremendous amount of wear on a thumb drive, then thumb drives would not have been useful to anyone to begin with.

  18. oiaohm says:

    Ext file systems also include a write block size, so ext file systems can in fact achieve alignment with modern-day flash. The write block size in ext was to prevent fragmentation. This is what is so funny about ext file systems: all the core features needed to support flash were implemented, by some fluke, on spinning media. The odds of something like this happening are long. Even funnier, most of these features caused major issues but by pure stubbornness were not reversed.

    The issue with ext2 after loss of power is how long an ext2 partition takes to check itself, not damage as such.

    The ext file system core is kinda insane but at the same time so sane it’s not funny.
    The primary copy of the key file system data in ext is always at the start of the partition. A backup copy of the key file system data is somewhere in the partition. The backup data is written before the primary data, and backup data is not overwritten over the prior backup copy. So to confirm that an ext2 partition is not damaged, it scans the complete partition, finding all the backup data, including old copies. You could find a few thousand old copies before you find the current one, and then you are only sure you have the current one once you have scanned the complete drive.

    The ext3 journal helps: hey, we have a list of operations performed, so finding where the backup superblock and related information live is quicker. Ext maintains this do-not-overwrite-unless-obsolete rule for all the file-location data.

    Ext file systems behave very differently from a lot of file systems. The magical advantage of the ext backup method is that the odds of both copies of the ext core file system data being in the same flash segment are between low and impossible. Low if the write-block size is not set.
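
    For what it’s worth, the usual ext backup superblock locations can be computed; a Python sketch, assuming the ext defaults (sparse_super enabled, one bitmap block per group, 4 KB blocks; the 4 GB partition is just an example):

        block_size       = 4096
        blocks_per_group = 8 * block_size               # ext default: 32768
        total_blocks     = 4 * 1024**3 // block_size    # a 4 GB partition
        n_groups = -(-total_blocks // blocks_per_group) # ceiling division

        groups = {1}                     # sparse_super keeps backups in
        for base in (3, 5, 7):           # groups 1, 3^n, 5^n and 7^n
            g = base
            while g < n_groups:
                groups.add(g)
                g *= base
        print([g * blocks_per_group for g in sorted(groups)])
        # [32768, 98304, 163840, 229376, 294912, 819200, 884736]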

    If you follow the Linaro formatting recommendations, your final USB drive will not look anything like how it came from the store.
    Recommendations:
    1) Only partition data and the MBR in the first flash segment.
    2) Align the start of the partition to the second flash segment.
    3) Set the ext write block size correctly to match the flash segment size.

    The result of this is between 2 and 8 MB of unpartitioned space at the start of the USB drive. Why is this important? If you can find the start of the first partition, you can rebuild the partition table. If the start of the partition is gone, you can rebuild partitions like ext2/3/4 by knowing where they start and end, then use this information to scan the space for backup copies of the missing information. The start and end just happen to be in the partition table. This is a securely formatted USB key, as safe from power-failure effects as possible. Power-failure effects on USB flash drives are one of two things: 1) one complete block somewhere on it is gone bye-bye, or 2) it is fried. We don’t need to bother about case 2, since recovery is normally not possible.
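
    The alignment arithmetic itself is trivial; a sketch assuming a 4 MB flash segment and 512-byte sectors:

        segment = 4 * 1024**2        # assume a 4 MB flash segment
        sector  = 512
        print(segment // sector)     # 8192: start the first partition here

        def aligned(start_sector):   # does a start sit on a segment boundary?
            return (start_sector * sector) % segment == 0

        print(aligned(8192), aligned(63))   # True False (63: old DOS default)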

    Just one flash segment does not sound like much, but since it is 2 to 8 MB, it is a lot of data. So however you format a USB flash drive, you have to be able to cope with this to be safe from power failure. A correctly formatted USB key has redundancy. A USB key fresh from the shop will not have this, because it is in fact incorrectly aligned: the default alignment is max capacity, not max stability.

    https://code.google.com/p/chromium/issues/detail?id=315401#c128
    Google is putting ext2/3/4 back into ChromeOS. Removing FAT and NTFS would be more valid; from a technical point of view, exFAT support has more justification to stay.
    Why will USB vendors be unlikely to ship USB keys formatted as per the data-security recommendations? Reason: they would sell fewer USB keys, because fewer would fail.

    There are other Linux file systems that could be better. Technically it was possible to hack around the removal of ext2/3/4: place another small OS between ChromeOS and the device and have the device report the drive as a Media Transfer Protocol (MTP) device, or, if the flash controller happens to be a hackable one, make it report MTP instead of a USB flash drive when it sees a Chromebook. Really, if Google wanted to save some patent payments, they could drop FAT, NTFS and exFAT and require buying a conversion dongle to use those partition types.

  19. oiaohm says:

    exFAT allows you to align the primary and backup data independently.

    http://www.sans.org/reading-room/whitepapers/forensics/reverse-engineering-microsoft-exfat-file-system-33274

    Basically, some file systems are designed for USB flash drives (for example exFAT), some by luck are compatible with USB flash drives (ext), and a lot are in fact incompatible. Unfortunately, the incompatible ones include FAT and NTFS.

    exFAT can have cluster sizes set exactly to the USB flash drive’s segment size. Yes, exFAT supports 2 MB and 4 MB clusters. exFAT can achieve perfect alignment where NTFS and FAT32 just cannot: you cannot set NTFS or FAT32 cluster sizes to match. So it has far lower failure rates than NTFS or FAT on USB flash drives. The problem here is lack of support due to Microsoft patents.
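
    A quick check of the commonly cited maximum cluster sizes against a 4 MB segment (the figures below are the usual ones for the era, not from any one spec sheet):

        erase_block = 4 * 1024**2            # a 4 MB flash segment
        maxima = {"FAT32": 64 * 1024,        # commonly cited max cluster sizes
                  "NTFS":  64 * 1024,
                  "exFAT": 32 * 1024**2}
        for fs, cluster in maxima.items():
            print(fs, cluster >= erase_block)   # only exFAT prints True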

    The ext file systems get close enough not to suffer the worst issues. In reality the ext file system is not designed for modern flash; it is just luck that it is OK. If Google were dropping FAT, NTFS and ext, and saying exFAT plus some other flash-compatible file system only, it would have been annoying but justifiable.

  20. oiaohm says:

    If you force the system to continually work with a file system that is known to be inconsistent, of course you are bound to lose data and end up with even more problems
    I am not talking about an inconsistent state. I am talking about state gone. The reason state-gone happens with FAT and NTFS is that they are not designed for devices where the real segment size is 2 or 4 MB, which is common in modern-day flash drives. Heck, ext2 was not designed for this nightmare, but by pure luck its design avoids having all the superblock data and the rest of the file system data in one segment.

    If you have a FAT32 flash drive with only the base directory, all the file system information can be in the first 4 MB, in fact in the first 2 MB, of the disc. So: bad sector alignment of this information. FAT does not include options to place these critical bits wherever you like within reason.

    If one lot of file system data is intact you can go from an inconsistent to a consistent state. OK, you will lose some data, but not all. I am not talking about keeping on working on an inconsistent file system.

    In fact, state-gone is great for the makers of flash drives. People throw them in the bin because they think they are buggered, when really there is nothing wrong other than pulling the flash drive out at the wrong time causing it to lose the first 2 to 4 MB, with the file system in use not designed to cope with this.

    TEG tried pinning a quote on me that is not mine.

    DrLoser, TEG’s point about flash drives misses the most critical point: how the file system puts its data on the drive in relation to the zones of failure.

    https://www.pjrc.com/tech/8051/ide/fat32.html The issue is the number-of-reserved-sectors option. Yes, it says 16-bit. In reality it’s 15-bit, because some implementations out there are signed. This is also almost never set right. Worse, some implementations of FAT don’t check reserved sectors and presume it’s always 20. As soon as you make a FAT32-formatted drive that is safe, you have lost compatibility. NTFS has equal problems.
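
    For anyone who wants to check a key, the reserved-sector count is easy to read straight off the boot sector. A Python sketch; “/dev/sdb1” is a placeholder path, and per the FAT spec the field is an unsigned 16-bit value at byte offset 14:

        import struct

        with open("/dev/sdb1", "rb") as dev:           # placeholder device path
            boot = dev.read(512)
        rsvd = struct.unpack_from("<H", boot, 14)[0]   # BPB_RsvdSecCnt
        print("BPB_RsvdSecCnt:", rsvd)                 # 0x20 (32) is typical for FAT32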

    http://www.cgsecurity.org/wiki/Advanced_Find_ext2_ext3_Backup_SuperBlock
    The reality is that the ext design basically shotgun-spreads all the core data about the file system, to allow recovery, all over the disk. The ext design is not the most space-efficient: multiple copies of the superblock allow you to locate and rebuild the damaged file system. All the information in the ext superblock for FAT is at the start. NTFS has a fairly strongly zoned MFT, so it is not as well spread. If you are not spread, you need correct alignment.

    The shotgun method that ext uses makes rebuilding of the drive almost always possible. This is how Linux ran for years on ext2 without a journal. The main reason ext3 has a journal is not to fix power-loss issues but to avoid the huge amount of time it takes to track down the shotgun-scattered backups of the file system data. Ext2 is way stronger than FAT; it is just the design difference.

    If an ext2 has lost all state and cannot rebuild, you are talking quite major damage to the device, not just an odd segment reporting blank. Basically it is good luck that ext2 and later file systems happen to be compatible with how USB flash drives are blocked. Any guess what the performance problem with the shotgun method is? On spinning media, the shotgun method equals having to read and write all over the place, which is highly ineffective. Flash drives don’t have an issue with writing all over the place.

    Yes, it is one of the funny oddities: the ext file system design flaw on spinning media is what makes it flash-safe. NTFS and FAT are so optimized for spinning media that bad issues happen on flash.

  21. DrLoser wrote, “it’s well-known that paper tape doesn’t suffer from bit-rot. Paper tape was good enough for the old days. I have no idea why this efficacious method of backup has been neglected for so long.”

    The reason paper-tape does not suffer much bit-rot is the size of the bits. It is the lowest-density storage around, perhaps on par with paper and pencil. If bits do shred, one can always examine the tape frame by frame and reconstruct the data. This just won’t do in the modern age: too bulky, too much labour involved. Paper-tape readers have advanced, but they are still bulkier than a smartphone, I would guess. It just doesn’t do to have the peripheral device larger than the computer, especially when the performance is so low. No, we feel lucky these days and store stuff at higher density, with some redundancy at some level to save us.

  22. DrLoser says:

    I have seen USB hard drive enclosures used for backup in middle-sized schools. It wasn’t the most reliable means but it was easily the most affordable. There’s just no good reason for preventing a Linux kernel from using native file-system formats.

    Presumably, as always, the “good reason” is simple economics. And, as so often, you have drifted a very, very long way from the use case that prompted your OP.

    It was all about Chromium OS. Remember that? The whole thinking behind this decision appears to be based on the amount of data to be backed up. Not “an infinite number of files.” Not “the data of a large organisation/school system.” Not even “268 million files.”

    Now, I’m going to try to follow Deaf Spy’s advice and resort to sanity here: for starters, unless other information is cited, I believe we are talking about “Chrome OS” as in Chromebooks here. I do not believe we are talking about “Chromium OS,” which seems to be the term for “open Chrome OS.” Not sure that it makes a difference, but it’s nice to start a discussion by defining terms.

    I am further going to stretch the upper limit of the hardware under discussion to what many might consider an absurd degree: let’s apply the use case to a top-end Chromebook Pixel. Not cheap at £1300, but a technological marvel.

    And it boasts a whopping 32GB of solid-state drive. So, let’s assume an equally unlikely high target of 160 million files: that’s 200 bytes per file. And it’s easily within the limitations of FAT, even though this is obviously an insane scenario.
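
    (For the pedants, the arithmetic, assuming decimal units:

        files = 160_000_000
        ssd   = 32 * 10**9       # 32 GB, decimal
        print(ssd / files)       # 200.0 bytes per file

    An insane scenario, as I said.)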

    This would actually be a completely foolish way to back up a Chromebook, and never forget that one of the touted advantages of a Chromebook (touted on this very site) is that it doesn’t much need local storage in the first place.

    For starters, you could avail yourself of the three years of 1 TB cloud storage that comes free with this particular device. Presumably less on cheaper models, but still.

    Or you could avail yourself of a NAS. I’m sure a NAS could manage a 32GB load.

    Or you could, perhaps, lash up an NFS link. I’m not so sure about this one, but I’m sure you could Google for it.

    Or you could set up Amanda, rsync, TimeVault or Clonezilla… or dozens of others. The advantages of FOSS really shine when you start to think about this particular use case.

    Or you could even rely on a trusty old-timey tool like FTP.

    As a final resort, it’s well-known that paper tape doesn’t suffer from bit-rot. Paper tape was good enough for the old days. I have no idea why this efficacious method of backup has been neglected for so long.

    You do realise that your entire argument in this particular case would be just as (un)sound if you replaced “Ext3” with “paper tape,” don’t you, Robert?

    Incidentally, all TEG’s points about USB drives are 100% valid. Don’t listen to oiaohm’s theories on this subject. It’s one of the very few things that he knows nothing at all about.

  23. TEG wrote, “to pass it off as fact by claiming to be an M.Sc. or something”

    I suppose that is some innuendo aimed my way. See A Drift Chamber For Use At Low Energies. See also Status Report On The University of Manitoba Cyclotron for the kind of work I did after graduation.

  24. That Exploit Guy says:

    Please substantiate your claim. Without any verifiable references, why should we believe you?

    Ha, you got me there!

    You see, the truth is, my grandpa died when I was pretty much still a baby, so I wouldn’t know even if he was the most horrible man that had ever walked the earth.

    Obviously, though, given that I was my grandpa’s grandchild, you wouldn’t expect me to give an objective comparison, would you? After all, it’s not like I said that and tried to pass it off as fact by claiming to be an M.Sc. or something.

  25. That Exploit Guy says:

    OMG, Hamster, where did you pull this complete nonsense from? Man, you never cease to amaze me.

    From the very same person who claims to be able to jam WiFi signals with his own thoughts, you expect no less. Even Professor X has nothing on this “natural born” superhero!

    I will make a desperate attempt to keep the conversation sane.

    Desperate and futile, may I add?

    You see, the angular momentum of a disk is very important to keeping a block from disappearing. Otherwise, it’s just kind of uncertain, you know, like quantum-mechanically uncertain. In fact, if you open the plastic casing of a USB thumb drive, instead of blocks (as in “Lego blocks”), there is a 50% chance you will find Erwin Schrödinger’s cat staring and hissing at you.

    Freaky stuff.

  26. luvr says:

    TEG said, “Also, don’t you ever liken yourself to my grandpa again. You are nothing more than a pathetic little flea compared to the man he was.”

    Please substantiate your claim. Without any verifiable references, why should we believe you?

  27. That Exploit Guy says:

    Those routines find chunks not in any file and/or repair a FAT from the other copy of the FAT.

    That is to grossly understate what chkdsk does, though I don’t expect any better from you or Peter Dolding.

    The drifting chunks could easily be crushed by further writes if the user is not alert enough to do the repairs following a crash.

    Those are just tortured metaphors that mean absolutely nothing in actuality.

    If you force the system to continually work with a file system that is known to be inconsistent, of course you are bound to lose data and end up with even more problems. I just don’t see how that is supposed to be different with ext2 in this regard.

    People were very serious about backup on floppy in those days because they had to be.

    So it’s somehow FAT’s fault that floppy disks are inherently unreliable as a storage medium?

    DOS+FAT was a PITA. When I moved from Lose ’95 to GNU/Linux and ext2, life rapidly improved.

    This is all very anecdotal, isn’t it? I don’t know about everyone else, but I can clearly picture a circa-1999 Robert Pogson personal computing set-up with an ext2-formatted hard disk drive and stacks of FAT-formatted floppy disks. The preferred use of one file system on an inherently less reliable storage medium would, of course, mean that the chance of experiencing data loss on it would be higher than on the other file system used on the more reliable medium.

    Anyway, have you ever heard of this thing called the “scientific method”? I thought an M.Sc. such as yourself would be an absolute fanatic about that!

  28. That Exploit Guy says:

    TFA was about using FAT or ext* storage with ChromeOS.

    Last time I checked, you were the one bringing up “large organisations”. Are you telling me that you aren’t mature enough to take responsibility for what you have posted?

    You must be one hell of a devil’s spawn if you demanded references from your grandparents before you would listen to their stories of the “old days”.

    Way to invalidate everything you have ever said.

    Also, don’t you ever liken yourself to my grandpa again. You are nothing more than a pathetic little flea compared to the man he was.

  29. TEG wrote, sarcastically, “obviously chkdsk /r and fsck.vfat are just a myth.”

    Those routines find chunks not in any file and/or repair a FAT from the other copy of the FAT. The drifting chunks could easily be crushed by further writes if the user is not alert enough to do the repairs following a crash. I remember many floppies that lost data that way in the old days. On a hard drive with lots of space, broken files can exist a while before being recovered, but on the smaller devices FAT was deadly. People were very serious about backup on floppy in those days because they had to be. DOS+FAT was a PITA. When I moved from Lose ’95 to GNU/Linux and ext2, life rapidly improved. That year ext3 became available, and I can count on one hand how many files I’ve lost since. With FAT, it was not unusual to lose files in bunches. The only reason to choose a non-journalled file-system these days would be raw speed with less valuable data. With the web as a backup, a kernel-builder for instance might get a little more speed out of ext2 than ext3, but there are better ways to gain speed, like an SSD or RAM storage.

  30. TEG wrote, “Having you ever tried, I dunno, not making claims that you have no verifiable source to back up?”

    I created this blog from nothing to give myself a platform to speak/write/communicate, and I certainly will not refrain because you don’t trust me. You must be one hell of a devil’s spawn if you demanded references from your grandparents before you would listen to their stories of the “old days”. I’ve been many places and done many things about which I write. If you’re not interested, go away. We don’t need you here.

  31. TEG wrote, “uses a USB thumb drive as backup storage”

    TFA was about using FAT or ext* storage with ChromeOS. That could be a memory card, a USB flash drive or a USB hard drive. I have seen USB hard drive enclosures used for backup in middle-sized schools. It wasn’t the most reliable means but it was easily the most affordable. There’s just no good reason for preventing a Linux kernel from using native file-system formats.

  32. Deaf Spy says:

    They are effective against spinning-disc issues.

    OMG, Hamster, where did you pull this complete nonsense from? Man, you never cease to amaze me.

    flash issues
    I will make a desperate attempt to keep the conversation sane. These are handled by firmware. The file system, no matter what, does not care about it, nor can it. You either do RAID, or you rely on firmware to minimize your losses.

    You mistake a journaling file system for a transactional log database. As usual, you are clueless.

  33. oiaohm says:

    That Exploit Guy, you mention exactly why it causes abusive wear. The real flash segment size results in the complete set of blocks that make up the FAT (file allocation table) of a FAT file system being in one block. Yes, the primary and backup FAT tables are a lot of the time in the same flash segment, being rewritten over and over again until it fails. There is a reason inode-style file systems have advantages on flash: not all the file system data is in one location to disappear. The FAT file system layout is not designed for flash. The MFT placement and design in NTFS is also not that well designed for flash either.

    True, obviously chkdsk /r and fsck.vfat are just a myth.
    They are effective against spinning-disc issues. They are absolutely worthless against most flash issues, where a complete block disappears that just happens to contain all the critical file system information. Both those tools require data that will be gone. You end up using something like PhotoRec to scan the media for signs of data.

    That Exploit Guy, CHS translation is not the only block method for USB flash drives designed for Linux.

    On a USB thumb drive, wear-levelling and ECC are done by the firmware
    This is sometimes true, sometimes not. A Linux-only USB thumb drive reports differently (they do exist). A USB thumb drive can report as any type of ATA or MMC device, including some nice flash device modes that show the real flash blocks. The modes that show the real flash blocks are also Windows- and OS X-incompatible.

    There are only two possible outcomes for sector alignment: aligned or not aligned. There is no in-between.
    No, there are more options: all aligned with all critical file-system data in one flash segment, or all aligned with the critical data placed with proper redundancy across at least two flash segments. The problem is that a lot of straight-off-the-shelf FAT-formatted USB keys have all their critical data in one flash segment. Basically there are four outcomes. Most people would not guess that on an 8 GB key a flash segment is commonly 4 MB. Yes, F2FS does not work on these. Welcome to the fun of wear-levelling on top of wear-levelling: your file-system block size will be smaller than your flash segment size, so when selecting sectors to write you have to level out where you write so you are not causing as many rewrites/erases. The information available at the Linux block level for deciding where to write is far more complete than Windows’s for USB flash drives. The problem is that with FAT and NTFS, whose designs are far more restricted, the block level of Linux cannot do its magic.
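
    To illustrate the arithmetic, a quick Python sketch (my own, with made-up offsets) of checking a partition’s start against an assumed 4 MB erase segment; on Linux, what the firmware reports typically shows up in files like /sys/block/sdX/queue/optimal_io_size:

        # Is a partition's start aligned to the flash erase segment?
        SECTOR = 512                      # bytes per logical sector
        SEGMENT = 4 * 1024 * 1024         # assumed erase-segment size: 4 MiB

        def aligned(start_sector: int) -> bool:
            """True if the partition's byte offset sits on a segment boundary."""
            return (start_sector * SECTOR) % SEGMENT == 0

        print(aligned(63))     # False: old DOS-style offset straddles segments
        print(aligned(8192))   # True: 8192 * 512 bytes = exactly 4 MiB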

    By the way, it is also the USB firmware’s job to report correct alignment information to partitioning tools, so a partition installed with incorrect alignment is in fact a failure of the key’s firmware (yes, there are a lot of cheap ones that fail to report it). The Linux ext tools and most other Linux formatting and partitioning tools read this information. Windows’s included formatting tools don’t know how to access the alignment information; this is why you have to download the tool from the USB standards body to format those devices. “Not aligned” should not happen on modern USB keys with Linux unless the key is defective and not reporting correct information. The big stuff-up is having all the critical data about the file system in one flash segment.

    There is the sector-alignment case where data split across segments causes performance hits; I was not bothering to dispute that one, because it only arises from using broken formatting tools. The case where bad segment alignment leaves the key file-system data in only one segment is the one you have to worry about.

    That Exploit Guy, sorry, sector alignment is not an issue that anyone using modern tools to set up a drive should be worrying about, unless you see an error that the alignment information cannot be read, or you are using file systems like FAT or F2FS that have limitations on the placement of key data. F2FS is built for 2 MB-segmented flash.

    Sector-alignment options are a metric file-systems do compete on, numbnuts yourself. That Exploit Guy, not all file-systems support alignment options for the key sectors they need to operate safely. FAT is so badly fitted to the segments on a flash drive that its key data cannot be aligned to flash segments; ext3’s and ext4’s can be. NTFS also cannot ensure MFT alignment to prevent a single flash segment taking out all the key information about the file system. The NTFS and FAT designs suit spinning media. exFAT has more alignment options for critical sectors, so it can be formatted safely on flash media. This is the problem: FAT and NTFS are both unsuitable; ext3 and ext4 can be made suitable.
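
    For what it’s worth, a hedged sketch of the arithmetic behind such alignment knobs, assuming a 4 MB segment and ext4’s usual 4 KB blocks (the figures are illustrative, not from any particular drive):

        # Derive a blocks-per-segment figure from an assumed flash geometry.
        BLOCK = 4096                      # ext4 block size in bytes
        SEGMENT = 4 * 1024 * 1024         # assumed erase-segment size

        blocks_per_segment = SEGMENT // BLOCK
        print("blocks per segment:", blocks_per_segment)   # -> 1024

        # A figure like this is what ext*'s formatter accepts through its
        # stride/stripe-width extended options (e.g. mke2fs -E stride=...)
        # to keep metadata segment-aligned; FAT offers no equivalent knob
        # for placing its FAT tables.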

  34. That Exploit Guy says:

    I have about 2 million files so FAT32 is usable, but for a large organization FAT32 would be a poor choice for backup.

    I ain’t sure about you, but I sure as hell haven’t heard of a large organisation that uses a USB thumb drive as backup storage, given that a single USB thumb drive does not usually hold enough to satisfy the needs of a large organisation and that USB thumb drives tend to be too brittle to trust the data of your entire organisation to.

    One might even argue that a better alternative would be a large quantity of redundant digital tapes or hard disk drives serving as cold backup, but what can I say? I am kind of old-fashioned like that ;).

  35. That Exploit Guy says:

    TEG demands that I read everyone’s mind and spy on everyone’s peripheral devices. He thinks like M$.

    Yeah, demanding that you substantiate your argument is sure thinking “like M$”. Have you ever tried, I dunno, not making claims that you have no verifiable source to back up?

    Sure that beats whining about Wikipedia not accepting your edits.

  36. That Exploit Guy says:

    FAT is a horrible choice because on power loss there is no recovery path and possibly a complete loss of data access.

    True, obviously chkdsk /r and fsck.vfat are just a myth.

    The FAT table is also what causes abusive wear.

    True, obviously what we know about singly-linked list file systems is just a myth. Oiaohm Knows Better (TM)!

    Ext3 and ext4 support active wear-levelling and ECC for flash on newer kernels made after that old review document you dug out.

    On a USB thumb drive, wear-levelling and ECC are done by the firmware, if at all. Vanilla ext* (as opposed to any experimental garbage that you tend to cite to justify your argument) simply has neither the code nor the understanding of the topology of the NAND flash cells to perform any kind of wear-levelling or ECC on a thumb drive.

    Sector alignment [is] disproved by the very document you quoted against FAT.

    There is nothing in any of my links that disproves such. This is not to mention that the issue of sector alignment is pretty well known among people that insist on formatting their own thumb drives.

    Ext2’s power-loss tolerance is equal to FAT’s, but it beats FAT on everything bar compatibility.

    Sector alignment is not a metric that file systems compete on, numbnut. It is something that you need to figure out when you insist on doing DIY-formatting for your thumb drive. There are only two possible outcomes for sector alignment: aligned or not aligned. There is no in-between.

    You will also find NTFS loses on these same points.

    I can also claim that the opposite is true. What’s your proof?

    That Exploit Guy, please go learn to read documents before quoting them.

    Bawhahahah… Tell that to yourself.

    Format a USB key non-FAT and its performance becomes crippled; this is always due to the controller in the device being FAT-only.

    The high-level format is one of the many things that typical USB thumb drive firmware doesn’t care about. Aside from wear-levelling, ECC and all other things that you think the firmware doesn’t do, the job of the firmware is to give a CHS (cylinder-head-sector) translation of the NAND flash cells to the upper-level software so that the upper-level software can treat the NAND flash cells as if they were a spindle of magnetic disks.

    CHS translation is very important to making the thumb drive work because, after all, all of your beloved file systems, including FAT, NTFS, ext*, JFS, XFS, ReiserFS, etc. etc… understand CHS and CHS only. This also means the “sector size” that the thumb drive gives you is only there to make the drive fit into a CHS description – it’s not the actual atomic unit in which the drive reads or writes data – and often, because the actual chunks into which the NAND flash cells are divided (called “pages”) are much larger than the nominal sector size, you need to align the file system so that it starts right where a page boundary is and have the cluster size (or “block size” in ext*) set at the same size as the page size. The consequences of incorrectly formatting your file system are simple: your drive will have to repeatedly read a page, incorporate the new data and then overwrite the page just to satisfy a series of undersized write requests, or overwrite more than one page just to satisfy a single misaligned write request, and you end up with a thumb drive that has a significantly shorter life span than expected.

    Again, don’t take my word for it.
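
    A back-of-the-envelope Python sketch of that read-modify-write penalty, assuming a 16 KiB page (the numbers are illustrative only, not from any particular drive):

        # Count how many flash pages must be rewritten to service one request.
        PAGE = 16 * 1024   # assumed NAND page size in bytes

        def pages_touched(offset: int, length: int) -> int:
            """Pages the firmware must read-modify-write for one write request."""
            first = offset // PAGE
            last = (offset + length - 1) // PAGE
            return last - first + 1

        # Aligned 16 KiB write: exactly one page rewritten.
        print(pages_touched(0, 16384))      # -> 1
        # Same write misaligned by one 512-byte sector: two pages rewritten.
        print(pages_touched(512, 16384))    # -> 2
        # Undersized 4 KiB write: still costs a whole-page rewrite.
        print(pages_touched(0, 4096))       # -> 1 (16 KiB rewritten for 4 KiB of data)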

    This does not solve FAT’s fault-tolerance issues.

    There is no “FAT fault-tolerance issue” to solve. If fault tolerance were all that mattered, we would all be using ZFS with block-level checksums and redundancy. Of course, I personally trust FAT more than ext2 to not turn into a pile of wet noodles under abuse.

  37. TEG wrote, “The least you could do, therefore, would be to provide statistics to support the claim that developers use ext* for their USB drives and none of this subject-shifting nonsense about what you can use.”

    TEG demands that I read everyone’s mind and spy on everyone’s peripheral devices. He thinks like M$.

    Wear is not an issue with any of my USB flash drives. They are typically plugged in a few hundred times before I lose them or they “shrink” (my euphemism for my stuff becoming bloated). For backups, there are a lot of USB hard (magnetic, not FLASH) drives which last indefinitely.

    FAT, of course, has its own unique set of limitations, like a tiny maximum number of files (FAT32: 268 million, versus ext3: limited only by the number of blocks on the device). I have about 2 million files so FAT32 is usable, but for a large organization FAT32 would be a poor choice for backup. My digital camera is the only device in the house that insists on FAT. I have had only one visitor drag in a FAT USB drive, which was when I discovered I had built one kernel without FAT-support.
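
    The arithmetic behind that 268-million figure, for anyone checking (a sketch; 2^28 is the usual cluster-number ceiling quoted for FAT32):

        # FAT32 stores 32-bit cluster numbers but reserves 4 bits,
        # so the usable cluster count (and hence the file count)
        # tops out near 2**28.
        print(f"{2 ** 28:,}")   # -> 268,435,456, the ~268 million ceiling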

    Asking why anyone would use anything but FAT is the wrong question. One should ask why anyone would use FAT. Why would anyone use FAT or Lose ’95? It has no journal, eh? Where else in IT do we go without a journal? Swap? For storage of any kind a journal is vital unless you’re feeling lucky.

  38. oiaohm says:

    http://www.micron.com/~/media/Documents/Products/Software%20Article/SWNL_choosing_linux_fs_for_flash.pdf
    However, the FAT table used to organize the file system can be susceptible to corruption if power loss occurs.
    You must not have fully read that, That Exploit Guy. FAT is a horrible choice because on power loss there is no recovery path and possibly a complete loss of data access. The FAT table is also what causes abusive wear. NTFS has other issues. Compatibility is about the only thing FAT has going for it. Really, I wouldn’t mind if Google removed ext support and instead included one of the other options that really do work well.

    Also, you did not read page 9: ext3 is faster to read and write than FAT and, shock horror, is resistant to power loss. Ext3 and ext4 support active wear-levelling and ECC for flash on newer kernels made after that old review document you dug out.

    That Exploit Guy, basically your first three points (drive wear, power-loss tolerance, sector alignment) are disproved by the very document you quoted against FAT. Ext2’s power-loss tolerance is equal to FAT’s, but it beats FAT on everything bar compatibility. Ext3 beats FAT with better power-loss tolerance. You will also find NTFS loses on these same points. exFAT, on the other hand, is a more modern design, but it is almost impossible to find anything formatted with it, and it has worse compatibility.

    That Exploit Guy, please go learn to read documents before quoting them.

    Format a USB key non-FAT and its performance becomes crippled; this is always due to the controller in the device being FAT-only. This does not solve FAT’s fault-tolerance issues.

  39. That Exploit Guy says:

    I can use whatever file-system I want on my USB drives.

    You have been specifically asked to use statistics to support your claim. The least you could do, therefore, would be to provide statistics to support the claim that developers use ext* for their USB drives and none of this subject-shifting nonsense about what you can use.

    Why the Hell shouldn’t a user use ext or jfs or whatever on a USB drive?

    I can think of some good reasons why no one should use ext* or jfs on a USB thumb drive. Drive wear is one reason. Power-loss tolerance is another. Sector alignment is yet another. Compatibility is yet another.

    To put it simply, ext2 is a poor choice for hot-plugged devices that often get pulled out of the port abruptly, and any journalled file system can drastically increase the wear of the USB thumb drive holding it. Don’t take my word for it.

    USB thumb drives usually come factory pre-formatted, and that’s for a good reason. If you insist on formatting your own thumb drive, chances are the end result will come with crippled write performance. Again, don’t take my word for it.

    Compatibility is pretty much a no-brainer when it comes to USB thumb drives. I can plug in and use a FAT-formatted thumb drive on anything from a Windows box to Mac OS X to a Linux box to a SPARC server running Solaris to a several-billion-dollar space rover running VxWorks to a five-dollar made-in-China music player running god-knows-what, without the need for any third-party software or modification. Can you do the same with ext* or JFS? FAT blows pretty much everything else out of the water when it comes to the number of operating systems it is supported on, and this is why most USB thumb drives on the market come FAT pre-formatted.

  40. oiaohm says:

    DrLoser, as usual, cannot read.
    This is the only officially supported distro, but building ChromiumOS should work fine on any x86_64 Linux distro running a 2.6.9+ kernel
    Guess what happens to be an x86_64 Linux distribution: some Chrome OS devices. Some ChromeOS developers install Ubuntu inside ChromeOS using crouton.

    Is there any good reason why Chrome OS uses a proper GPL version of the kernel, whereas Android (the commercially used version, not that dead-end community thing) uses the completely inaccessible Gubuntu instead?
    Gubuntu inside Google uses standard Ubuntu kernels with no extra customization by Google.

    Android, community and commercial, at this stage does not use the mainline Linux kernel for everything; ChromeOS does use a mainline kernel. The reason for the difference is the power-management and IPC experimentation done in Android that has not yet found a final form acceptable for mainline. Android certification for Google Play does not approve of kernel-space drivers being closed source, by the way, so closed-source drivers for Android have to be implemented in userspace. The reality here is that there is quite a minimal difference between the AOSP project and the products consumers buy: the closed-source user-space drivers and applications are missing, and a few of those userspace drivers are kind of critical, like display output. The Android kernel and the Gubuntu kernel are not the same source, not even close.

    Really, there is no good reason for ChromeOS to use a custom modified kernel like Android does. All of the Android experiments with power management are merged mainline; the missing piece is Android IPC, and to a web browser Android IPC has no practical use. It’s effort and cost that are just not worth it. Chrome is a desktop application; it kind of expects a desktop kernel, and a mainline Linux kernel is a desktop kernel.

    DrLoser, something else to be aware of: the Google custom hardware-init chip in ChromeOS devices is also used in Google servers. ChromeOS is a good testing ground to see how mainline compares to their custom internal server kernel.

    Google custom server kernels are not linked to Gubuntu or ChromeOS.

    Dalvik is not ChromeOS.
    http://www.pcworld.com/article/2686712/run-any-android-app-on-your-chromebook-with-this-hack.html
    Running Android applications on ChromeOS does not use Dalvik. The Java stack also has very little to do with ChromeOS; fragments of the Java stack are only required for the application recompiler, chromeos-apk. This is a very interesting method.

    DrLoser, even talking about Dalvik and the Java stack when talking about ChromeOS really shows you don’t know the topic and should keep your mouth shut until you do some more homework and understand what you are talking about.

    Yes, it’s fine to pick on me for doing the same kind of stuff, yet for some reason oldman lets DrLoser slide on this crap all the time.

    https://github.com/vladikoff/chromeos-apk/blob/master/archon.md
    The ARChon runtime is already available for Windows, OS X and desktop Linux, so converted Android applications run everywhere.

    At what point do you think the community should draw a red line under Chrome OS and consider it beyond the pale?
    At this stage we are not really seeing any major actions intended to be harmful to open source. The firmware of Chromebooks, for example, is mostly open source and inspectable; in fact there is less closed-source firmware in Chromebooks than in most other devices on the market. We have a few cases of developer laziness, like “hey, let’s kill off ext just so we don’t have to implement something”, but this is status normal. It happens with open-source projects other than ChromeOS as well.

    kurkosdr misses the developer issue. ChromeOS applications can use hard and soft links in their construction, and these are not supported on FAT or NTFS the way they are on ext* or other file systems with native Linux features. So if you are copying an application onto a ChromeOS device to test it, you really do want a drive formatted with ext* or another file system with native Linux file-system features.
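
    A minimal Python sketch of that developer trap (the /media paths below are hypothetical mount points, not anything from ChromeOS):

        import os

        def try_links(target_dir: str) -> None:
            """Attempt a hard link and a symlink inside target_dir."""
            src = os.path.join(target_dir, "app.bin")
            open(src, "w").close()
            for name, make in (("hard", os.link), ("soft", os.symlink)):
                try:
                    make(src, os.path.join(target_dir, name + ".lnk"))
                    print(name, "link: ok")
                except OSError as err:
                    print(name, "link failed:", err)   # the usual result on a vfat mount

        try_links("/media/usb-ext4")    # both succeed on an ext* volume
        try_links("/media/usb-fat32")   # both raise OSError on plain FAT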

    Really, I think the Google developers have been far too polite. There is a real issue with putting ext*, in fact anything that is not exFAT, NTFS or FAT, on a USB flash drive. Worse, if a key comes formatted FAT, you might regret putting exFAT or NTFS on it as well: the flash controller in a USB key may, for wear-levelling, be reading the file-system structure in the background to work out what is unused space. Yes, they can reduce users’ issues by removing ext support, but they will cause developers issues. This is just an annoying landmine.

  41. kurkosdr wrote, “Developers use ext for USB drives? Any data to back this up, or should we just accept the anecdote”.

    I can use whatever file-system I want on my USB drives. Why the Hell shouldn’t a user use ext or jfs or whatever on a USB drive? It’s his drive. He can even encrypt it if he wants. I’ve often used various file-systems on USB drives, usually because I want file-permissions and owners and other good things not available with FAT. Those kinds of things may not be useful for carrying around in pockets but they are wonderful on auxiliary storage on a terminal server or backups on a server or just storing things in some way similar to how they are stored on a real file-system. The appeal of FAT for small embedded gadgets is that it is lightweight. That has some merit but users should be able to use a heavyweight file-system if they want those additional features. Folks who want nothing to do with M$ should also have such choices. There is very little problem moving files around between GNU/Linux systems but just try to move files to/from M$’s OS and all Hell breaks loose. I was once sent a CD where the creator had not respected case. It was hundreds of megabytes of data with random typos here and there. I had to decode it all and fix it. I then pointed out the problem to the sender and sent him a corrected file-system. Everything about M$’s software is designed to mess with competition like that and the world should reject that like any band of criminals is rejected.

  42. DrLoser, in a burst of productivity, wrote, “as long as you are prepared to accept Ubuntu 12.04 and 64-bit.”

    That’s what the Google devs use, so they support it. That doesn’t mean other distros won’t work. It’s probably just a matter of what libraries exist.

    DrLoser also wrote, “Is there any good reason why Chrome OS uses a proper GPL version of the kernel, whereas Android (the commercially used version, not that dead-end community thing) uses the completely inaccessible Gubuntu instead?”

    Gubuntu is Google’s internal configuration of Ubuntu GNU/Linux for their desktops. It has nothing much to do with the kernel used with Android/Linux, which comes more or less built from the code at kernel.org and has a set of symbols defined to build stuff that devices running Android/Linux need, stuff like device drivers for device P, where P is one of hundreds of devices that can run Android/Linux. Since storage on smart thingies is scarce, manufacturers probably only include drivers and code that they need for their particular device, although they could presumably ship everything under the sun like GNU/Linux distros often do. A smart thingy is usually “embedded” and unless there is a plug for it nothing new gets added in terms of hardware, whereas with a legacy PC there are a variety of PCI and other expansion slots, USB slots, and the possibility of changing the motherboard and expecting software to work, at least with GNU/Linux. 😉 I’ve several times moved a hard drive from one random PC to another or changed the motherboard to something quite different and GNU/Linux boots up happy as a clam. Android/Linux is not usually used in such rich environments.

  43. DrLoser says:

    But that deals nicely with my point number one. Now for the other two:

    2) Various, but not all, bits of the Java stack above Dalvik.
    3) Probably other bits of which I am unaware.

    Do feel free to expand on my misapprehension of point number (3).

  44. DrLoser says:

    As you well know, the kernel is under GPLv2 and Google or anyone else distributing it modified is required to distribute the source code, so, yes, you can build it yourself.

    I didn’t know that, as it happens, Robert. Once again, you have caught me out. I admit: I was wrong.

    Apparently it is possible to build the Chrome OS kernel on that basis, as long as you are prepared to accept Ubuntu 12.04 and 64-bit. And jump through the usual trivial hoops (we can both do that). Now, a few questions:

    1) Why would anybody do that?
    2) Have you done that and tested the process?
    3) Is there any good reason why Chrome OS uses a proper GPL version of the kernel, whereas Android (the commercially used version, not that dead-end community thing) uses the completely inaccessible Gubuntu instead?

    It’s all a bit confusing. I’d love to keep up, but to be honest I’d rather study medieval Byzantine theology.

  45. DrLoser says:

    As usual, I have to repeat the question for oiaohm, who is apparently immune to the normal desire to read what other people write:

    At what point do you think the community should draw a red line under Chrome OS and consider it beyond the pale?

  46. DrLoser says:

    kurkosdr wrote, “ChromeOS OEMs already have to pay for FAT32 patents (the VFAT bit to be precise) and they can’t get around that.”

    I’m coming to the very disappointing conclusion that neither one of you knows what you are talking about.

    Is Microsoft still able to charge a patent license for any form of FAT whatsoever? It can hardly be more than 5¢ per USB stick. After all, Torvalds himself made a major contribution to invalidating the claim in German courts, circa 2012.

    Have I missed something? Is patent law really that slow?

    Well, it might be.

  47. kurkosdr wrote, “ChromeOS OEMs already have to pay for FAT32 patents (the VFAT bit to be precise) and they can’t get around that.”

    No, they don’t. Patents expire in a few years; they’ve been around since 1995.

  48. kurkosdr says:

    “Well, some users who use ext* are important to Google, like developers,”

    Developers use ext for USB drives? Any data to back this up, or should we just accept the anecdote, which is also your only justification for why Google “should” keep ext in ChromeOS?

    And why should ChromeOS developers use ext anyway? And why would a ChromeOS developer need to transfer files to a Chromebook? And even if he does need to transfer some files for testing, an old USB stick formatted with FAT32 is enough.

    The only people who care about ext* in ChromeOS are people trying to use ChromeOS as a Linux distro, which is a niche Google doesn’t care about. They want you to use ChromeOS as Chrome and maybe as Android.

    Remember: every feature is a feature that must be debugged and supported. This is not some kid slapping together a distro by throwing in stuff from upstream. If Google keeps ext* filesystems, they have to support the feature. It doesn’t offer them any significant value and most people don’t care, so they remove it. There is no conspiracy.

    “M$ threatened to sue the world over FAT so lots of users have reason to avoid it, those who actually make a living with FLOSS.”

    ChromeOS OEMs already have to pay for FAT32 patents (the VFAT bit to be precise) and they can’t get around that. Bad ugly de-facto standards… etcetera.

  49. kurkosdr wrote, “most users don’t care about ext* support, so Google has no reason to maintain and support the feature”.

    Well, some users who use ext* are important to Google, like developers, so you bet Google has lots of reasons to maintain and support the feature. M$ threatened to sue the world over FAT so lots of users have reason to avoid it, those who actually make a living with FLOSS. You could argue that Google needs only to stick with consumers or users of that other OS for users of ChromeOS but remember, it’s all about applications, and guess what, Google wants users to run some local applications that may well have been developed on a GNU/Linux system by a GNU/Linux user not using FAT.

  50. DrLoser wrote, “you can’t build it yourself, but, hey, it’s the kernel!”

    As you well know, the kernel is under GPLv2 and Google or anyone else distributing it modified is required to distribute the source code, so, yes, you can build it yourself.

  51. ram wrote, “All my USB sticks are ext2 formatted. Similarly for nearly all of my customers.”

    I have a digital camera that uses some variety of FAT. I could use ext* for everything else. If I had a big USB drive for backup, I might use JFS. I think it’s more mature and less likely to give problems, not that I’ve had any problems with ext since ext2. JFS was contributed by IBM and I think they know what they are doing…

  52. oiaohm says:

    DrLoser, Chrome OS in developer mode can in fact install all the parts it requires to rebuild the kernel. Of course, to install your own kernel you have to swap out the signing key. Chrome OS makes it hard to change the kernel, not impossible.

    Future ChromeOS will most likely not have X11 in by default but will instead render Chrome straight to the screen.

    The big thing under the hood of ChromeOS is a fairly standard Gentoo; ChromeOS’s internal package manager is Gentoo’s package manager (Portage).

  53. ram says:

    All my USB sticks are ext2 formatted. Similarly for nearly all of my customers. If Google actually does something as dumb as this I will have to recommend against Google Chromebook purchases. At this point I’m forced to recommend my customers wait and see before making any more.

  54. DrLoser says:

    From an entirely neutral point of view, Robert (neither of us is neutral, but let’s imagine):

    At what point do you think the community should draw a red line under Chrome OS and consider it beyond the pale?

    It’d be interesting to get a brief summary of vital FLOSS components. I’ll start you off with:

    1) A Linux kernel (you can’t build it yourself, but, hey, it’s the kernel!)
    2) Various, but not all, bits of the Java stack above Dalvik.
    3) Probably other bits of which I am unaware.

    Oh, wait, X!

    No, that’s not it, either.

  55. DrLoser says:

    It would be like saying, let’s get rid of convertible cars: they’re a pain in the butt to build, a fixed roof is safer for the occupants, and no one needs or uses convertibles anyway.

    Not really, Dougie. In fact, not at all. What a completely pathetic false analogy.

    1) Car manufacturers build convertibles. Guess why?
    2) People pay for convertibles. Guess why?
    3) There is a cost to the manufacturer involved in separating out a production line for convertibles. This cost is the mediation between points (1) and (2). It includes the cost of safety remediation, which is by no means cheap, let alone free.

    On second thoughts, Dougie, it’s not such a bad analogy after all. Shame you couldn’t follow it through.

  56. dougman says:

    “everyone uses FAT32, NTFS or exFAT for USB drives”

    LOL… Sure they do, KUKU.

    Linux users mainly use EXT3/4 but are free to use whatever they wish, and OS X uses HFS+, so not everyone uses the three formats you mentioned.

    What’s funny is that the three you say everyone supposedly uses are controlled by M$, which sued TomTom over FAT32.

    But that’s not what this post is about, is it? No, it’s about an idiot Google dev thinking he knows what’s best when, in fact, the issue is at most trivial and cosmetic.

    It would be like saying, let’s get rid of convertible cars: they’re a pain in the butt to build, a fixed roof is safer for the occupants, and no one needs or uses convertibles anyway. *Smug-face*

  57. kurkosdr says:

    “In this case, it’s a lot of developers who appreciate those file-systems”

    No, they don’t; everyone uses FAT32, NTFS or exFAT for USB drives. This is what you can’t grasp: most users don’t care about ext* support, so Google has no reason to maintain and support the feature.

  58. kurkosdr wrote, “What makes the heritage of Linux (and its GNU tools) so important? Why should everyone adopt the tools and filesystems Linux uses?”

    That’s the wrong question. Google has used those tools for years now. The real question is why change for no benefit whatsoever? Why impact users at all? Why further reduce the flexibility of ChromeOS? Typically, technology seeking adoption should seek to enhance flexibility/performance etc. Why go the other way? I see car-makers produce lighter vehicles to enhance mileage but they still retain doors and windows which they could chuck to enhance mileage. They don’t do that because those features are appreciated by users. So are file-systems. In this case, it’s a lot of developers who appreciate those file-systems. Now that Android/Linux is taking over the world, it’s probably reasonable to rethink the use of FAT on USB drives especially since M$ is sue-happy. There’s a huge attack-surface…

  59. kurkosdr says:

    “It’s the same attitude as the choice of BSDish/non-GNU stuff for Android/Linux. They go out of their way to minimize some important heritage, like Big Brother changing history with Newspeak.”

    This one deserves a special mention: what makes the heritage of Linux (and its GNU tools) so important? Why should everyone adopt the tools and filesystems Linux uses?

  60. kurkosdr says:

    The “reducing attack surface” line is the usual PR-speak. It may be true, but it’s not a primary reason.

    The primary reason is that Google thinks ext* sucks, and since most USB sticks don’t use it, it’s not worth keeping around. If ext4 starts doing the usual ext4-isms, where do you think users will come to whine for support? The kernel devs or the ChromeOS support personnel? Who do you think has warranties to honor? (Hint: not the kernel devs.)

    Pog doesn’t understand that every promised feature is a feature that must be debugged and supported. Google is not Canonical.

    “This sounds more like the Google guys trying to lower the profile of Linux in the ChromeOS/Android universe”

    “Linux” is not a platform. Linux is like a tool you use to create platforms. Get over it. It’s precisely this concept of “we are one and should be counted as one in market-share stats, except when we are not compatible, in which case we are not one”, which exists in Desktop Linux and its distros, that Android was created to eliminate. Sure, Android has UI mods applied on top by OEMs, but the Google compatibility specifications make sure the Android platform is one thing for each version, aka the APIs are common. You can add APIs but not change or remove existing ones.

  61. I doubt M$ has much to do with it. This sounds more like the Google guys trying to lower the profile of Linux in the ChromeOS/Android universe. It’s the same attitude as the choice of BSDish/non-GNU stuff for Android/Linux. They go out of their way to minimize some important heritage, like Big Brother changing history with Newspeak. One can build a house without wood or concrete but it’s a really bad idea. Google is not immune from bad ideas. Everyone gets them from time to time.

  62. dougman says:

    “Microsoft moles start to take over something they typically get the job done. In the case of Nokia, Yahoo and Novell, where the companies are fairly large, the hijack (completion of the job) can take several years to complete. The vulture just keeps circling the prey, weakening it by driving away resistance.”

    It would not be hard to get someone on the inside of, say, Google to start pulling stunts such as this, whereby features start getting dropped, etc.

    Currently, I can drop a thumb-drive into my Chromebook and it just works, but this asshat wants to remove EXT for purely cosmetic reasons? Color me suspicious…

    This Sato idiot should be pulled from the project for even recommending something so stupid.

  63. Stuart DeGraaf says:

    This is pathetic. Android doesn’t “support” extN for sdcards either – without root – despite using extN for the system itself. (Of course, getting root for my new Samsung Galaxy Pro 8.4 via Linux – not LoseDoze – is proving to be a PITA.) I can understand not compelling people to use extN or making exFAT the default, but why the prohibition against extN? Why in God’s name would Google force people to use Mega$loth formats and filesystems with defective permission schemes? I hope Richard Stallman can figure out a way to go after Google for a GPL violation.
