SSD

“Increase the speed, durability, and efficiency of your system for years to come with the Crucial MX300 SSD. Boot up in seconds and fly through the most demanding applications with an SSD that fuses the latest 3D NAND Flash technology with the proven success of previous mx-series SSDs. Your storage drive isn’t just a container, it’s the Engine that loads and saves everything you do and use. Get more out of your computer by boosting nearly every aspect of performance.”
 
See Crucial MX300 525GB SATA 2.5 Inch Internal Solid State Drive – CT525MX300SSD1
Yes, it’s looking like SSDs have matured sufficiently for me to begin using them. I notice quite a few users on Amazon are using them to compensate for the slowing down of That Other OS. I don’t have that problem, thanks to Debian GNU/Linux, but I could still do without the seek times and transfer rates of spinning discs for loading software and temporary data. I still see an indefinite role for spinning discs with huge data; their cost advantage still rules. Now, though, even ordinary consumers are using SSDs with few problems. Why not me?

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

76 Responses to SSD

  1. oiaohm says:

    1) Blurry shots from the inside of a datacentre. and
    DrLoser, the shots do exist. They are part of Google’s marketing material and shareholder information, and Facebook has its own in its shareholder information. The ones online being blurry is right; you have to get the printed advertising material to get the non-blurry photos, including some nice close-up shots. It’s not as if either party keeps what they are doing in their data centres secret.

    https://techcrunch.com/2017/03/09/googles-compute-engine-now-offers-machines-with-up-to-64-cpu-cores-416gb-of-ram/

    It’s the second group that doesn’t exist; of course Dr Idiot would get this backwards.

    It’s worth noting that AWS’s EC2 service already offered 128-core machines and memory sizes up to 2095.944GB.
    Of course, Google is not yet selling what AWS is selling to customers, but that does not mean Google is not using systems of this size for its own purposes.

    Tell us, little precious demented one, what have Google and Facebook done?
    I have already told you: they interconnect motherboards to build a large server, the same way the large 128-core EC2 machine at AWS works.

    The reality is that there are customers who want servers bigger and more powerful than you can build with a single motherboard, and there are customers who want a terabyte or two of RAM. This being reality does not suit DrLoser; his attempts to insult me just prove that, as per normal, he has not done proper research on the topic before attacking.

  2. DrLoser says:

    Photographs of the inside of a data center?

    Boy, what a total putz you are, Fifi.

    DrLoser, there is no point showing the photo to a goose like you, who would just waste three points on pointless insults.

    I don’t work on your silly little point system, you pathetic little dweeb. I work on evidence.

    You have none.

    You pathetic little putz.

  3. DrLoser says:

    DrLoser, the reality is that you only know Microsoft. You don’t know enough to look at photos of Google and Facebook data centres and actually see what they have done.

    Awfully remiss of me, I know. There are two types of photos that I cannot bear to look at:
    1) Blurry shots from the inside of a datacentre. and
    2) Candid pictures of a forty year old guy in fish net stockings and a red leather miniskirt, under a dim light post, somewhere in the far outback.

    Strangely enough, Fifi, the second set of photos exist. And the first do not.

    Tell us, little precious demented one, what have Google and Facebook done?

    Pay up or leave, you pathetic little putz.

  4. oiaohm says:

    DrLoser, the reality is that you only know Microsoft. You don’t know enough to look at photos of Google and Facebook data centres and actually see what they have done. You are now attacking me because you’re clueless.

  5. oiaohm says:

    Naturally you have evidence here for either Google or Facebook. No, wait, you don’t.

    DrLoser, in fact the Google and Facebook specifications for their Open Compute motherboards show that they have SPI ports. There is no need for SPI ports unless you are joining them up into larger systems.

    Interestingly enough, Microsoft’s Open Compute Project specifications don’t include SPI ports of any form.

    Then again, Fifi, you know better. Except …. you don’t.
    There are in fact photos of the Google and Facebook data centres; if you know what an SPI cable looks like, you can see them front and centre. Up to 50+ machines linked into a single system in a single photo.

    DrLoser, there is no point showing the photo to a goose like you, who would just waste three points on pointless insults.

    Sorry, it’s not that I am without evidence. It’s that you have presumed incorrectly.

    Now, since you have done the insulting, present the images and documents that prove me wrong first. Don’t bother posting a pack of garbage and expecting me to hand over cites.

  6. DrLoser says:

    So in the big server places like Google and Facebook you have your large servers and your standard-sized servers.

    Naturally you have evidence here for either Google or Facebook. No, wait, you don’t.

    I can’t speak immediately for Facebook (although I have contacts), but what I can say is that the current Microsoft standard is — as stated earlier — 64G or 128G per server.

    What I can also say — and this is anecdotal, but people who work in Big Data and the search arena frequently natter and even exchange jobs, so I imagine this is at least plausible information — is that Google data centers still work on the original Map-Reduce, let’s-go-with-bucket-loads-of-cheap-hardware model. My sources suggest that the typical Google server might even be as low as 32G of RAM: probably up to 64G right now, but not much more.

    Google are famous for running their servers hot, Fifi. They do this because their data centers are built around massive redundancy, and cheap individual parts are easily replaceable. I doubt Google has a single terabyte RAM server in there.

    Then again, Fifi, you know better. Except …. you don’t.

  7. DrLoser says:

    Density, indeed.

    DrLoser, you have the reason wrong. The reason is density. Currently $10,000 of RAM consumes more rack space than $1,000 of SSD flash.

    Shall we try that simple little mathematical inequality again, little tatterdemalion darling?

  8. DrLoser says:

    DrLoser, go ask them again. You will find that it is 64G to 128G per motherboard per case.

    And so on and so on, through the Standard Wall of Fifi Gibberish.

    Look, me little red-leather mini-skirted darlin’, you’re drifting further and further away from the original point made by Deaf Spy that you have yet to disprove. To wit: given the alternative of a terabyte of RAM, and a terabyte of SSD, practically all purchasers of what you call “large servers” are going to pick the latter.

    For price reasons, if nothing else. I thought you FLOSS types were hypersensitive to not being ripped off? Perhaps you should have a quiet talk with the Miser In Charge about this.

    Anyhow, having consulted my expert friends — I doubt you have a single expert friend, Fifi, indeed I’m not sure your friends in general are particularly numerous — I have to admit that I got some of my estimates on the usage of terabyte RAM servers wrong.

    Specifically, I missed out Virtualisation. Mea culpa. I was remiss. This is clearly another case where having a terabyte of RAM is beneficial. I’d think you’d need to have a high-speed, high QoS, internal network to make this worthwhile, which of course adds more infrastructure into the thing, but, yes, if you’re going to gate about five hundred desktop users (Robert would describe this as “thin client,” which should excite him no end) into a single virtualisation server, then you probably need that much RAM.

    Interesting that you didn’t come up with the virtualisation rebuttal, isn’t it, Fifi?

    Well, not really. You are an ignorant dimwit who is bereft of useful information and clearly cannot cite any useful data in this particular field.

    Or indeed any other, but that is another matter.

  9. oiaohm says:

    Big data, edge servers, web crawlers, I would estimate around 99+% of the “large server” market seem to have standardised at around 64G to 128G RAM per server. Go ask one of your many friends at Google, Facebook, Bing, Akamai, etc … I know I can.
    DrLoser, go ask them again. You will find that it is 64G to 128G per motherboard per case.

    https://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect

    The QuickPath Interconnect cables run between those motherboards.

    Each box with a motherboard can be a server in its own right, or it can be cable-connected to other boxes to make one large server. Connect only 10 to 20 boxes together and you have got to 1TB of RAM. A server can be built from more than one case; a large server is built from more than one case.

    So Google and Facebook are running a mixture of medium-sized and large servers, built from the same set of parts.

    And you’re deceiving yourself if you think that even the big players in the business of running big data centers with large servers are going to spend, what, $10,000 on RAM when they can get the same practical results with $1,000 of over-provisioned SSDs.

    The $10,000 of RAM can work out cheaper than the $1,000 of SSD over 12 months of running costs.

    DrLoser, you have the reason wrong. The reason is density. Currently $10,000 of RAM consumes more rack space than $1,000 of SSD flash.

    Basically, go to your so-called friends, visit their server rooms, ask about the interconnects, and ask them to show you what each of the individual servers looks like. I was not referring to blade servers, but some of those can be SPI-interconnected on the backplane and so are in fact one server: the kind where you have to boot the blade enclosure as one unit and the blades are not allocatable as individual servers.

    So in the big server places like Google and Facebook you have your large servers and your standard-sized servers.

    Sorry, DrLoser, you just want to be insulting, and again you are absolutely wrong. You have looked at the boxes Facebook and Google and the like order, not at how they deploy them.

  10. DrLoser says:

    Deaf Spy, this means you would be at the medium size of server or below. With large servers you are talking greater than 1TB of RAM a lot of the time.

    Very rarely, in fact. Do not confuse the manufacturer’s stated capacity for, say, an HP C7000 blade server with the typical use.

    Big data, edge servers, web crawlers, I would estimate around 99+% of the “large server” market seem to have standardised at around 64G to 128G RAM per server. Go ask one of your many friends at Google, Facebook, Bing, Akamai, etc … I know I can.

    Now, if you’re talking HPCs, or meteorological servers, or medical research servers that are built to process huge matrix calculations across very large homogeneous data sets, then I agree, several terabytes of RAM would be high on the list of requirements.

    But you’re deceiving yourself, Fifi, if you think that this is a common requirement. And you’re deceiving yourself if you think that this is the type of application to which Deaf Spy alludes.

    And you’re deceiving yourself if you think that even the big players in the business of running big data centers with large servers are going to spend, what, $10,000 on RAM when they can get the same practical results with $1,000 of over-provisioned SSDs. Only a demented fool would do that across an estate of 1,000 servers or more.

    Oh, wait, I forgot. You are a demented fool.

  11. oiaohm says:

    In an environment, where I can’t afford to buy 1TB of RAM, I will absolutely get an SSD to store my reporting and temporary databases, and achieve a huge performance boost. But that is obviously beyond ram and you.
    Deaf Spy, this means you would be at the medium size of server or below. With large servers you are talking greater than 1TB of RAM a lot of the time.

    Also, you really need to look at the current Open Compute motherboard spec,
    Decathlete_Server_Board_Standard_v2.1.
    This mandates that at minimum one RAM slot per CPU socket is for NVRAM. That gives you 32 to 64G of NVRAM on the motherboard in the DDR4 boards.

    Deaf Spy, even if you are using SSDs to boost your speed, you should not be overlooking the NVRAM resources the server has. These could be straight in the RAM slots or could be add-on cards.

    When you track the I/O, a lot of workloads with NVRAM caching in front of spinning rust end up just as fast as an SSD would have been, with less power usage.

    But these are going to be like your SSD flash/HDD hybrids: a cost saving for capacity, not ideal solutions.
    Deaf Spy, note what I said there. You have gone with a cost saving for capacity. I am not saying doing this is wrong, but if you are doing a cost saving for capacity it is not the ideal solution for that workload. You admit that you cannot afford the ideal solution for the workload.

    The only places you see SSD flash are either with idiots who did not check whether spinning-rust HDDs could service the workload, and so have under-used flash costing themselves money (those idiots also poke fun at people who understand this and only run HDDs), or with people who cannot afford the NVRAM- and RAM-based solutions that are the best fit for the problem at hand. Of course, our objective should be to ask hardware makers to produce the better technology in volume so that one day you can truly afford it.

    Note what Deaf Spy said: “temporary databases”. Do you need those databases to live past a reboot? Or are you just sending your SSD flash to an early grave? SSD flash is not built for that.

    Please note I said SSD flash/NVRAM hybrids.
    http://www.radianmemory.com/nvram-on-pcie-ssd/

    Deaf Spy, for your workload do you really want a flash SSD, or would a flash/NVRAM hybrid be better? The answer is that a flash/NVRAM hybrid is better.

    This is my problem: pure SSD flash is a dead-end technology in servers. If NVRAM capacity would start expanding, it would push SSD flash completely out of the server world.

    ram, the thing that does NVRAM in, in mobile applications, is the need to survive being drowned in water. SSD flash in mobile devices and laptops has an advantage over NVRAM there, but that is not a server workload. So if you want a use case that fits SSD flash’s specs, it’s laptops, tablets and mobile phones.

    Deaf Spy, basically your post is a “poor me, I had no choice” excuse for being wrong. The reality is that if you are reaching for a normal flash SSD you are in the cheap bin. If you are reaching for a hybrid flash/NVRAM SSD you are on the more expensive side and you care about getting a decent lifespan out of your hardware.

    I have been saying “SSD flash” all the way along because there is SSD flash, SSD flash/NVRAM and SSD NVRAM. The price goes up each step, while overall hardware power usage drops and lifespan increases with the increasing price tag.

  12. ram says:

    Yeah, I think oiaohm has SSDs worked out pretty well. It is starting to look like a dead-end technology, with the exception of mobile and other high-vibration environment applications. You definitely don’t want to expose “spinning rust” to vibration or frequent temperature fluctuations.

  13. Deaf Spy wrote, “Robert, why didn’t you get a pair of oxen?”

    Bad experiences back on the farm with large animals and a local by-law…

  14. Deaf Spy says:

    In an environment, where I can’t afford to buy 1TB of RAM, I will absolutely get an SSD to store my reporting and temporary databases, and achieve a huge performance boost. But that is obviously beyond ram and you.

    And yes, stuff works for people.

    Totally, like oxen. Oxen work for people and are much more environment-friendly than rototillers. Robert, why didn’t you get a pair of oxen?

  15. oiaohm says:

    http://origin-www.seagate.com/www-content/product-content/ssd-fam/nvme-ssd/_shared/docs/100765362e.pdf

    Scroll down to section 2.7 and notice that it only promises to retain your data for 3 months in a room at 40°C. You will find that a lot of flash SSDs are like this, so infrequently accessed data is safer on an HDD. Flash SSDs and NVRAM give you basically the same 3-month promise if you don’t power up the drive; NVRAM is just a little more absolute in that after 3 months it will lose everything, but it will hold on to your data at higher temperatures.

    So flash SSDs don’t look so good once you start fully reading their spec sheets, looking for key things like the data-retention values. (A quick way to see what a drive reports about itself is shown below.)
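
    Not from the spec sheet, but you can at least watch the temperature and wear figures a drive reports about itself. A minimal check with smartmontools (the device name is only an example; the exact attribute names vary by vendor):

    ~ $ sudo smartctl -i /dev/sda                                  # identify the drive
    ~ $ sudo smartctl -a /dev/sda | grep -i -e temperature -e wear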

  16. oiaohm says:

    Intel, you want to hate their spec sheets. For example, they say the drive operates from 0-70°C, but they leave out that it runs at half speed from 45-70°C.

    “Intel 730 : active 5.5W, idle 1.5W” — that is only a 450G drive. Let’s level the playing field.

    So say you want 8TB of storage using the Intel 730 or the Seagate drives.
    To reach 8TB with Intel 730s you need 17.7 of them. That becomes a 26-watt idle.
    Four 2TB HDDs active gets you to 8TB at 7×4 = 28W.
    One 8TB HDD active at 11 watts is now looking really good even if you never idle, particularly if your task is not using more I/O than spinning rust can provide. So an SSD that is not fully utilised is a nightmare.

    https://arstechnica.com/information-technology/2017/02/specs-for-first-intel-3d-xpoint-ssd-so-so-transfer-speed-awesome-random-io/
    Here is the thing about the next generation of flash items.
    “Moore’s Law works for SSD”
    It turns out this does not work for flash-based SSDs, or for items like 3D XPoint, in power usage versus storage size. So yes, Moore’s Law is allowing increased density, but only up to a point.

    Please note the new spec sheet includes airflow requirements. When you start digging those out for SSDs you start seeing a big problem: a single SSD requires more airflow than a RAID card or an enclosure with 10 HDDs in it. Airflow in a large server involves powering fans to make it happen, so you have another motor spinning and burning watts.

    You see flash SSDs asking for 400+ LFM. Note that old spinning-rust HDDs top out at 50 LFM and good spinning-rust HDDs need less than 30 LFM. The standard 4-inch fan you find in computers, running flat out, is 400 LFM at 1.8 watts.
    400/50 is 8, so an HDD increases wattage usage by just 0.225 of a watt for fans, and that is using a horrible drive.
    The Intel 730 is not 1.5W at idle; it is in fact 3.3W at idle once you add on the fans required to get the air out of the enclosure and keep its temperature right. Now, in a large server you have to cool all that heated air back down to a usable temperature as well as move it around, so one step of cooling has already doubled the power usage of the Intel 730. As you add in everything in a large-server cooling setup, the power-usage difference will step by step disappear until flash SSD uses more power than HDD. (A rough recalculation of these numbers is sketched at the end of this comment.)

    So the difference in the volume of air the cooling system has to handle between one 11-watt HDD running flat out and one flash SSD idling at 1.5W ends up making the flash SSD the more expensive one in power, because cooling is not cheap in large systems.

    Most people think the most expensive thing to run in a data centre is the servers. Most of the time the most expensive item is the cooling; it is such a large percentage of the operating bill it’s not funny.

    You really want high performance if you are going to have to provide 400 LFM for an item.

    NVRAM is gaining from Moore’s Law in power usage, performance and reduced heat generation. None of the flash technologies are gaining in power usage or heat generation from the Moore’s Law effect. This is why I see flash as a road-to-failure technology: it is basically treading water while everything else improves around it.

    Now, for the small and medium systems where the cooling bill is not one of your major bills, flash SSD can look kind of OK.

    Laptops and phones have the SSD dumping its heat into the local environment, without anyone having to care about a cooling bill.

    The thing to remember is that large systems price parts by their airflow requirements for cooling; that can be the biggest wattage cost, not what the item consumes directly.
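
    For anyone who wants to redo that arithmetic, here is a rough sketch in shell using the figures quoted above (the 450G capacity, the wattages and the LFM numbers are this thread’s assumptions, not measurements):

    ~ $ awk 'BEGIN {
          ssds = 8000 / 450;                                  # ~17.7 Intel 730s to reach 8TB
          printf "SSD idle: %.1f W\n", ssds * 1.5;
          printf "SSD idle + one 400 LFM fan each: %.1f W\n", ssds * (1.5 + 1.8);
          printf "4 x 2TB HDD active: %.1f W\n", 4 * 7;
          printf "4 x 2TB HDD active + fan share at 50 LFM: %.1f W\n", 4 * (7 + 1.8 * 50/400);
          printf "1 x 8TB HDD active: %.1f W\n", 11;
        }'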

  17. oiaohm wrote, “SSD flash + cooling uses more power than HDD + cooling in large setups.”

    Flash uses very little power idling. In read/write cycles, power rises, but it’s still far below a hard drive.
    Intel 730 : active 5.5W, idle 1.5W

    Seagate 8TB HDD: idle 7.5W, average 11.5W. 2TB HDD: idle 4.5W, average 7W

    So, size matters. Active, a hard disc drive uses a lot of power. Moore’s Law works for SSD…

  18. oiaohm says:

    “no way an SSD flash setup like that is more power-effective than HDD.”
    I will make that clearer.

    SSD flash + cooling uses more power than HDD + cooling in large setups. You would think the spinning rust would be generating a lot of heat from its motor, but no, that is not the case. SSD flash generates a lot of heat in read and write operations.

    A lot of tests are done outside rack cases, so the heat production of the SSD is ignored and the power needed to get rid of that heat is also forgotten.

    As you start looking at total deployment costs, SSDs start losing their shine all over the place. In small to medium-sized systems, where things are not packed as tight, these issues do not show themselves as clearly.

    So there is a very big change when you move to a truly large system where heat generation is a fairly big factor. NVRAM and RAM have lower resistance, so they turn less of the power passing through them into heat compared to SSD flash; the heat out of SSD flash comes from the base technology. The reason some flash SSDs are being made as flash/NVRAM hybrids is that putting the common reads and writes on NVRAM is (1) faster and (2) cooler. Some of the newer hybrids are starting to add control options to allow the OS to set what is stored on the NVRAM.

    This is why I really see no future for SSD flash. SSD flash/NVRAM hybrids might make themselves a place, but these are going to be like your SSD flash/HDD hybrids: a cost saving for capacity, not ideal solutions.

  19. oiaohm says:

    Deaf Spy
    Yeah, yeah, WorksForMe(TM).
    That also applies to people talking about SSDs. The problem is, it is worse.

    By the way, I missed another horrible fault of SSDs in dense deployments. Most flash SSDs go into an overheated state at 45°C, cutting performance instantly in half, and most flash SSDs under full load can get there from 20°C ambient if they don’t have higher airflow than an HDD requires. So yes, a flash SSD uses less power if you measure it in isolation; if you measure a rack unit of them, including the cooling fans, there is no way an SSD flash setup like that is more power-effective than HDD.

    You should have started by saying that you speak of a rather limited environment.
    This is so true of SSD flash. SSD flash is workable until you get to a particular scale. Once you are at that scale, SSD flash becomes a problem child, between fragmentation and heat.

    There are other problems. Say you want to put an NVRAM DIMM in a computer: those things are strictly ECC, meaning that if the system does not do ECC they don’t work. So the consumer i5 and i7 chips from Intel cannot support NVRAM DIMMs; the i3 can. The new AMD Ryzen has ECC but it is not formally validated, so it might not be NVRAM-compatible. So NVRAM is not in the general consumer market because the CPUs in the general consumer market have been technologically crippled.

    So those doing consumer reviews have not been putting NVRAM head to head with SSD flash and HDD, leading to the false belief that SSD flash is the fastest thing out there and that a lot of the defects in SSD flash are acceptable.

    Also, with NVRAM you have the stupidity where a single NVRAM DIMM is 16G and a card consuming a full PCI-e slot is also 16G at half the speed. The history of NVRAM tells you that a PCI card should be able to hold at least 4 to 8 DIMMs, and the newer DIMMs produce less heat than the old ones and have a lower power draw per DIMM even though they are of much larger capacity.

  20. Deaf Spy wrote, “Yeah, yeah, WorksForMe(TM).
    You should have started by saying that you speak of a rather limited environment.”

    Yes, some more data-points: Deaf Spy doesn’t get it. And yes, stuff works for people.

  21. Deaf Spy says:

    For the applications my company deals

    Yeah, yeah, WorksForMe(TM).
    You should have started by saying that you speak of a rather limited environment.

  22. oiaohm says:

    That’s not true. For instance, a lot of data gets written once, then archived and is only infrequently modified if ever, like weather or documents. So, you can keep all the world’s weather in SSD and analyze it forever for optimal speed, minimum size, minimum power, if not minimum cost.
    http://www.anandtech.com/show/9248/the-truth-about-ssd-data-retention
    Robert Pogson, your statement is wrong. HDDs have both the volume of storage and the retention.

    Something that is not talked about is that SSD flash heated to 55°C can lose its data in one week if you don’t power it up. Both HDD and NVRAM drives will hold their data without trouble at those temperatures for at least 3 months.

    This temperature problem is why a laptop or phone with flash storage left in a car can be useless when you come back to the car.

    http://www.tomsitpro.com/articles/pmc-sierra-flashtec-nvram-drive,2-954.html

    That is a block-device NVRAM from 2015. The issue we have is the lack of work to increase capacity.

    NVRAM, as long as it is powered up every 3 months to recharge, is insanely durable, and yes, it would be extremely strange for a server setup to be without power for 3 months. NVRAM is the most durable read/write storage medium we have got, as long as its operating conditions are met. The HDD, the spinning rust, is the second most durable, and SSD flash comes in after that. To keep SSD flash stable it has to be powered up more often than NVRAM, with the result that you use more power per unit of capacity running SSD flash drives than running NVRAM.

    SSD flash only looks good while you are comparing it to HDD. As soon as you start comparing SSD flash to NVRAM solutions, SSD flash is slow, of poor durability and power-hungry. The only things SSD flash has going for it are the ability to outperform HDD up to a point, and larger capacities per unit of area than we can buy NVRAM in at the moment. If NVRAM were made in the same capacities and at the same cost as SSD flash, you would not even look at SSD flash; you would call a server using SSD flash budget- or space-constrained.

    The reality is that there is not a single server task that cannot be serviced better by either HDD or NVRAM instead of SSD flash. SSD flash’s core technology is flawed.

  23. oiaohm wrote, “The reality is that no server workload in fact suits SSD flash.”

    That’s not true. For instance, a lot of data gets written once, then archived and is only infrequently modified if ever, like weather or documents. So, you can keep all the world’s weather in SSD and analyze it forever for optimal speed, minimum size, minimum power, if not minimum cost.

  24. oiaohm says:

    ram, there is way more to it than this. The reality is we have been tricked.

    When SSDs were first proposed they were not flash SSDs. They were RAM SSDs. Be it a RAM SSD with a battery or an NVRAM SSD, both of these technologies have almost unlimited read and write cycles.

    With flash we forgot what it was developed from: EEPROM. EPROMs were never designed to be read and written over and over again.

    The reality is that no server workload in fact suits SSD flash. Why are we using SSD flash? Because manufacturers are not making us RAM or NVRAM SSDs, meaning we have to live with the defects of SSD flash.

    1) RAM and NVRAM SSDs have insane lifespans; 20-30 years is confirmed from the early prototypes. SSD flash has a projected lifespan matching that of HDD. Longer lifespans would make demand smaller, with less volume required.

    2) RAM and NVRAM SSDs don’t slow down as they get full. SSD flash slows down due to having to perform defragmentation internally, resulting in a performance pattern matching that of a filling HDD. Now, this degradation in performance could be intentional, so that you need a larger number of flash SSDs to give the same storage with the performance that RAM and NVRAM SSDs would provide.

    3) SSD flash is technically slower to read and write than RAM SSD or NVRAM SSD. This is a physical limitation of what flash is.

    One of the horrible things is how long an SSD flash stall can be. The drive could stall and take in some cases up to 15 minutes to defragment itself before it is ready for action again. Of course, you can mistakenly think the drive is dead when there is nothing wrong with it. An HDD is never going to do this to you.

    So the more we believe in SSD flash, the more profit drive makers will make from us, and the longer they can push back providing the technology that works and lasts, which would destroy their bottom lines.

    https://en.wikipedia.org/wiki/Expanded_memory
    The idea of a RAM SSD drive was done as far back as the PC XT, including battery-backed forms.

    Boot time on a system with an NVRAM module on the motherboard makes booting from an SSD look like a snail’s pace. Of course, the Linux page cache and other things need to be improved so they do not copy stuff back out of the NVRAM into general RAM, because performance-wise there is no need to. So today boots from NVRAM are slower than they should be, but still many times faster than an SSD boot could ever be.

    ram, so even on small systems it is just people falling for a con job, and we have no reason to support this.

  25. ram says:

    I think oiaohm has figured it out. For the applications my company deals with (AI, bioinformatics, media creation, scientific computing) we just have plenty of memory per core (averaging, right now, about 8 GB) and plenty (thousands) of cores. Only final results, in bursts, are written to spinning disks. The spinning disks are not the bottleneck even though some of them are a generation or two old. Another set of disks handles caching, should it become necessary, which is nearly never.

  26. DrLoser wrote, ” 200TB is not that large.
     
    It is ridiculously large on a single machine, you dimwit, Fifi.
     
    As a matter of fact, it is ridiculously large on any given server in any of the data centres for any of those organisations that you hopelessly take a punt at.”

    Yes, it is large but not that large. Folks with large databases or large archives could certainly find that useful. How suitable it is depends on the volume of traffic and the size of the files. If you have a large number of files and a large number of clients you will want to distribute the files over many servers just for throughput. If you have any number of files but just a few users, it’s very efficient to have them in a small space, with a copy or two or three for backup.

    8TB drives are readily available. Servers with 8 drive slots are widely available. If you have the money, HPE will sell you something that holds 563TB flash in 1U… I know I’m not rich enough to ask the cost but I would bet it’s a lot.

  27. oiaohm says:

    We’re not talking FAT here, Robert. We are trying to explain to Fifi (a worthless endeavour, but why not, just for yuks) the difference between a 2010 flash card — as amply quoted in his ninny “cite” — and a 2017 SSD unit. Which will almost certainly have in-drive microcode to extend the life of the thing, write-wise.
    DrLoser, where is your cite? That’s right, you are guessing.
    http://www.computerweekly.com/feature/MLC-vs-SLC-Which-flash-SSD-is-right-for-you
    The reality is that flash technology has a fault: every time you write, you damage the silicon. Sooner or later you damage the silicon enough that the flash fails. Is MLC and SLC technology the same today as what was used in 2010? On all the important factors, like the write limit, it is basically the same.

    Can the microcode do a few tricks? Yes. But really no more tricks than an HDD can do to attempt to hide that it is a spinning disc. A physical problem is a physical problem, and no amount of creative programming can truly cure it.

    Mind you, some of the issues with SSDs could be improved by OS changes and by SSDs actually providing useful information to the OS. Most spinning-rust HDDs these days use 4k block sizes; remember the trouble we had with 512-byte sectors emulated on 4K drives and how the OS had to change for the 4k block size.

    What is the internal block size of most SSD drives? The answer is 32k to 128k; that is the size the SSD has to reclaim in. Nothing like having the OS make the SSD controller’s life hard as well. That was another advantage of the old RAM drives: they don’t care whether you use 512-byte, 4k or any other write block size, because they can write any size at any time, whereas an SSD is restricted to particular block-size operations. (A quick way to check what sizes the kernel and a drive actually report is sketched at the end of this comment.)

    It is a shock to most people that an SSD can fragment internally under the right write conditions and be unable to recover without stalling. This would be reduced if we were using the correct block size.

    SSD has a set of physical problems made worse by the operating systems we use.
    200TB is not that large.

    It is ridiculously large on a single machine, you dimwit, Fifi.

    As a matter of fact, it is ridiculously large on any given server in any of the data centres for any of those organisations that you hopelessly take a punt at

    200TB is not that large for the large systems of 4096 cores per machine using interconnected motherboards, as you find in the shipping-container-sized stuff. That is 256 motherboards, which is less than one 1TB hard drive per motherboard; one 8TB drive per board is 2PB of storage. Yes, this fits in a single shipping container.

    The difference between large and medium sizes is quite a bit. Yes, for large servers the PB is the common measurement for storage. The RAM in large systems is measured in TB; in fact you can have 4096-core systems with 65TB of RAM, and there are 8192-core systems in shipping containers that can have even more than 200TB of RAM. At this point you can understand why the large systems are going to NVRAM as NVDIMMs, because filling that much RAM from SSD is going to take an insane amount of time after a reboot.
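
    As a quick check of the earlier point about block sizes, the kernel and the drive will tell you what sizes they advertise, though the flash’s internal erase-block size is usually not exposed at all, which is part of the problem. The device name here is only an example:

    ~ $ cat /sys/block/sda/queue/logical_block_size
    ~ $ cat /sys/block/sda/queue/physical_block_size
    ~ $ cat /sys/block/sda/queue/optimal_io_size
    ~ $ sudo blockdev --getss --getpbsz /dev/sda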

  28. DrLoser says:

    200TB is not that large.

    It is ridiculously large on a single machine, you dimwit, Fifi.

    As a matter of fact, it is ridiculously large on any given server in any of the data centres for any of those organisations that you hopelessly take a punt at.

    Douglas is entirely correct. You are a worthless, ignorant, bloviating moron.

  29. oiaohm says:

    I don’t see anything in that document to support this argument. All block devices degrade block-writing performance as they fill up. It’s a variant of fragmentation. The document seems to show that if you never let the SSD exceed 50% capacity with random writes, performance does not decline much.
    If I can find the Linux conference video: HP, before starting The Machine, did longer-duration testing of SSDs. So even if you stay at 50% capacity, the SSD’s attempts to avoid writes to flash to extend its life will bite you at some point.

    The reality here is the old story of the tortoise and the hare.

    Yes, an SSD can go faster. No, it does not go faster with consistency. Lack of consistency can trigger a domino effect, particularly at large scale with load balancing: lack of consistency means the load balancer can shove too many requests at a particular server because in the past that server handled that load without problems.

    SSD is the hare. HDD is the tortoise. NVRAM is a creature that knows how to run, and neither of them can get close to it.

    NVRAM, from empty to full, even when being used as a block device, does not show degrading performance as it fills up. It doesn’t have the HDD seek problem and it doesn’t have the flash SSD write-avoidance problem that causes garbage-collection actions. There are SATA battery-backed RAM examples, so saying that all block devices slow down as they fill is not true. RAM-based block devices don’t show that behaviour, be it NVRAM or normal DDR with a battery. The ANS-9010, an 8-slot 64G DDR2 RAM drive connected over SATA back in 2009, showed no slowdown. To the OS the ANS-9010 was a normal block device, just one whose performance is constant and predictable. NVRAM is moving from being connected by PCI or SATA to sitting in the RAM slots themselves to get higher transfer speeds.

    dougman, good video on flash cards; maybe after that one DrLoser will learn to use the right terms.

  30. DrLoser says:

    All block devices degrade block-writing performance as they fill up.

    Ow. Ow. Ow. You two are doing my head in. That is not the putative issue with SSDs.

    We’re not talking FAT here, Robert. We are trying to explain to Fifi (a worthless endeavour, but why not, just for yuks) the difference between a 2010 flash card — as amply quoted in his ninny “cite” — and a 2017 SSD unit. Which will almost certainly have in-drive microcode to extend the life of the thing, write-wise.

    Besides the point, I suppose. Both you and Fifi are clearly completely ignorant on the subject, and I don’t see why the rest of us should have to put up with your ignorance.

    Get a room, you two.

  31. oiaohm says:

    for the sake of everyone here, please define the term “large scale”, so are you trying to say that 200TB or smaller is small scale?
    200TB is not that large. Start looking at Amazon, Facebook, Google. You are talking multiple petabytes.

    200TB is 50-100 drives, right, dougman? That is not that large, particularly when you look at HP’s The Machine. 200TB is not large enough to fill its memory storage.

    It’s only about four 19-inch 4U enclosures of 24 3.5-inch hard drives each; not enough to even properly fill one cabinet.

    200TB is larger than small but not really big enough to be called large. Once you start using 40-foot-shipping-container-sized items as cases, or having to put up buildings to house your servers, you are at large scale. Dougman, you are nowhere near large scale. You might be able to claim mid-sized, and mid-sized doesn’t have the budget of large scale.

  32. oiaohm wrote, “The link you said was not relevant documents the SSD stall. This causes your database to stall, and so causes you major latency hell. So SSD is the wrong tool for databases.”

    I don’t see anything in that document to support this argument. All block devices degrade block-writing performance as they fill up. It’s a variant of fragmentation. The document seems to show that if you never let the SSD exceed 50% capacity with random writes, performance does not decline much. For some kinds of data, say a gazillion identically sized files or tables aligned to the boundaries in the device, performance could be awesome compared to spinning discs. The situation could be nicely reversed with RAID 0 on spinning discs and similarly matched data and devices. Then again, one could do RAID 0 on SSD… (A quick way to check the fill-level behaviour on an actual drive is sketched at the end of this comment.)

    Of course, instead of giving thoughtful replies, the trolls attack the age, size and source of the paper, age, size and source being almost irrelevant.
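
    One way to check that fill-level behaviour on an actual drive, rather than arguing from a 2010 paper, is a sustained random-write run with fio, watching whether throughput and latency degrade as it goes. This is only a sketch; the device name is a placeholder and the run destroys whatever is on that device:

    ~ $ sudo fio --name=fill-test --filename=/dev/sdX --direct=1 --ioengine=libaio \
          --rw=randwrite --bs=4k --iodepth=32 --size=50% --time_based --runtime=1800 \
          --group_reporting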

  33. DrLoser says:

    Wtf are you even talking about stall??

    What Douglas said.

    Once again, Fifi has jumped the shark.

  34. DrLoser says:

    Deaf Spy, NVRAM has come into its own in the last 2 years. So no, showing that to my boss will not get me fired.

    Get you fired, Fifi? Get you fired? I hardly think that a minor hiccup in your normal relentless quotation of completely irrelevant “cites” would get you fired.

    I mean, surely your boss has seen you cavorting under the lamp-post after dusk?

    He must have a “thing” for fish-net stockings and red leather miniskirts. After all, Fifi, that is basically all you have.

  35. oiaohm says:

    Can anybody here tell me the difference between a proper SSD and a flash card?
    Dr Loser
    https://en.wikipedia.org/wiki/Flashcard
    Please ask the question again using the right term: memory card.

    The difference between a memory card and an SSD can be very small.
    http://the-gadgeteer.com/2016/03/17/turn-10-micro-sd-cards-into-a-sata-ssd-drive/

    Yes, one controller stacked with microSD cards can call itself an SSD drive. So the only major differences are normally a more powerful controller and a different connection port. In fact, some SSDs are multi-level controller setups inside even when they are not built from memory cards.

    “Cretinous cite”: so DrLoser could not find a counter-cite and still has to keep on being insulting about it. You just cannot live with me being right.

    This is basically changing the topic because DrLoser cannot stand the fact that he stuffed up completely.

  36. dougman says:

    “at some point the SSD will garbage collect and stall”

    Wtf are you even talking about stall??

    “Large scale Linux system is not your small scale NAS item.”

    for the sake of everyone here, please define the term “large scale”, so are you trying to say that 200TB or smaller is small scale?

  37. DrLoser says:

    A small thought. Slightly larger than Fifi’s capability to understand anything at all to do with IT, but still so tiny as to be almost insignificant … apart from the fact that it is obviously relevant to that cretinous cite of a Berkeley MSc.

    Can anybody here tell me the difference between a proper SSD and a flash card?

    I think you all (bar one) can.

  38. oiaohm says:

    “It’s a five page MSc document based, on its own admission, on a comparison of a small number of alternative storage configurations from 2010.”
    DrLoser, I don’t need to counter this. You don’t have a single cite that counters its results. Basically you want me to present more cites without presenting any yourself.

    Attacking a document because it is old and only tested a limited number of things is a fool’s move. If the document is wrong, you should be able to find a document where the tests were redone and the results proved wrong. In this case no such document exists. So you attacked me again without the required research, and on points that are not classed as solid reasons to reject the results of a document.

  39. oiaohm says:

    DrLoser, I am totally sick of you attacking keystone cites without providing counter-cites; you do it all the time. If a cite is wrong, you should be able to find a counter-cite. DrLoser, if you don’t have a counter-cite, don’t bother posting.

  40. DrLoser says:

    DrLoser, where is your counter-cite?

    I don’t need one, Fifi.

    “It’s a five page MSc document based, on its own admission, on a comparison of a small number of alternative storage configurations from 2010.”

    Fifi, where is your counter argument?

  41. DrLoser says:

    Yes, it’s looking like SSDs have matured sufficiently for me to begin using them.

    I’m sure the manufacturers of SSDs will be relieved to hear that they have a brand new outlet in a remote part of the unregarded province you live in, Robert. I mean, they only managed about 25 million units in the second quarter of 2015, which … well, who knows? … might be about $2.5 billion back then.

    They’ve almost certainly run out of opportunities to expand their market. Jump in, my boy, jump in!

    Special discount for people who love FLOSS so much that they can’t actually program in it!

  42. oiaohm says:

    It’s a five page MSc document based, on its own admission, on a comparison of a small number of alternative storage configurations from 2010.
    DrLoser, where is your counter-cite?

    The paper I quoted is the keystone. If you run the same tests today on new SSDs, they behave exactly the same. There is a limit to how far flash can go in trying to reduce the number of writes to extend its lifespan: SSDs delay deleting, so that if you send a block identical to one you had previously deleted, they can reallocate it.

    HP Moonshot was designed around the idea that SSDs would always boost performance. That paper also ends up explaining why Moonshot did not perform as well as the HP developers’ theory expected. This has led to HP’s development teams starting The Machine project, a system that is designed to operate fully using NVRAM/NVDIMMs and not use a single SSD or HDD.

    Do come back when you have MSc-standard figures, darling.
    DrLoser, do come back when you have a cite backing up what you are saying. I provided a cite that, yes, is getting a little old. There are no newer documents that counter its findings; in fact, go looking, DrLoser, and you will find that the newer documents back its findings. There is an issue with how SSD flash has to operate, and the solution is to plan to stop using SSD flash at some point in the future and use NVRAM/NVDIMMs where possible.

    I imagine that ram is quite capable of answering that question himself. Stop being rude and intruding into conversations that do not concern you.
    DrLoser, if Deaf Spy had not falsely attributed something to a person, I would have left it to ram to answer.

    And if Deaf Spy had done that, neither of you would now find yourselves in a position where you cannot present a cite to dig yourselves out of the hole of being absolutely wrong.

  43. DrLoser says:

    Deaf Spy, NVRAM has come into its own in the last 2 years.

    Are you now claiming to be ram, Fifi? Interesting. But unlikely, given your distinctive style of gibberish.

    And if you are not ram, why are you bothering to answer a question posed by Deaf Spy to ram?

    I imagine that ram is quite capable of answering that question himself. Stop being rude and intruding into conversations that do not concern you.

  44. DrLoser says:

    (Patiently, as if coaxing a particularly stupid young child to apply the minimal analysis required.)

    The link you said was not relevant documents the SSD stall. This causes your database to stall, and so causes you major latency hell. So SSD is the wrong tool for databases.

    It’s a five page MSc document based, on its own admission, on a comparison of a small number of alternative storage configurations from 2010.

    In passing, I won’t waste anybody’s time here asking the obvious question: Why does Fifi consider this to be so all-fired important? We all know the answer. It’s because he has the mental agility of a particularly small child who lacks the required mental agility. But I will ask the question: what on earth happened to American postgraduate education? Does this qualify as an MSc dissertation from Berkeley these days?

    Across these tests no single device consistently outperforms the others, therefore these results indicate that there is no one size fits all flash solution currently on the market …

    … in 2010, Fifi. Not now. Six or seven years ago.

    The approach of this, somewhat pitifully abbreviated, academic micro-document might be an appropriate approach. I encourage you, Fifi, to apply the same approach to a similarly tiny number, say five, of present-day comparative solutions.

    Do come back when you have MSc-standard figures, darling.

    In the meantime, I think we can all agree that any ignorant little fool who persists, time and time again, in claiming that this documentlet is any sort of authority whatsoever is just wasting our time.

  45. oiaohm says:

    Ram, are you sure you work in a large enterprise? If you really do, never show them this post of yours; they will fire you on the spot.
    Deaf Spy, NVRAM has come into its own in the last 2 years. So no, showing that to my boss will not get me fired. It fixes a random latency problem that you have with SSDs and that can cause systems to get swamped.

  46. oiaohm says:

    Just one example of practical use of SSDs is databases. SSDs are a great choice to store your temporary tables (if you can’t provide enough RAM for them).
    Deaf Spy
    http://www.pdsw.org/pdsw10/resources/papers/master.pdf
    The link you said was not relevant documents the SSD stall. This causes your database to stall, and so causes you major latency hell. So SSD is the wrong tool for databases.

    You used SSDs for databases because you did not have a board with enough RAM slots, not because it was the right choice.

    Now Fifi will go into a hail of idiocies and irrelevant links, but don’t let that distract you. He is an idiot.
    No, you are the idiot who just put up an example that was nuked by what I had already cited.

  47. Deaf Spy says:

    Ram, are you sure you work in a large enterprise? If you really do, never show them this post of yours; they will fire you on the spot.

  48. oiaohm says:

    Deaf Spy, which ones? They document the issues. I guess you have not read the master’s thesis and understood what it showed. Remember, NV solutions don’t have the problem; NVRAM has a 10ns response time.

    Please note that in my first post I said to put more RAM in; I did not say what type of RAM, did I, Deaf Spy?

    As for Fifi, he never fails to disappoint with two references that do nothing to support his absurd lunacies.
    Deaf Spy, and you have not put up a single point of your own. The reality is that it is your absurd lunacy to say my two references had nothing to do with it. One reference documented the SSD fault; the other documented workloads that do not depend on it. I had missed the NVRAM reference; I corrected that later, having at first mentioned only RAM.

    NVRAM being in mass production has changed a few things in higher-end hardware. It will make its way down to normal hardware at some point.

  49. Deaf Spy says:

    Just one example of practical use of SSDs is databases. SSDs are a great choice to store your temporary tables (if you can’t provide enough RAM for them). Even better application is reporting databases, including column stores, where data can be recreated at any moment.

    P.S.
    Now Fifi will go into a hail of idiocies and irrelevant links, but don’t let that distract you. He is an idiot.

  50. Deaf Spy says:

    See Ram? People do have other ideas, too.

    As for Fifi, he never fails to disappoint with two references that do nothing to support his absurd lunacies.

  51. oiaohm says:

    While there are many server-roles that do not require the speed of SSD, there are many that might benefit from the lower power-consumption and smaller size.

    The reality is that at larger scales SSD garbage collection makes it uncompetitive with NVRAM in power usage. Also, not all SSD drives are better on power usage than HDDs either, and HDD still has the storage-density advantage over SSD.

    Robert Pogson, the reality is that SSD is a hack to boost speed when you cannot use NVRAM due to budget limitations.

    There are very few examples where SSDs are the perfect choice when you get into large-scale Linux.

  52. oiaohm says:

    as not having one slows down my write speeds due to the array calculating parity.
    dougman, at some point the SSD will garbage collect and stall.

    A large-scale Linux system is not your small-scale NAS item. At large scale you can be hitting the SSD garbage-collection issue every few seconds.

    “Large-scale Linux system” kills a lot of use options for SSDs instantly, because those workloads bring out the worst in SSDs.

    https://www.micron.com/products/dram-modules/nvdimm#/
    If you have an unlimited budget you can start looking in the NVDIMM direction: yes, as fast as your normal RAM, except that it actually holds onto its contents and does not have a garbage-collection problem. So once you get past a particular scale, SSDs are nothing more than trouble-making bits of hardware.

  53. oiaohm wrote, “For a server used by a very limited number of people there might be absolutely no advantage to using an SSD.”

    While there are many server-roles that do not require the speed of SSD, there are many that might benefit from the lower power-consumption and smaller size. In these days where density matters, SSD could be important. I don’t have a space problem so I use spinning discs for storage but SSDs do use less power and I could benefit from that as well as the speed. e.g. I could put / and /home on SSD and /var on spinning discs. Users would appreciate greater responsiveness and we’d still have the bulk of storage for images, videos, and databases.

  54. dougman says:

    “Outside of a webserver or webserver cache, when would an SSD be useful in a large scale Linux system?”

    I use a SSD on my UNRAID NAS for my cache array, as not having one slows down my write speeds due to the array calculating parity.

  55. oiaohm says:

    Deaf Spy, now put up your example. If it is the one I think it is, you have got it completely wrong. As normal, the clueless attacking someone.

  56. oiaohm says:

    https://flashdba.com/2013/06/27/storage-myths-iops-matter/
    Deaf Spy, the link above, from 2013, points out the issue. Sure, you might put in an SSD with a higher IOPS rate than an HDD. But say you instead invest the extra cost of the SSD in RAM and a non-x86 CPU and go with an HDD; the result might be that, once the system is started, performance is faster on the system without the SSD.

    There are a lot of production workloads where an SSD in fact makes absolutely no runtime performance difference compared to using an HDD with enough RAM and a suitable CPU. It can depend on what services you are starting and how much the system is queuing up reads.

    So SSD is not a win in all cases. For a server used by a very limited number of people there might be absolutely no advantage to using an SSD.

    You can think of SSDs like fancy cars. Yes, fancy cars can run at high speed, but does it matter when you can only drive at a maximum of 100 km/h and you are always in traffic?

    Quite a few workloads that are faster with an SSD on x86 are, when a POWER CPU with suitable RAM is used instead, no faster than using an HDD. What is going on here? Simple: the POWER Linux kernel uses 32k blocks instead of the standard 4k blocks x86 Linux uses, so it utilises I/O better and cuts down on HDD seeking.

    This explains why the Linux kernel needs the order-9 feature in the page cache to support blocks bigger than 4k. So there are still some quite big performance improvements that can be made for the old-school HDD.

    http://www.pdsw.org/pdsw10/resources/papers/master.pdf
    People forget that SSDs have to do garbage collection to free up deleted blocks for reuse. So an SSD might allow your system to boot faster but also cause it to stall randomly under load. The old HDD might have slower IOPS, but it can normally deliver those IOPS 24/7 with zero stalls.

    So when someone starts saying a workload is faster with an SSD, you have to take it with a serious grain of salt. Are they committing you to hard-to-debug random stalls?

    There are so many tasks where predictable latency is more important than faster IOPS.

    Yes, “boots faster but stalls randomly” does not exactly sound like a good deal.

    SSD and HDD both have advantages and disadvantages. How many times do people deploy SSDs, completely ignore their random-stalling fault, and then waste tons of resources trying to fix server performance problems that would have been totally prevented if they had put more RAM in the server and stuck with HDDs? Bcache under Linux, where an SSD is used in combination with an HDD, is more stable in performance because if the SSD is stalled, reads and writes can fall back to the HDD. In fact, the Intel RAID controller does this in some laptops.

    So the stable, useful SSD usage is caching (a minimal bcache sketch follows at the end of this comment). Other usages you hear a lot about can simply be playing with fire.

    Deaf Spy, your turn now: where are your points showing ram was wrong? Or is it that you were not up on the topic yourself?
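
    As a minimal sketch of that caching setup (bcache with an SSD cache in front of a spinning disc; the device names are examples only, and make-bcache wipes whatever is on them):

    ~ $ sudo make-bcache -B /dev/sdb -C /dev/sdc    # sdb = HDD backing device, sdc = SSD cache
    ~ $ echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
    ~ $ sudo mkfs.ext4 /dev/bcache0 && sudo mount /dev/bcache0 /mnt

    If the SSD misbehaves, the cache can be detached and the backing HDD keeps serving I/O, which is the fall-back behaviour described above.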

  57. Deaf Spy says:

    I have no other real world practical example …

    Of course you don’t, sweetheart. But when has that ever stopped you from making up your own? Don’t give up, Fifi, we know you can do it.

    Let me give you a hint for one particular use. The first and the last letters of its singular form are next to each other in the English alphabet.

  58. oiaohm says:

    Deaf Spy, who is Fifi? No one named Fifi has answered. Wait, this is because you cannot bear to use someone’s correct handle, right?

    See, Ram? Fifi has got an idea indeed!
    If you think Fifi is me: I have no other real-world practical example that is truly 100 percent independent and commonly used, only variations, just different forms of caching that could be used with a web server or other server solutions. That is just you, Deaf Spy, guessing that you know other people’s answers. It is about time you put up what your ideas are. Let’s see how many of them are valid.

    How many times did the TMR guys say I wrote things when I never did?

  59. Deaf Spy says:

    See, Ram? Fifi has got an idea indeed!

    Not that I have any faith Robert would have chipped in on this one. The matter is beyond him. And all the others would simply wait for brilliant ideas and a good laugh.

  60. oiaohm says:

    Maybe if Deaf Spy had not referenced a person who does not exist, some answers would have come.

  61. ram says:

    Don’t see anybody offering up any suggestions.

  62. Deaf Spy says:

    Outside of a webserver or webserver cache, when would an SSD be useful in a large scale Linux system?

    Bwa-ha-ha-ha!

    That is a good one, Ram. It perfectly explains your mostly meaning-deprived posts here. Even Fifi could have thought of at least one other perfect use of SSDs.

  63. ram says:

    Outside of a webserver or webserver cache, when would an SSD be useful in a large scale Linux system? I have a few SSD’s I’m not even using, ripped them out of some used machines I picked up and so far have not found a use for them.

  64. dougman says:

    “Beast is more complex a computing platform than your toy.”

    Your “beast” boots slower than my UNRAID NAS. Then again, I am not using systemd, and if my “toy” wanted to, it could run a terminal server and an FTP file server without a hiccup.

    “Systemd pauses for >20s twice in the booting process.”

    Blacklist the offending process (see the commands at the end of this comment).

    “the old drives are really old and even the motherboard is beginning to fail again. I intend to replace the whole system this year.”

    Why wait? Start on it now! You are going to wait too long and all your drives and board are going to die, leaving you nothing. Quit being so pig-headed and go buy that Intel board. With all the time you have spent moaning, whining and bitching over a WINTEL conspiracy, I could have built what, a few hundred servers? For someone who has limited time left on this Earth, you sure do fart around and waste copious amounts of time.
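
    For what it’s worth, finding whatever is eating those 20-second waits and masking it is usually just a matter of (the unit name here is only a placeholder):

    ~ $ systemd-analyze blame | head
    ~ $ systemd-analyze critical-chain
    ~ $ sudo systemctl mask some-offending.service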

  65. dougman wrote, “offer nothing in your rebuttal explaining why you system is so slow. Obviously it must aggravate you, as you are looking at SSD’s now.”

    1. My system is not slow. It’s snappy. Systemd pauses for >20s twice in the booting process.
    2. My interest in SSD has nothing to do with booting, just file seek/transfer speed in normal use. Beast is a server. Booting is not normal use. I don’t care how it boots as long as it does. LibreOffice starts up in ~1.5s. FireFox is quick enough. Download speeds are amazing. What more do I want? I don’t need more speed but the old drives are really old and even the motherboard is beginning to fail again. I intend to replace the whole system this year.
  66. dougman says:

    More complex? Please explain how this is so.

    You blame systemd for your woes and call it stupid, but offer nothing in your rebuttal explaining why your system is so slow. Obviously it must aggravate you, as you are looking at SSDs now.

  67. dougman wrote, “Yours takes 90-seconds, that’s terrible.”

    I agree, systemd is pretty stupid and Beast is more complex a computing platform than your toy.

  68. dougman says:

    “Most of the time it’s just counting down the clock waiting for something to happen and it times out. Silly.”

    I don’t think you know what you are doing.

    Looking at mine, I am under one second to load all my settings. Yours takes 90 seconds; that’s terrible.

    ~ $ systemd-analyze
    Startup finished in 2.312s (kernel) + 1.021s (userspace) = 3.334s

  69. dougman wrote, “what kind of poopey scripts you have loading?”

    I know. With sysvinit, Beast can boot in ~30s. With systemd this is what I get. Most of the time it’s just counting down the clock waiting for something to happen and it times out. Silly.

  70. dougman says:

    “1min 25.870s (userspace)”

    WTF??????…what kind of poopey scripts you have loading?

  71. Deaf Spy wrote, “why did you preach that Linux had faster boot times than Windows (at least according to you) as a great advantage?”

    This is about SSDs, not the OS. Back in the day, I did see GNU/Linux boot twice as fast as TOOS on identical hardware. The difference was startling. 2s versus 10s is quite a large ratio but not startling, as it takes me that long to get into my chair these days. We are no longer in the range of a minute to get a useful desktop, as it was with XP and 40GB hard drives. No SSD can make TOOS’ inadequacies go away. That was part of the myth of Wintel: that if you spent enough on hardware, TOOS would be fine, despite the malware and the EULA and the monopoly.

  72. kgibran says:

    I still prefer a mirror of three HDDs as it provides excellent reliability and sufficient speed.

  73. Deaf Spy says:

    Do I care if I use 8s longer …? Nope.

    Really? Then why did you preach that Linux had faster boot times than Windows (at least according to you) as a great advantage?

    You are a hypocrite.

  74. dougman wrote, “~ $ systemd-analyze
    Startup finished in 2.312s (kernel)”

    systemd-analyze
    Startup finished in 10.341s (kernel) + 1min 25.870s (userspace) = 1min 36.212s

    Do I care if I use 8s longer to get Beast started on a new kernel every couple of weeks? Nope.

  75. dougman says:

    Been using an SSD for many years now; you are a laggard in many, MANY aspects. My Linux machine boots in two seconds.

    ~ $ systemd-analyze
    Startup finished in 2.312s (kernel)

    Do pry the cover off your SSD and check for Intel chips, you do not want to be caught running Intel again Pogsey, that was embarrassing, to say the least.
