Thinking About Networked Storage

“block devices exported via AoE cannot be accessed by IP. Without this additional overhead, network performance does improve when accessing the exported block device(s). This non-routability also adds to the security of the technology. In order to access the volumes, you need to be physically plugged in to the Ethernet switch hosting them.
 
As for how data is transferred across the line, AoE encapsulates traditional ATA commands inside Ethernet frames and sends them across an Ethernet network as opposed to a SATA or 40-pin ribbon cable.”
 
See Mastering ATA over Ethernet
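That encapsulation is simple enough to sketch. Here is a minimal Python illustration, based on the published AoE protocol specification (AoEr11), of what one AoE ATA-command frame looks like on the wire. The MAC addresses, shelf/slot numbers and the helper name aoe_ata_frame are made up for illustration; a real initiator would use the Linux aoe driver rather than hand-built frames. Notice there is no IP header anywhere, which is exactly why AoE traffic cannot be routed.

import struct

# A sketch of AoE framing per the AoEr11 spec. Illustrative only:
# a real initiator uses the Linux aoe driver, not hand-built frames.

AOE_ETHERTYPE = 0x88A2   # AoE's registered EtherType; there is no IP layer
AOE_VERSION = 1

def aoe_ata_frame(dst_mac, src_mac, major, minor, tag,
                  ata_cmd, lba, sectors, write=False):
    """Build one Ethernet frame carrying one AoE ATA command."""
    eth = struct.pack("!6s6sH", dst_mac, src_mac, AOE_ETHERTYPE)
    aoe = struct.pack("!BBHBBI",
                      AOE_VERSION << 4,  # version in high nibble, flags low
                      0,                 # error code (unused in requests)
                      major,             # shelf address of the target
                      minor,             # slot address of the target
                      0,                 # AoE command 0 = issue ATA command
                      tag)               # echoed back to match the response
    aflags = (1 << 6) | (1 if write else 0)  # bit 6: LBA48, bit 0: write
    ata = struct.pack("!BBBB6s2x",
                      aflags,
                      0,                          # err/feature register
                      sectors,                    # sector count
                      ata_cmd,                    # e.g. 0x24 = READ SECTORS EXT
                      lba.to_bytes(6, "little"))  # lba0..lba5, low byte first
    return eth + aoe + ata

# Read one sector from shelf 0, slot 1 (the device the Linux aoe driver
# would expose as /dev/etherd/e0.1); the MAC addresses are invented.
frame = aoe_ata_frame(b"\xff" * 6, b"\x02\xab\xcd\xef\x00\x01",
                      major=0, minor=1, tag=1,
                      ata_cmd=0x24, lba=0, sectors=1)
print(len(frame), "bytes, EtherType 0x88A2, no IP header => not routable")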
One of the problems I’ve been facing with replacing my AMD64 server, Beast, with small cheap ARMed systems is that the small cheap ones have very limited support for SATA and gigabit/s Ethernet. What’s the point of hanging a few hard drives on a cheap board if the network connectivity is just 100 Mbit/s or even a single 1 Gbit/s port? That connectivity is a bottleneck. I still need a server that can access large storage quickly for web applications and file serving.

One possible solution is just to hang one hard drive on each of several small cheap ARMed SBCs (Single Board Computers) and have a server access the drives using AoE, ATA over Ethernet, in Linux. My server could then be something a little more robust with multiple Ethernet ports, and the smaller, cheaper SBCs would just be interfaces to the drives. The server could see a total bandwidth of N × 1 Gbit/s to storage and serve 1 Gbit/s to clients. The server could also do the big I/O-intensive jobs like building kernels or running databases. Some suitable small cheap boards are ~$50 and a more robust server board would be ~$200.
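A quick back-of-the-envelope check of that bandwidth claim, in Python. The 150 MB/s sustained-throughput figure for one hard drive is my assumption, not a measurement:

# Back-of-the-envelope check of the aggregate-bandwidth claim. The
# 150 MB/s per-drive figure is an assumption, not a measurement.

LINK_MBPS = 1e9 / 8 / 1e6   # one 1 Gbit/s link in MB/s (125 MB/s)
DRIVE_MBPS = 150            # assumed sustained HDD throughput

def storage_bandwidth(n_links):
    """Each SBC carries one drive, so its link caps that drive."""
    return n_links * min(DRIVE_MBPS, LINK_MBPS)

for n in (2, 4, 6):
    print(f"N={n}: ~{storage_bandwidth(n):.0f} MB/s aggregate to storage")
# With N=4 the server sees ~500 MB/s from storage, while one
# 1 Gbit/s client-facing port still caps serving at ~125 MB/s.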

I could also use drives in USB3 enclosures and connect them to the server via USB3.

An advantage of this solution would be that I would not be limited to the 1 or 2 SATA storage devices many small boards allow. I could have 4-6 and add more as the need arises. I could add Ethernet ports to the server instead of SATA ports, via PCIe or USB3 if the bigger board doesn’t have enough of them. Another advantage is that I could easily have some redundant interfaces in case one should die. More channels are a better situation than few. Total cost would be ~$200 + N × $50, perhaps half what one of the huge boards costs for N=4. This solution might also justify using a 10 GbE port or two and a 10 GbE switch, but that’s unlikely as I only have 2-3 clients active normally.
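For what it’s worth, the cost arithmetic works out like this (prices are the rough estimates above):

# Cluster cost at the rough prices given above.
server_board = 200   # ~$200 for the more robust server board
sbc_price = 50       # ~$50 per small drive-hosting SBC

for n in (2, 4, 6):
    print(f"N={n}: ${server_board + n * sbc_price}")
# N=4 comes to $400; "perhaps half" implies one of the huge
# many-SATA boards runs around $800.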

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, and picking berries and mushrooms, too.