Project: Bootstrap

19 inches of...hardware.
Stavros
ΜΟΛΩΝ ΛΑΒΕ
Posts: 1104
Joined: 02 Jan 2006, 17:00
Location: Mississippi, U.S.A.

Project: Bootstrap

Post by Stavros »

So I've been wanting to build a really powerful all-in-one VM server/homelab. I've wanted to do this for a good 4 or 5 years at this point. I've been making do with Docker containers on my Synology 918+, but I've run into some unsolvable issues unless Synology updates their ancient Linux kernel (4.4.302+). Not only that, but I'm limited to 16GB of RAM. Don't get me wrong, as a first step into the NAS sphere it was and is a great set-up-and-go box, but when you want to run something modern like WireGuard it's limited by poor support. I wish I'd had the foresight to build a NAS myself.

Anyway, to give you an idea of what I run on the Synology: I have a qbittorrent-vpn Docker image (combined VPN and BitTorrent client) sitting behind a Mullvad VPN using OpenVPN. I also have Radarr, Sonarr, Plex, Prowlarr, Unpackerr and Quassel IRC Core. I've got two other Docker containers that I don't actually use: Bazarr and Readarr.

The two main reasons I want to build a VM server are that I tried running GitLab in a Docker container and it ate the majority of my RAM, and that I'm stuck with an ancient Linux kernel, as mentioned earlier. I also want to uncouple the VPN and BitTorrent client so I can use the VPN container as a proxy for other containers. And I need to host a backend for an application I'm writing (SQL/app/web/etc.), most likely Appwrite or Supabase.
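The uncoupling part should be pretty painless in Docker itself, for what it's worth. A rough sketch of the pattern, with made-up image names (mullvad-image and qbittorrent-image are placeholders):

    # run the VPN client as its own container; ports for anything that
    # will sit behind it get published here, on the vpn container
    docker run -d --name vpn --cap-add NET_ADMIN -p 8080:8080 mullvad-image

    # piggyback other containers on the vpn container's network stack,
    # so all of their traffic goes out through the tunnel
    docker run -d --name qbittorrent --network container:vpn qbittorrent-image

The same --network container:vpn flag works for any other container I want behind the tunnel.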

So what am I planning on running it on? Well, that's still up in the air, but I've got a list of parts so far that I'm reasonably sure I'm going to go with.

Parts List:
  • CPU: AMD Epyc 7702(P?)
  • Motherboard: ASRock Rack ROMED8-2T [1]
  • RAM: 256GB DDR4 ECC
  • Case: Fractal Design Meshify 2 XL [2]
  • Extra Fans: 4x 120mm fans
[1] - Also considering a Supermicro H12SSL-NT, although 7 full-size PCIe x16 slots seem more flexible
[2] - Requires extra HDD trays. 6 included, 18 positions total + 2 multibrackets (whatever multibrackets are)

I found the Meshify 2 XL on Amazon for $160 (normally ~$205) and the brackets on Walmart.com, of all places, for the cheapest price ($14 per pair x 3). Dad sent me $200 on PayPal for Christmas that I forgot about until yesterday, so it ended up costing me about $15 out of pocket.

Decisions to make: SAS HDD vs SATA HDD vs SAS SSD vs SATA SSD. I'm still working my way toward understanding this, and whether I need an HBA (Host Bus Adapter) or a SAS expander. I'm unsure whether I want to get SAS drives, but if I do I'm going the used enterprise route and testing the shit out of them (another post on that if and/or when that happens).

bad_brain
Site Owner
Posts: 11639
Joined: 06 Apr 2005, 16:00
Location: In your eye floaters.

Re: Project: Bootstrap

Post by bad_brain »

noice! :D

I played with Apache Mesos for a while. Too bad it's dead, I think you would have found it interesting.

and yeah, server hardware takes harder compatibility cuts with updated software than desktop hardware does. the last one was when cloud computing became the new hot shit, the used hardware market always gets suddenly crowded then. the ancient PowerEdge blade I once had became unusable when any attempt to run a half-way recent kernel resulted in a permanent and unfixable 100% speed on the two 40x40 fan banks... the whole room was vibrating, lol

for the HDDs, if I had the choice (and the spare change) I would go NVMe for the OS and SAS HDDs for anything storage. but from what I have seen, NVMe would require an expansion card on the board you have in mind, which is of course extra cost and potential extra problems, so a SATA SSD for the OS might be the best choice.

Stavros
ΜΟΛΩΝ ΛΑΒΕ
Posts: 1104
Joined: 02 Jan 2006, 17:00
Location: Mississippi, U.S.A.

Re: Project: Bootstrap

Post by Stavros »

I definitely want a SAS-capable board for the option of going either SAS or SATA. I like options. However, it seems the two headers would only let me connect 8 drives, so I'd still need an HBA (they aren't that expensive on eBay). I can get an LSI 9400-16i for about $100; older models are about half that. That of course brings its own challenges with cooling, as I'll have to strap either a 48mm or an 80mm fan to the heatsink. I guess I could go the cheap route: fill it up halfway at first using the connections on the motherboard, and when the need to expand comes, get an HBA and attach the rest to that.

I realized I forgot to mention that I'm going to run Proxmox as my hypervisor, and I want to use ZFS for my file system.

As far as storage goes, I think I'm going to have a mix. I thought about going all-flash, and that would be ideal if I could, but the downside is I'd have less space, and I want a lot of space. I'm not sure how I want to split the pools up, but I'm thinking NVMe for the OS (there are 2 M.2 NVMe slots on the ROMED8-2T) like you said, bad_brain, then spinning rust for bulk storage and SSDs for VMs. I think I still want used enterprise SAS drives, and I can get those off eBay (https://www.ebay.com/str/rhinotechnologygroup) for pretty good prices, as well as from serverpartsdeals.com and GoHardDrive.com. I just have to buy extras and thoroughly test them, but I was given advice on how to do that.
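The short version as I understand it: SMART checks wrapped around a full destructive write test. Something like this, where /dev/sdX is a placeholder (and badblocks -w wipes the drive, so only on empty disks):

    smartctl -a /dev/sdX              # baseline SMART report
    smartctl -t long /dev/sdX         # full internal self-test; results show up in -a later
    badblocks -wsv -b 4096 /dev/sdX   # 4-pattern write/verify, takes days on 10TB drives
    smartctl -a /dev/sdX              # re-check for reallocated/pending sectors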

Stavros
ΜΟΛΩΝ ΛΑΒΕ
Posts: 1104
Joined: 02 Jan 2006, 17:00
Location: Mississippi, U.S.A.

Re: Project: Bootstrap

Post by Stavros »

Well, I kind of screwed up: I ordered the Fractal Meshify 2, not the Meshify 2 XL. I lose 4 bays, but in the storage configuration I still get 14 drive positions instead of 18. Holy crap is this a great case. I wish I'd spent the money on a Fractal case in 2019. Don't get me wrong, my Phanteks P400 is a great airflow case, but the quality on the Fractal is on another level, especially when it comes to cable management. That's all I've got ordered at the moment. I have a total of 10 drive trays (the case comes with 4), and the side panels are completely tool-less.

Stavros
ΜΟΛΩΝ ΛΑΒΕ
Posts: 1104
Joined: 02 Jan 2006, 17:00
Location: Mississippi, U.S.A.

Re: Project: Bootstrap

Post by Stavros »

ZFS RAIDZ vdev expansion is now a feature: https://github.com/openzfs/zfs/releases

Of course there is a downside:
After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity). However, the RAIDZ vdev's "assumed parity ratio" does not change, so slightly less space than is expected may be reported for newly-written blocks, according to zfs list, df, ls -s, and similar tools.
However, there is an in-place rebalancing script for that: https://github.com/markusressel/zfs-inplace-rebalancing

I've been doing reading on ZFS, zpools, vdevs and RAIDZ, and came across an article that recommends mirrors over RAIDZ for performance reasons. The downside of course is that your "disk efficiency" is halved (e.g. 6x10TB HDDs in a striped mirror only give you 30TB usable instead of 60), but resilvering a 2-way mirror is way faster than resilvering a RAIDZ-n vdev.
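To put that in zpool terms, the striped-mirror layout would look something like this (device names are placeholders):

    # 6x10TB as three 2-way mirror vdevs striped together: ~30TB usable
    zpool create tank \
        mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
        mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4 \
        mirror /dev/disk/by-id/disk5 /dev/disk/by-id/disk6

    # growing the pool later is just tacking on another mirror vdev
    zpool add tank mirror /dev/disk/by-id/disk7 /dev/disk/by-id/disk8

And expanding that way comes with none of the parity-ratio caveats quoted above.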

One thing I'm not completely sure on (though I suspect it works, I haven't found any documentation) is that the ROMED8-2T has SATA compatibility, but I'm unsure whether miniSAS (SFF-8643) to SAS (SFF-8482) breakout cables would work out of the box or whether I'd have to get an HBA.

Stavros
ΜΟΛΩΝ ΛΑΒΕ
Posts: 1104
Joined: 02 Jan 2006, 17:00
Location: Mississippi, U.S.A.

Re: Project: Bootstrap

Post by Stavros »

A little bit of an update. I ended up compromising on the CPU: instead of the 64-core Epyc 7702, I found a 32-core Epyc 7532 for $200. I'd rather save $400 and put that towards enterprise spinning rust/enterprise SSDs. Not to mention the 7532 has a higher base clock speed. I ended up buying the ROMED8-2T new, as everything on eBay was basically the same price as new. There was one listing for $450-ish, but shipping wouldn't get it here until February 5th, and I want to get all the parts in and start testing while still inside the return period. Honestly, this part is too expensive to leave to chance.

So the list of parts I've ordered so far:
  • CPU - AMD Epyc 7532 - $199.95 - Amazon
  • CPU HSF - Arctic 4U-M (Rev. 2) - $54.99
  • Motherboard - ASRock Rack ROMED8-2T - $639.99 - Newegg
  • RAM - A-Tech 128GB (4x32GB) - $99.99 - Newegg
  • Case - Fractal Design Meshify 2 - $149.99 - Amazon
  • PSU - Corsair RM1000x 1000W fully modular ATX - $189.99 - Amazon
  • OS SSD - (4 pack) WD Ultrastar DC SA210 120GB SSD - $29.99 - eBay
Grand total so far - $1,364.89

Will the 120GB M.2 drives be big enough for Proxmox? I don't know, but I plan on running them in a mirror, so if it turns out I need bigger drives or more endurance I'd rather be able to resilver quickly than rebuild a RAIDZ over the course of a few days with severely impacted performance.
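If I do outgrow them, swapping a mirror member for a bigger drive is a quick in-place job. Roughly (pool and device names are placeholders, and on Proxmox the boot partitions would need their own copy step with proxmox-boot-tool):

    # attach the new disk as an extra mirror member and let it resilver
    zpool attach rpool old-ssd /dev/disk/by-id/new-bigger-ssd
    zpool status rpool    # wait for the resilver to finish
    # then drop the old disk from the mirror
    zpool detach rpool old-ssd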

Other things I'll need to buy:
  • 120mm Case Fans
  • SAS HDDs
  • SAS SSDs
  • SAS HBA
Of critical importance will be case fans. Since I plan on running enterprise drives, and they run hotter, I really need to keep this server cool. Add to that the fact that at some point I'll need an HBA, and I want to make sure I've got cooling covered. I usually skimp on that; I've never really put extra case fans in my desktop builds.

As far as SAS HDDs go, I'm looking at HGST 10TB drives in mirrors, and if I need something more performant (say, for a SQL DB) I'd ideally get a Micron S630DC or something similar. eBay has decent deals on used SAS hard drives.
