Author: homenode

  • My Quest for the Perfect Off-Site Homelab Backup

    My Quest for the Perfect Off-Site Homelab Backup

    Struggling to choose an OS for your off-site homelab backup? Explore the options for protecting both Proxmox and TrueNAS data in a 3-2-1 strategy.

    Anyone who runs a homelab knows the feeling. You spend weeks, maybe months, getting everything just right. Your Proxmox nodes are humming along, your TrueNAS server is dishing out files perfectly, and your collection of VMs and containers is a thing of beauty.

    Then comes the quiet, creeping thought: What if this all just… disappeared?

    A power surge, a failed drive, a catastrophic mistake in the terminal—it happens. That’s why we all know the golden rule of data safety: the 3-2-1 backup strategy.

    • 3 copies of your data
    • On 2 different types of media
    • With 1 copy kept off-site

    I’ve got the first two parts down. My local backups are solid. But that last part, the “1 off-site copy,” has turned into a surprisingly tricky puzzle.

    My Homelab’s Two Halves

    My setup isn’t too complicated, but it has two distinct parts that don’t always want to play nicely together when it comes to backups.

    • The Proxmox Side: I have a couple of small PCs running Proxmox, hosting all my virtual machines and containers. This is the brains of the operation. For backups, I use the excellent Proxmox Backup Server (PBS). It’s incredibly efficient at what it does.
    • The TrueNAS Side: I also have a dedicated NAS running TrueNAS Scale. This is where all the “stuff” lives—media, documents, phone backups, you name it. It’s the heart of the storage.

    The challenge is getting both of these systems backed up to a single, off-site machine that I plan to leave with a trusted friend in another city. My first thought was that no matter what I chose for the off-site server’s operating system, one backup process would be easy and the other would be a headache.

    The Big Question: What OS for the Off-site Box?

    So, what operating system should I install on this remote backup machine? It needs to gracefully handle backups from Proxmox Backup Server and pull in all the datasets from TrueNAS. After a lot of thought, I narrowed it down to a few key options.

    Option 1: Use Proxmox Backup Server as the OS

    This was my first instinct. Why not use the tool designed for the job?

    The Good:
    Backing up my Proxmox cluster would be seamless. I could add one PBS instance as a “remote” of the other and set up a sync job to copy backups off-site. It’s the native, intended way to do it. All the fancy features like deduplication, encryption, and verification would just work.

    The Complication:
    How do I get my TrueNAS datasets onto it? PBS is designed for virtual machine backups, not general file storage.

    But then I stumbled upon a clever solution: you can install the Proxmox Backup Client on another Linux system. Since TrueNAS Scale is built on Debian, it’s possible to install the client directly onto the TrueNAS machine. From there, you can write a script to back up specific datasets to the remote PBS server.

    It’s not a one-click solution, but it’s a very clean and powerful one. It keeps the PBS side of things pure while providing a robust, scriptable way to handle the TrueNAS data.
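
    For a concrete idea of what that script might look like, here is a minimal sketch (not my actual script): the repository string, host, backup user, and dataset paths are all placeholders, and it assumes proxmox-backup-client is already installed on the TrueNAS box.

    #!/usr/bin/env python3
    # Minimal sketch: push a few TrueNAS datasets to a remote Proxmox Backup Server
    # using the proxmox-backup-client CLI. Repository, datasets, and credentials
    # below are placeholders -- adjust for your own setup.
    import os
    import subprocess

    # user@realm@host:datastore -- the PBS "repository" string (placeholder values)
    PBS_REPOSITORY = "backup@pbs@203.0.113.10:offsite-store"

    # Datasets (as mounted paths) to back up, each becoming its own .pxar archive
    DATASETS = {
        "documents": "/mnt/tank/documents",
        "photos": "/mnt/tank/photos",
    }

    def backup(archive_name: str, path: str) -> None:
        env = os.environ.copy()
        env["PBS_REPOSITORY"] = PBS_REPOSITORY
        # PBS_PASSWORD (or an API token) would normally come from a secrets file,
        # not be hard-coded here.
        subprocess.run(
            ["proxmox-backup-client", "backup", f"{archive_name}.pxar:{path}"],
            env=env,
            check=True,
        )

    if __name__ == "__main__":
        for name, path in DATASETS.items():
            backup(name, path)

    You could run something like this from a cron job or a post-init task on the TrueNAS box. One caveat: TrueNAS SCALE updates may wipe packages installed outside its appliance model, so the client might need reinstalling after an upgrade.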

    Option 2: Use TrueNAS Scale as the OS

    What if I went the other way and put TrueNAS on the remote machine?

    The Good:
    Backing up my local TrueNAS server would be incredibly simple. I could use ZFS replication (zfs send/recv), which is built right into TrueNAS. It’s ridiculously fast, efficient, and reliable for syncing datasets between two ZFS systems.
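
    Under the hood, that replication is essentially an incremental zfs send piped into zfs receive on the other end (TrueNAS wraps this in its Replication Tasks UI). A rough sketch, with made-up pool, dataset, snapshot, and host names:

    import subprocess

    # Incremental zfs send from the local pool, received on the off-site box over SSH.
    # Pool, dataset, snapshot, and host names below are placeholders.
    send = subprocess.Popen(
        ["zfs", "send", "-i", "tank/media@auto-2024-06-01", "tank/media@auto-2024-06-02"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", "offsite-nas", "zfs", "receive", "-F", "backup/media"],
        stdin=send.stdout,
        check=True,
    )
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")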

    The Complication:
    This makes the Proxmox side of things much harder. How would I back up my PBS data to a TrueNAS box? I’d essentially have to treat the PBS datastore as a giant file and copy it over. This would completely defeat the purpose of PBS. I’d lose the ability to browse old backups, the amazing deduplication, and the simple restore process. This feels like a major step backward.

    Option 3: Just Use a Standard Linux OS (like Debian)

    What about a blank slate? I could install a minimal Debian server and build my own solution.

    The Good:
    Maximum flexibility. I could install the Proxmox Backup Server application on top of Debian. I could also set up ZFS on the disks and use it as a target for ZFS replication from my TrueNAS box. In theory, I could have the best of both worlds.

    The Complication:
    This is the most hands-on approach. I’d be responsible for configuring and maintaining everything from the ground up. While PBS itself is based on Debian, starting from scratch means more work to get to a stable, reliable state. It’s a great option if you love tinkering, but it adds complexity I’m not sure I want for a critical backup machine.

    What About Just Using the Cloud?

    I also had to ask myself: Should I even be building a physical machine for this? Maybe a cloud provider is the answer. Services like Backblaze B2 are popular in the homelab community for a reason.

    The Good:
    It’s simple. There’s no hardware for me to buy, set up, or worry about. It’s someone else’s job to keep the disks spinning.

    The Bad:
    The cost can be unpredictable, especially if I have a lot of data. And the biggest issue is the restore process. If my local lab goes down, downloading terabytes of data from the cloud would take a very, very long time. There’s also the matter of privacy and control over my data.

    My Final Decision

    After weighing all the options, I’m leaning heavily toward Option 1: using Proxmox Backup Server as the OS for the off-site machine.

    It feels like the most elegant compromise. It keeps the backup and restore process for my most complex systems—the VMs and containers—as simple and reliable as possible. The method for backing up the TrueNAS data using the client is a well-documented and powerful workaround.

    It’s a solution that prioritizes the integrity of the most critical backups while still providing a clear path for everything else. Now, I just need to build it. But that’s the fun part, right?

  • So You Got a VNXe3200 for Your Homelab. Now What?

    So You Got a VNXe3200 for Your Homelab. Now What?

    Struggling to configure your EMC VNXe3200 in your homelab? Learn the simple steps to find its IP and get it running with the connection utility.

    So you did it. You found a great deal on an old piece of enterprise gear, and now there’s a hefty, powerful EMC VNXe3200 sitting in your homelab. It’s exciting, right? All that potential for storage, for learning, for tinkering. You get it racked, plugged in, and powered on. The lights are blinking. The fans are humming (or, let’s be honest, roaring).

    You log into your network controller, ready to assign it an IP, and… nothing. You can see the device, a mysterious client with a MAC address, but it hasn’t pulled an IP. It’s just sitting there, silent.

    If this is you, don’t worry. Your new toy isn’t a brick. You’ve just hit the classic first hurdle of wrangling enterprise hardware.

    Why It’s Not Showing Up

    Unlike a simple Raspberry Pi or your desktop PC, these kinds of storage arrays don’t just ask for an IP address from your router out of the box. They are designed for corporate networks with specific setup procedures. They wake up with a default, hard-coded IP address and expect you to connect to them in a very specific way.

    For the VNXe3200, the system is waiting for you to find it. And to do that, you need a special tool and a specific network configuration.

    The Secret Weapon: The Connection Utility

    The key to unlocking your VNX is a piece of software called the EMC VNX Connection Utility (sometimes also called the Initialization Tool). This little program is designed to do one thing: scan the network for unconfigured arrays and let you perform the initial setup.

    The catch? Finding the utility can sometimes be a bit of a treasure hunt, as this hardware is a few generations old. The first and best place to look is the official Dell support website, which now hosts all the legacy EMC support files. You’ll likely need to search for your specific model (VNXe3200) to find the corresponding tool.

    Your Step-by-Step Guide to Getting Connected

    Ready to get this thing talking? It’s actually pretty straightforward once you know the steps.

    1. The Direct Connection

    First, forget your main network for a minute. You need to connect directly to the array.

    • Take a laptop or desktop computer.
    • Plug an ethernet cable directly from your computer into one of the management ports on the back of the VNXe3200. Don’t plug it into the storage (fibre channel or iSCSI) ports.

    2. Set a Static IP on Your Computer

    This is the most crucial step. Your VNX has a default IP address, and you need to put your computer on the same network “island” to talk to it. The default management IP for these units is usually 128.221.1.250 or 128.221.1.251.

    So, you need to set your computer’s IP address manually to something in that range.

    • Go to your network settings on your laptop.
    • Find the ethernet adapter and go to its TCP/IP v4 properties.
    • Set the following:
      • IP Address: 128.221.1.249
      • Subnet Mask: 255.255.255.0
      • Gateway: You can leave this blank.

    Save those settings. Now, your computer and the VNX are on the same tiny, private network.
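
    Before launching the utility, it’s worth a quick sanity check that the direct link actually works. Here is a sketch of a ping test against the default service addresses (some arrays won’t answer until they have fully booted, so a non-response isn’t necessarily fatal; the utility does its own discovery anyway):

    import subprocess

    # Ping the VNXe's default service IPs to confirm the direct link works.
    # If neither responds, re-check the cable, the port, and your static IP.
    for addr in ("128.221.1.250", "128.221.1.251"):
        result = subprocess.run(
            ["ping", "-c", "3", addr],  # use "-n", "3" instead of "-c", "3" on Windows
            capture_output=True,
        )
        status = "reachable" if result.returncode == 0 else "no response"
        print(f"{addr}: {status}")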

    3. Run the Connection Utility

    Now, fire up that Connection Utility you downloaded. It will scan the network it’s connected to. Since you’re wired in directly, it should pop right up and discover your VNXe3200.

    4. The Initial Setup

    Once the utility finds your array, it will launch a configuration wizard. This is where you finally get to make the machine your own. The wizard will walk you through:

    • Creating a new admin username and password.
    • Assigning a new static IP address for the management port—this time, use an IP that actually belongs on your main homelab network (e.g., 192.168.1.50).
    • Configuring DNS settings.

    Once you complete the wizard and the array applies the new settings, you’re done with the hard part. You can unplug your laptop, plug the VNX’s management port into your main network switch, and reset your laptop’s network settings back to automatic/DHCP.

    You should now be able to access the VNX’s web interface (Unisphere) by typing the new IP address you just assigned into your web browser.

    Was It Worth It?

    Was that a bit more work than plugging in a Synology NAS? Absolutely. So, why bother?

    Because the point of a homelab isn’t just to have services running; it’s to learn. By going through this process, you’ve just done a basic storage array deployment. You’ve learned about default IPs, management networks, and initialization tools—all things that are common in the enterprise world.

    Plus, you now have a seriously powerful piece of kit to play with for a fraction of its original cost. Sure, it’s probably loud and uses more power, but the capabilities for learning about iSCSI, LUNs, and advanced storage features are fantastic.

    So take a moment to admire the login screen. You earned it. Happy labbing!

  • Why I’d Choose a Tiny PC Over a Huge, Cheap Server

    Why I’d Choose a Tiny PC Over a Huge, Cheap Server

    Thinking about a homelab? We compare cheap, used enterprise servers with modern mini PCs to help you decide which is truly the better value.

    I was scrolling through some tech forums the other day and saw a question that really made me think: “I can get a powerful old server for about $100. Why would I ever buy a mini PC?”

    It’s a great question. On the surface, it seems like a no-brainer. Why pay more for less?

    A decommissioned enterprise server, like a Dell PowerEdge R630, offers a ton of raw power. We’re talking dual CPUs, tons of RAM slots, and enterprise-grade reliability. For a hundred bucks, you can get a machine that was worth thousands just a few years ago. If you’re looking to build a serious homelab for running something like Proxmox with a whole cluster of virtual machines, the appeal is obvious. More cores, more memory, more power.

    But I’ve come to realize the choice isn’t just about the spec sheet. The real story is in the hidden costs.

    The True Cost of “Cheap” Power

    That $100 server is just the beginning. The first thing you’ll notice isn’t the performance, but the noise. These machines were designed for server rooms, where noise doesn’t matter. In your home office or closet? It’s like having a jet engine idling in the next room. Some people don’t mind it, but for many, it’s a deal-breaker.

    Then there’s the power bill. An old server like that can easily pull 100-200 watts at idle. Let’s be generous and say it averages 150W. Running 24/7, that’s over 1,300 kWh a year. Depending on where you live, that could add hundreds of dollars to your annual electricity costs. Suddenly, your “cheap” server isn’t so cheap anymore.

    And let’s not forget the size. A rack server is, well, big. It’s heavy, needs a dedicated space, and isn’t exactly something you can tuck behind your monitor.

    The Quiet Competence of the Mini PC

    This is where the mini PC comes in. For a few hundred dollars, you can get a brand new mini PC that’s smaller than a book. It’s quiet—often completely silent—and sips power, typically using just 10-15 watts at idle.
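
    To put numbers on that gap, here is the back-of-the-napkin math as a short script. The wattages and the electricity rate are assumptions, so plug in your own:

    # Rough annual running-cost comparison: old rack server vs. mini PC.
    # Wattages and the electricity rate are assumptions; adjust to taste.
    HOURS_PER_YEAR = 24 * 365
    RATE_PER_KWH = 0.15  # USD, varies a lot by region

    systems = {
        "used enterprise server": 150,  # average watts, within its 100-200 W idle range
        "modern mini PC": 12,           # average watts, within its 10-15 W idle range
    }

    for name, watts in systems.items():
        kwh_per_year = watts * HOURS_PER_YEAR / 1000
        cost = kwh_per_year * RATE_PER_KWH
        print(f"{name}: {kwh_per_year:.0f} kWh/year, about ${cost:.0f}/year")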

    Sure, it might not have 24 CPU cores or 128GB of RAM. But do you really need it?

    Here’s the thing I’ve learned about my own projects: most of the time, my server is just sitting there, waiting for me to do something. For running a handful of services like Pi-hole, a media server, or Home Assistant, a modern mini PC is more than capable. The processors in these little machines are surprisingly powerful and efficient.

    It really boils down to what you actually need versus what sounds cool.

    • Need a powerful virtual machine cluster? That old server might be the right call, as long as you can handle the noise and power draw.
    • Need to run a few key services reliably and efficiently? A mini PC is probably the smarter, simpler choice.

    I started my homelab journey thinking I needed the most powerful gear I could find. I ended up realizing I valued silence and a lower power bill a lot more. The mini PC won me over not with raw specs, but with its practicality. It just sits there, does its job, and stays out of the way.

    So, while the allure of a $100 server is strong, it’s worth looking past the price tag. Think about the hidden costs of noise, power, and space. Sometimes, the small, quiet, and efficient choice is the better one in the long run.

  • My Accidental Under-Desk Datacenter

    My Accidental Under-Desk Datacenter

    My journey into the world of homelabs started with a tiny Orange Pi. Here’s how I built a small, personal server and what I learned along the way.

    I have a confession to make. I’ve officially fallen down the homelab rabbit hole, and I’m not sure I ever want to get out.

    It all started with a simple idea: to build a tiny, low-power server for a few personal projects. Nothing fancy, just a little box that could handle some basic tasks without sending my electricity bill through the roof.

    After lurking on some online forums, I decided to start small. Really small. I picked up an Orange Pi Zero 3, a single-board computer (SBC) that’s not much bigger than a credit card. It seemed perfect. Inexpensive, tiny, and just enough power for what I had in mind.

    Or so I thought.

    The Best Laid Plans

    My initial plan was straightforward. I wanted to run a few services:

    • Navidrome: To stream my personal music collection, like my own private Spotify.
    • Homebox: To keep my home inventory and digital assets organized.
    • A torrent client: For… well, for downloading Linux ISOs, of course.

    I got the Orange Pi, installed Armbian (a lightweight operating system for these kinds of boards), and felt pretty good about myself. This was easy!

    But then, reality hit. The cheap, generic SD card I was using for the operating system wasn’t cutting it. It was slow, and I worried about its long-term reliability. So, my first “small” upgrade was a high-endurance SanDisk SD card designed for constant read/write operations.

    Next came the storage. The little SBC has no native storage, so I needed a place to keep my music and files. I ended up with an Intel DC S3610 SSD—a 400GB datacenter-grade drive. Yes, you read that right. The SSD cost more than the computer it was plugged into. It felt a little absurd, but I wanted something solid and reliable.

    My tiny, budget-friendly project was already getting less tiny and less budget-friendly. And I was loving every minute of it.

    My Under-Desk Datacenter

    So here it is, my little under-desk lab. It’s a humble setup, but it’s mine.

    • The Brains: Orange Pi Zero 3 (1GB RAM model)
    • The Boot Drive: SanDisk High Endurance 64GB SD Card
    • The Storage: Intel DC S3610 400GB SSD

    It’s a strange little beast, a mix of budget-friendly computing and enterprise-grade hardware, but it works. Most of the time.

    Which brings me to my next lesson: heat.

    These tiny computers sip power, but they can still get hot, especially when they’re working hard. My little Orange Pi, with its tiny heatsink, was not prepared for the 40°C (104°F) ambient temperatures of a summer afternoon. I’d find it would randomly crash right around noon.

    The solution? It’s not elegant, but it works. I have the whole setup plugged into a smart plug. When it freezes, I just cut the power from my phone, wait ten minutes for it to cool down, and turn it back on. Problem solved. For now. A bigger heatsink or a small fan is probably next on the shopping list.
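
    If you wanted to automate the “off, wait, on” ritual, most smart plugs expose a simple HTTP API. Purely a sketch: this assumes a Tasmota-style plug at a made-up address, not whatever my plug actually speaks:

    import time
    import urllib.request

    # Hypothetical Tasmota-style smart plug; replace with your plug's address/API.
    PLUG = "http://192.168.1.60"

    def plug_command(cmd: str) -> None:
        # Tasmota exposes commands via /cm?cmnd=<command>
        urllib.request.urlopen(f"{PLUG}/cm?cmnd=Power%20{cmd}", timeout=5)

    # Cut power, let the board cool down, then bring it back.
    plug_command("Off")
    time.sleep(10 * 60)
    plug_command("On")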

    What I’ve Learned

    Starting this project has been a fantastic learning experience. It’s one thing to read about self-hosting and another thing entirely to build, troubleshoot, and maintain your own server, no matter how small.

    If I could do it all over again, the only thing I’d change is getting the 2GB RAM version of the Orange Pi. That extra gigabyte would provide a little more breathing room for running multiple services.

    But regrets? I have none. I have my own private cloud for music and files, running silently under my desk on just a few watts of power. It’s a testament to how accessible this hobby has become.

    If you’ve ever been curious about setting up your own server, don’t be intimidated. Start small. You don’t need a rack of enterprise gear. All it takes is a tiny board, a bit of patience, and a willingness to fall down a very deep, very rewarding rabbit hole. You might be surprised at what you can build.

  • I Accidentally Created a Pet in My Kitchen

    I Accidentally Created a Pet in My Kitchen

    Thinking about making a sourdough starter? Here’s a real, honest story about the process, the failure, and the moment it finally comes to life.

    It started with a jar.
    A simple, empty glass jar on my kitchen counter. My plan was to create a sourdough starter. You know, like all those people on the internet with their perfect, crusty loaves of bread. It seemed simple enough. Just flour and water. What could go wrong?

    For the first three days, absolutely nothing happened.

    I’d mix the flour and water into a sad, grey paste. I’d look at it. I’d stir it. I’d put the lid on loosely, just like the instructions said. And I’d wait. The next day, I’d throw half of it out and feed it again. More flour, more water. More stirring. More waiting.

    It felt less like baking and more like a weird science experiment. Or maybe a test of my own patience. My husband would ask, “How’s your flour-pet?” and I’d just shrug. It was a lifeless, goopy mess. I was pretty sure I was just wasting flour.

    The First Sign of Life

    On day four, I almost gave up. I walked over to the jar, ready to dump the whole thing and reclaim my counter space. But I decided to give it one last look. I picked it up and tilted it toward the light.

    And there it was. A bubble.

    It was tiny. Almost invisible. But it was undeniably a bubble. It was proof that something was happening in there. All that waiting and feeding—it was actually doing something. Microscopic yeasts and bacteria were waking up and getting to work.

    I’m not going to lie, I got way too excited. I yelled, “It’s alive!” to an empty kitchen. I felt like a mad scientist who had finally succeeded. My little jar of paste wasn’t just paste anymore. It was becoming a living thing.

    What is a Sourdough Starter, Really?

    If you’re not familiar, a sourdough starter is just a community of wild yeast and bacteria. You’re not creating life, but you are capturing it and nurturing it.

    The whole process is surprisingly basic:

    • You mix: Just flour and water. That’s it.
    • You wait: You give the natural yeast in the flour and the air a chance to start multiplying.
    • You feed it: To keep the yeast happy, you have to give them fresh food regularly. This involves discarding a portion of the starter and adding new flour and water.

    This daily ritual becomes a strange, comforting routine. You get to know your starter. You can see when it’s hungry (it looks flat) and when it’s active and happy (it’s bubbly and doubles in size). Mine started smelling a little like vinegar, then a little like ripe apples. It was developing a personality.

    More Than Just Bread

    After a week, my starter was strong and bubbly. I finally used it to bake my first loaf of bread. It wasn’t perfect. It was a little dense, a little lopsided. But it was delicious. It had that signature tangy flavor that you just can’t get from a packet of yeast.

    But the real reward wasn’t just the bread.

    It was the process itself. It taught me a weird lesson in patience. In a world of instant gratification, it’s a strange and wonderful thing to tend to something for a week just to see if it will work. It’s a small, quiet act of creation.

    So if you’ve ever thought about it, I say go for it. All you need is a jar, some flour, and a little bit of patience. You might fail a few times. You might feel a little silly feeding a jar of paste. But one day, you’ll look inside and see that first bubble. And trust me, it’s a pretty great feeling.

  • How Much Power Does Your PC Really Need?

    How Much Power Does Your PC Really Need?

    Building a custom media server and confused about PSU wattage? Follow my journey and learn how to choose the right power supply for your rig.

    I’m a tinkerer at heart. I love taking things apart, putting them back together, and making them my own. So when my old media server, a faithful Dell from 2006, was ready for retirement, I didn’t just want to buy a new box. I wanted to build one.

    This wasn’t just any build. I was upgrading my gaming rig, which meant I had a bunch of powerful, relatively new components ready for a new home. My goal was to create the ultimate media server and disk-ripping machine for my family. We’re talking a serious setup:

    • An i9-10900K processor
    • 64GB of RAM
    • A whole bunch of storage: four 1TB SSDs and four 3TB hard drives
    • And for the main event: seven optical drives for archiving all our old physical media.

    Yes, you read that right. Seven.

    I was designing a completely custom case to house all this hardware. It’s a beast, with dedicated bays for all the drives and even front-mounted PCIe slots for extra ports. But as I was planning everything out, I hit a major roadblock.

    Power.

    The Big Wattage Question

    My plan was to use two power supply units (PSUs). A main 750W unit for the core components and a smaller, older PSU just to handle some of the optical drives.

    Why two? Because when I manually added up the maximum power draw for every single component, the total was scary high. It looked like the 750W PSU wouldn’t be enough on its own. So, the two-PSU plan seemed like a clever, if complicated, solution.

    But then I plugged everything into PCPartPicker, a popular tool for planning builds. It gave me a much, much lower number—well within the range of my single 750W PSU.

    So, who was right? My detailed, “worst-case scenario” math, or the trusted online tool?

    Manual Math vs. Online Calculators

    Here’s the thing about power consumption: it’s not a single, fixed number.

    When you do the math by hand, you’re usually looking at the maximum possible power draw for each part. That’s the amount of power a component could pull if it were running at 100% capacity. Your CPU under a heavy benchmark, your graphics card rendering a complex scene, every drive spinning up at the exact same moment—it’s a perfect storm of power usage.

    Does that happen in the real world? Almost never.

    Your media server isn’t going to be transcoding 4K video, spinning up all seven optical drives, and running a stress test on all eight storage drives simultaneously. Most of the time, many of your components will be idle or close to it, sipping a tiny fraction of their maximum power.

    PCPartPicker and other online calculators know this. Their estimates are based on more realistic, typical usage scenarios. They account for the fact that you won’t be redlining your entire system 24/7. That’s why their numbers are usually lower and often more practical.

    So, What’s the Right Call?

    In my case, the answer was to trust the PCPartPicker estimate, but with a healthy dose of caution.

    While my manual calculation was an overestimate, it highlighted a crucial point: you need headroom. A good rule of thumb is to aim for a PSU that can handle your estimated load at around 50-60% of its total capacity. This is the sweet spot where PSUs are most efficient, generating less heat and running quieter.

    A 750W PSU for an estimated 500W load is a great fit. It provides plenty of power for the current setup and leaves room for future upgrades (like the 3-fan radiator I’m planning to add).
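
    If you want to sanity-check the headroom rule with actual numbers, it only takes a couple of lines. The 500W figure here is the rough calculator estimate, not a measurement:

    # Quick sanity check on the headroom math, using the build's own numbers.
    PSU_WATTS = 750
    ESTIMATED_LOAD_WATTS = 500  # rough PCPartPicker-style estimate for this build

    utilization = ESTIMATED_LOAD_WATTS / PSU_WATTS
    headroom = PSU_WATTS - ESTIMATED_LOAD_WATTS
    print(f"Load sits at {utilization:.0%} of the PSU's rating "
          f"({headroom} W of headroom for future upgrades)")

    That works out to roughly 67% of the PSU’s rating, a touch above the 50-60% sweet spot but still comfortable, with room left over for the planned radiator and fans.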

    Using two PSUs is certainly possible, but it adds a lot of complexity to the wiring and setup. For a build that’s already this custom, simplifying the power delivery is a smart move. Sticking with a single, high-quality PSU is safer, cleaner, and more reliable in the long run.

    A Quick Word on the Case

    Building a custom case from scratch is its own adventure. My design is focused on function, with massive bays for all the drives. The key challenge with any custom case is airflow. With so many components packed in, you have to be mindful of how cool air gets in and hot air gets out.

    My advice if you’re thinking of doing something similar:
    * Plan your airflow: Think about intake and exhaust fans from the very beginning.
    * Cable management is your friend: With this many components, clean wiring isn’t just for looks; it’s crucial for good airflow.
    * Think about the future: What else might you add? Leave space for it now, whether it’s more drives, a bigger cooler, or extra I/O.

    This project has been a deep dive into the nuts and bolts of what makes a computer tick. And that power question? It was a good reminder that sometimes the “by the book” answer isn’t always the most practical one. Now, if you’ll excuse me, I have a case to finish.

  • Proxmox vs. UnRAID: How I Finally Chose My Home Server OS

    Proxmox vs. UnRAID: How I Finally Chose My Home Server OS

    Stuck choosing between Proxmox and UnRAID for your home server? I break down the key differences in plain English to help you pick the right OS for your needs.

    So you’ve got a computer, a list of cool projects, and a desire to build your own home server. Welcome to the club. It’s an exciting first step. But it leads to the first big, head-scratching question: which operating system do you use?

    After a bit of searching, you probably landed on the two most common answers: Proxmox and UnRAID. And now you’re stuck.

    I’ve been there. I spent way too much time staring at forums and watching videos, trying to figure out which path to take. It felt like a massive decision, and in some ways, it is. But the choice is actually simpler than it seems. It all comes down to one question: What is the primary job of your server?

    Let’s break it down.

    My Server “To-Do” List

    First, I had to get clear on what I actually wanted this thing to do. My plan looked a lot like the ones I see people discussing online. I wanted a central place to:

    • Store and serve files: A network-attached storage (NAS) for my important documents and backups.
    • Run a media server: Plex was the goal, so I could stream movies and shows anywhere.
    • Host a smart home hub: Home Assistant was a must-have.
    • Block ads on my network: Using something like Pi-hole or AdGuard Home.
    • Tinker: I wanted the freedom to spin up virtual machines (VMs) and containers to experiment with new software without breaking my main setup.

    This is a classic “all-in-one” server. Both Proxmox and UnRAID can do all of these things. But they approach the job from completely different angles.

    UnRAID: The Storage-First Friend

    The easiest way to think about UnRAID is as a super flexible NAS that also happens to be great at running apps.

    Its killer feature is how it handles hard drives. You can take a bunch of drives of completely different sizes, toss them in a box, and UnRAID will pool them all into one giant storage space. It protects your data using a “parity drive.” This means if one of your data drives fails, you can pop in a new one and rebuild your lost files.

    This is amazing for a media server where you’re constantly adding more storage. Found a cheap 8TB drive on sale? Great, throw it in. Your friend gave you an old 4TB drive? No problem, add it to the pool. You don’t have to worry about matching drive sizes like you do with traditional RAID setups.

    The Bottom Line on UnRAID:
    * Its strength: Unbeatable storage flexibility. Perfect for building a massive, ever-expanding media library.
    * The user experience: It has a very friendly web interface. Setting up apps (as Docker containers) and managing your storage is incredibly straightforward. It’s designed for home users.
    * The catch: It’s not free. You pay a one-time fee based on the number of drives you plan to use.

    If your number one goal is building a NAS, UnRAID is probably your answer. It makes the storage part dead simple.

    Proxmox: The Virtualization Powerhouse

    Proxmox comes at the problem from the opposite direction. It’s a “hypervisor.” Its main job is to run virtual machines and containers, and it does this incredibly well.

    Think of Proxmox as a powerful, bare-metal foundation for all your virtual projects. It’s built on Debian Linux and includes enterprise-grade tools, but it’s completely free and open-source. You can slice up your server’s resources and dedicate them to a Windows VM, a dozen different Linux containers, and anything else you can dream up.

    Storage on Proxmox is more traditional. The most popular choice is ZFS, which is a fantastic file system known for its data integrity features. It can detect and repair data corruption on its own. But it’s also more rigid. With ZFS, you typically create storage pools with drives of the same size. You can’t just toss in random drives like you can with UnRAID. It requires a bit more planning upfront.

    The Bottom Line on Proxmox:
    * Its strength: It’s the king of virtualization. If you want to run a lot of complex VMs and learn how enterprise-level systems work, this is it.
    * The user experience: It has a steeper learning curve. The web interface is powerful but dense. You’ll probably find yourself using the command line to get things done.
    * The catch: Storage is less flexible. It’s powerful and safe, but not as forgiving as UnRAID’s mix-and-match approach.

    So, How Did I Choose?

    After laying it all out, the answer became clear for me. I looked back at my list. While a NAS was on there, my real excitement came from the “tinker” category. I wanted to experiment, to break things in a safe environment, and to have a dozen little projects running at once.

    My primary goal was to learn and experiment with VMs and containers. The NAS part was important, but secondary.

    Because of that, I chose Proxmox.

    It gave me the powerful, flexible foundation for virtualization I was craving. I was willing to accept the more rigid storage requirements and the steeper learning curve because it was the best tool for my main job. I set up my NAS inside a VM on Proxmox, which gives me the best of both worlds.

    Don’t Choose the “Best” One, Choose the Right One

    There is no single “best” home server OS. Anyone who tells you otherwise is probably trying to justify their own choice.

    • If your server’s main purpose in life is to be a NAS that holds your growing media collection, and you want the simplest path to get there, start with UnRAID.
    • If your server’s main purpose is to be a playground for virtual machines and containers, and you’re excited by the idea of learning a more powerful, professional tool, start with Proxmox.

    Forget the hype. Just look at your to-do list, identify your real priority, and pick the tool that’s built for that job. You can’t go wrong. Good luck with your build!

  • Why Does My Computer Fan Get Quiet When I Press On It?

    Why Does My Computer Fan Get Quiet When I Press On It?

    Is your computer fan making a racket? Learn the simple reasons why pressing on it makes it quiet and how to fix the noise for good. No tech skills needed!

    You know that sound. The one you try to ignore, but it just keeps getting louder. It’s the constant, irritating whir of a computer fan that’s decided to throw a party at the worst possible time. I had this happen with an old PC, and it drove me nuts. The fan was so loud, but if I gently pressed on the case, the noise would die down. What gives?

    If this sounds familiar, you’re not alone. A noisy fan is a common problem, and it’s usually a sign that something is a little off-kilter. The good news is you can probably fix it yourself without having to call in a pro or, even worse, buy a new computer.

    Let’s walk through why this happens and what you can do about it.

    So, Why Does Pressing on It Help?

    When you press on your computer case and the fan gets quieter, you’re essentially providing temporary stability. The pressure you apply is likely stopping something from vibrating. Think of it like holding a rattling picture frame against the wall—the noise stops because the vibration stops.

    This little diagnostic trick points to a couple of likely culprits:

    • Worn-Out Fan Bearings: This is the most common cause. Inside the fan’s motor are tiny bearings that allow it to spin smoothly and quietly. Over time, they wear out. When you press on the case, you’re slightly shifting the fan’s position, forcing the bearings into a less-worn groove, which quiets them down for a moment.
    • Something is Loose: The fan itself might not be screwed in tightly. The vibrations from its normal operation can cause a loose fan to rattle against the computer case or its own housing. Your hand pressure dampens that vibration.
    • Dust and Grime: A fan caked in dust and pet hair is an unbalanced fan. This imbalance can cause vibrations and noise. It can also make the fan work harder and spin faster than it needs to, which only adds to the racket.

    How to Actually Fix the Noise

    Okay, so we know why it’s happening. Now for the fun part: fixing it. Before you start, a quick word of caution.

    Important: Unplug your computer from the wall before you open the case. Don’t just shut it down—unplug it completely. Static electricity is the enemy of computer components, so it’s a good idea to ground yourself by touching a metal part of the case before you start poking around inside.

    Here’s a simple plan of attack.

    1. The Clean-Up Crew

    First things first, let’s get rid of the dust.

    • Open the Case: Most desktop computer cases have a side panel that comes off with a couple of thumbscrews on the back.
    • Grab Some Canned Air: This is your best friend for computer cleaning. Do not use a vacuum cleaner! Vacuums can create a static charge that can fry your components.
    • Blow It Out: Hold the fan blades still with one finger (so they don’t spin like crazy and damage the motor) and use short bursts of canned air to blow the dust off the blades and out of the fan housing. Get the dust off the power supply and any other fans you see in there, too.

    Sometimes, a good cleaning is all it takes. If the noise is gone, congratulations! You’re done. If not, on to the next step.

    2. The Tighten-Up

    While the case is still open, check if the fan is securely mounted.

    • Check the Screws: You’ll see a few screws holding the fan to the case. Gently check if they’re tight. If you can turn them easily, they might be the source of your rattle. Just snug them up—don’t overtighten.
    • Check the Fan Housing: Some fans are clipped into a plastic housing. Make sure it’s all snapped together properly.

    If you’ve cleaned and tightened everything and the noise persists, it’s probably time to face the music. The fan itself is likely the problem.

    3. The Last Resort: A New Fan

    If the bearings are shot, no amount of cleaning or tightening will be a permanent fix. Replacing a case fan is surprisingly easy and inexpensive.

    • Identify Your Fan: Look for a sticker on the fan hub. It will usually have the model number and, most importantly, the size (commonly 80mm, 120mm, or 140mm). Note how it connects to the motherboard (usually a small 3- or 4-pin connector).
    • Buy a Replacement: You can find replacement fans online for just a few dollars. It’s a cheap and effective upgrade.
    • Swap It Out: Unplug the old fan from the motherboard, unscrew it from the case, and then simply screw the new one in its place and plug it in.

    It might sound intimidating, but it’s usually a 10-minute job. And the sweet, sweet sound of a quiet computer is totally worth it. You’ll be able to hear yourself think again.

  • Building My All-in-One Homelab in a Single Desktop PC

    Building My All-in-One Homelab in a Single Desktop PC

    Learn how a used HP Z440 workstation was transformed into a powerful, budget-friendly hyperconverged homelab running Proxmox, VyOS, and ZFS.

    From Humble Desktop to All-in-One Server

    It all started with a simple idea: what if I could build a powerful, flexible lab for my networking projects without filling a room with equipment? I’m fascinated by what’s happening in data centers with hyperconvergence—this idea of collapsing the network, compute, and storage into a single, efficient solution.

    So, I decided to try it myself. My goal was to combine everything into one chassis. The foundation for this project? A used HP Z440 workstation. It turns out, these machines are an amazing platform for building out massive compute power on a budget.

    The Hardware Foundation

    The Z440 was pretty barebones when I got it. It was a solid starting point, but I knew it needed some serious upgrades to handle what I had in mind.

    First up was memory. I wanted to run multiple virtual machines without breaking a sweat, so I went big. By combining the existing RAM with four new 16GB sticks, I brought the total up to a whopping 96GB of DDR4 ECC memory. For a homelab of this scale, that’s a fantastic amount of headroom.

    Next was networking, which is my main area of interest. The onboard 1-gigabit ethernet port is fine for management or as a backup, but I needed more speed. I installed an HPE FLR-560 SFP+ card, which gives me a 10-gigabit connection. This card is based on the solid Intel 82599 controller, which is great for virtualization. It connects to a Mikrotik CRS210 switch, which acts as the core of my entire network.

    For storage, I needed a way to connect multiple drives and manage them efficiently. I chose a Dell PERC H310 SAS controller. These are popular because you can “cross-flash” them with firmware from LSI, turning them into a very reliable Host Bus Adapter (HBA). This allows my virtualized storage operating system to talk directly to the drives.

    Here’s the final hardware breakdown:
    * Chassis: HP Z440 Workstation
    * RAM: 96GB DDR4 ECC
    * Networking: 10G SFP+ via HPE FLR-560, plus onboard 1G
    * Storage HBA: Dell PERC H310 (flashed to LSI firmware)
    * Drives: A mix of drives for different purposes—an M.2 NVMe for fast VM booting, a 2TB HDD and 4TB HDD for bulk storage, and a couple of SSDs for other tasks.
    * Cooling: An extra fan pointed directly at the expansion cards. Enterprise gear can run hot, and this simple addition keeps temperatures under 40°C even under heavy load.

    The Z440 has a surprising number of expansion slots, which gave me the flexibility to put all of this together in one box.

    The Software: Making It All Work Together

    Hardware is only half the story. The real magic is in the software architecture that brings it all to life. I chose Proxmox VE as my hypervisor—it’s a powerful and free platform for managing virtual machines.

    A Virtualized Network and Router

    Since I’m a networking person, this is where things get fun. All the network traffic flows through a single VLAN-aware bridge in Proxmox. I have about 20 different VLANs to segment traffic based on trust, purpose, and tenants.

    For routing, I’m running VyOS in a virtual machine. I used to run OPNsense on a separate mini-PC, but I found that managing many networks and VPN tunnels through a web UI became counterproductive. With VyOS, I can manage everything through a command-line interface, which is much faster and more powerful for my needs. I even use BGP to connect my homelab routes with some of my cloud deployments.

    A Virtualized Approach to Storage

    This is one of the parts I’m most proud of. Instead of just installing a NAS operating system like TrueNAS directly on the hardware, I virtualized my storage. I passed the HBA controller directly through to a FreeBSD virtual machine.

    Why? Two main reasons.

    1. Future-Proofing: This design separates my applications from my storage. In the future, if I want to scale up, I can build a dedicated storage server and disk shelf, and my VMs won’t even know the difference. They access the storage over the network (NFS or iSCSI) and are completely blind to the underlying hardware.
    2. Flexibility: I was already using ZFS pools from an old FreeBSD setup. This approach allowed me to import them without any of the conflicts I ran into when trying TrueNAS SCALE. It just works.

    What Am I Actually Running?

    With all this setup, you might be wondering what I’m doing with it. To be honest, it’s more of a lab for networking experiments than a server running a hundred different apps.

    My main workloads are:
    * CDN projects I contribute to.
    * Personal chat relays and Syncthing for file synchronization.
    * A Jellyfin media server (still a work in progress!).

    To keep track of it all, I use Netbox to document all my network prefixes, VLANs, and VMs. At this scale, good documentation isn’t optional; it’s a necessity.
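
    Netbox also has a clean REST API, which helps keep that documentation honest. Purely as an illustration (not part of my actual tooling), the pynetbox library can dump every documented prefix and VLAN for a quick audit; the URL and token are placeholders:

    import pynetbox

    # Placeholder URL and token -- point these at your own Netbox instance.
    nb = pynetbox.api("https://netbox.example.lan", token="0123456789abcdef")

    # Dump every documented prefix and VLAN for a quick eyeball audit.
    for prefix in nb.ipam.prefixes.all():
        print(f"{prefix.prefix:<20} {prefix.description or '(no description)'}")

    for vlan in nb.ipam.vlans.all():
        print(f"VLAN {vlan.vid:<5} {vlan.name}")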

    This project has been a blast. It’s proof that you don’t need a full server rack to explore advanced data center concepts. A well-chosen desktop workstation can be the perfect, budget-friendly heart of a seriously powerful all-in-one homelab.

  • Gem or Garbage? The Lure of Second-Hand Server Gear

    Gem or Garbage? The Lure of Second-Hand Server Gear

    Found a cheap server or enterprise hardware on Facebook Marketplace? Here’s why it might be more e-waste than treasure, and what to look out for.

    You know that late-night scroll. You’re not looking for anything specific. You’re just browsing Facebook Marketplace, Zillow, or eBay, seeing what’s out there. It’s digital window shopping.

    Most of the time, it’s just a blur of used couches and questionable car mods. But every once in a while, you see it. A deal. A real, head-turning, “wait, what?” kind of deal.

    That happened to me the other day. I stumbled upon a listing for a full-sized server rack. The kind of thing that runs a whole office building. And it wasn’t just the rack. It was loaded with six power supply units, or PSUs. And these weren’t your average computer parts. Each one was rated for 2,700 watts.

    My first thought was, “Wow, what a beast.” My second thought was a quick, back-of-the-napkin calculation.

    2,700 watts x 6 = 16,200 watts.

    Sixteen. Thousand. Watts.

    For a moment, I let myself dream. I could run anything on this rig. It was the kind of hardware that could handle some serious computing. The price was low, and the temptation was high. It felt like finding a retired race car for the price of a used sedan. Sure, it’s not practical, but look at all that power!

    But then, reality started to creep in.

    The Harsh Reality of “Pro” Hardware

    This is the moment where the dream of a bargain meets the reality of hidden costs. That server rack wasn’t just a piece of hardware; it was a commitment. A commitment to noise, heat, and an electric bill that would make my eyes water.

    Let’s talk about that power draw again. A standard wall outlet in a U.S. home runs on a 15-amp circuit, which provides about 1,800 watts. That single server rack, running at full tilt, could demand the power of nine separate household circuits.
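
    Here is that napkin math written out, using assumed US residential figures:

    # Why 16 kW of power supplies and a house don't mix (US residential assumptions).
    PSU_WATTS = 2700
    PSU_COUNT = 6
    CIRCUIT_VOLTS = 120
    CIRCUIT_AMPS = 15
    RATE_PER_KWH = 0.15  # USD, assumed

    total_watts = PSU_WATTS * PSU_COUNT              # 16,200 W
    circuit_watts = CIRCUIT_VOLTS * CIRCUIT_AMPS     # 1,800 W per standard outlet circuit
    circuits_needed = total_watts / circuit_watts    # 9 circuits at full tilt

    monthly_kwh = total_watts / 1000 * 24 * 30
    print(f"{total_watts} W total -> {circuits_needed:.0f} standard 15 A circuits")
    print(f"Run flat out 24/7: {monthly_kwh:,.0f} kWh/month, about ${monthly_kwh * RATE_PER_KWH:,.0f}")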

    Plugging this thing in wouldn’t just trip a breaker. It would be a declaration of war on my home’s electrical system. And even if I had a dedicated 240v circuit, like the one for an electric dryer, the monthly cost would be staggering. Enterprise hardware is built for performance, not efficiency.

    Then there’s the noise.

    If you’ve never been in a server room, it’s hard to describe the sound. It’s not a gentle hum. It’s a constant, high-pitched scream of fans working tirelessly to keep everything from melting. Those six 2,700-watt PSUs would make it sound like a jet was preparing for takeoff in my basement. It’s not something you can just ignore or get used to. It’s an actively hostile sound.

    And all that power and noise is really just a side effect of the main event: heat. Every single one of those 16,200 watts is eventually converted into heat. Running this rack would turn any normal room into a sauna in minutes. You don’t just need the rack; you need a dedicated cooling system to keep the rack—and the room it’s in—from overheating.

    So, Is It a Gem or Just E-Waste?

    This is the question, isn’t it? When does a piece of powerful, second-hand tech stop being a diamond in the rough and start being a piece of junk you’re paying to haul away?

    For 99% of people, this server rack is a trap. It’s a classic example of something that’s cheap to acquire but expensive to own. The “deal” isn’t in the purchase price; it’s in the operational cost.

    • For the average home lab enthusiast: It’s complete overkill. You can run a fantastic Plex server, a network storage system, and a dozen other services on a modern, low-power machine that sips electricity.
    • For the tinkerer: Maybe there’s some value in stripping it for parts? The rack itself is useful. But the main components, the PSUs, are the most impractical part of the whole package.

    The only person who could maybe, maybe justify it is someone with a dedicated, soundproofed, and separately-wired workshop who needs to do some serious number crunching, like training AI models or some other high-performance computing task. But even then, there are probably newer, more efficient ways to do it.

    So I closed the tab. I walked away. It’s fun to look at the monster truck, but you probably shouldn’t buy it to get groceries.

    The real lesson here is a simple one. The next time you see an incredible deal on professional or enterprise gear, take a second. Look past the shiny specs and the low price tag. Ask yourself about the hidden costs: the power, the noise, the heat, and the sheer practicality of it. Sometimes the best deal is the one you let someone else have.