Author: homenode

  • My Favorite Home Server Trick: The Mini PC + NAS Combo

    Want a powerful home server on a budget? Learn how to combine a new mini PC with your old computer as a NAS for the perfect, efficient setup.

    I have an old desktop computer sitting in my closet. You probably do, too. For me, it’s an old Dell tower that saw me through college. It’s slow, it’s big, and it feels wasteful to just throw it away. For a while, it was my makeshift home server, running Plex and a few other things. But it was noisy, power-hungry, and frankly, not very good at it anymore.

    Then I started looking at mini PCs. These little boxes are quiet, sip electricity, and pack a surprising punch. This led me to a thought: what if I could get the best of both worlds? What if I used a new mini PC for all the smart stuff, but kept my old computer around for one simple job: holding all my files?

    It turns out, this is a fantastic idea. You can absolutely pair a zippy new mini PC with a clunky old desktop, and it might just be the most budget-friendly, common-sense way to build a powerful home server.

    The Big Idea: Splitting Brains and Brawn

    Think of it like this: your home server does two main things. It runs applications (the “brains”), and it stores files (the “brawn”).

    • The Brains: This is stuff like Plex or Jellyfin (for your movie library), game servers, or backup software. These tasks need a decent processor (CPU) to work well, especially Plex when it has to transcode a video file for your phone or tablet. This is the perfect job for a modern mini PC.
    • The Brawn: This is just storage. It’s a bunch of hard drives holding your movie and music files. This task doesn’t require a fast processor at all. It just needs to be reliable and accessible. This is the perfect new job for your old computer.

    By splitting these jobs between two machines, you get some serious benefits. Your 24/7 server (the mini PC) uses way less electricity, and you get a much faster, smoother experience for your apps. All while reusing hardware you already own.
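    To put rough numbers on those electricity savings, here's a quick back-of-the-envelope sketch. The wattages (an old tower idling around 80 W, a mini PC around 10 W) and the $0.15/kWh rate are illustrative assumptions, not measurements — plug in your own:

```python
# Rough annual electricity cost: running an old tower 24/7 vs. a mini PC 24/7.
# Wattages and the $0.15/kWh rate are assumptions for illustration only.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15

def annual_cost(watts: float) -> float:
    """Cost of running a device 24/7 for a year at the assumed rate."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

old_tower = annual_cost(80)   # aging desktop idling around 80 W (assumed)
mini_pc = annual_cost(10)     # modern mini PC idling around 10 W (assumed)

print(f"Old tower 24/7: ${old_tower:.2f}/yr")
print(f"Mini PC 24/7:   ${mini_pc:.2f}/yr")
print(f"Savings:        ${old_tower - mini_pc:.2f}/yr")
```

    And that's before you factor in that the old PC, now demoted to file duty, doesn't even need to be on all the time.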

    So, How Does This Actually Work?

    The setup is simpler than you might think. We’re essentially turning the old PC into a Network Attached Storage (NAS) device. A NAS is just a computer whose only job is to serve files to other devices on your home network.

    When you want to watch a movie, the process looks like this:

    1. You open Plex on your TV.
    2. Your TV tells your mini PC (where Plex is running) to play the file.
    3. The mini PC says, “Okay, I need that file.” It then requests it from your old PC over your home network.
    4. The old PC finds the movie file on its hard drive and sends it back to the mini PC.
    5. The mini PC streams it to your TV.

    The key to making this fast and seamless is your home network. You absolutely want to have both computers plugged directly into your router with Ethernet cables. Don’t try to do this over Wi-Fi. A standard gigabit wired network is more than fast enough to handle even high-quality 4K movie files.
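    You can sanity-check that claim yourself. Here's a tiny sketch, where the 90% usable-throughput figure and the 80 Mbps stream bitrate are rough assumptions (80 Mbps is on the high end, e.g. a 4K Blu-ray remux):

```python
# Does gigabit Ethernet have headroom for high-bitrate 4K streams?
# The overhead fraction and stream bitrate below are rough assumptions.

GIGABIT_MBPS = 1000        # nominal link rate
USABLE_FRACTION = 0.9      # real-world throughput after protocol overhead (assumed)
STREAM_MBPS = 80           # heavy 4K remux bitrate (assumed, high end)

usable = GIGABIT_MBPS * USABLE_FRACTION
concurrent_streams = int(usable // STREAM_MBPS)

print(f"Usable throughput: ~{usable:.0f} Mbps")
print(f"Simultaneous worst-case 4K streams: ~{concurrent_streams}")
```

    Even with pessimistic numbers, a wired gigabit link can carry many simultaneous streams; Wi-Fi is where that headroom evaporates.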

    Your Two Paths to a DIY NAS

    Okay, you’re sold on the idea. How do you actually turn that old machine into a file-hoarding workhorse? You’ve got two main options.

    1. The Super Simple Way: A Windows Share

    This is the path of least resistance. You don’t have to install any new software on your old PC.

    • Just gather all your media files into a single main folder (e.g., D:\Media).
    • Right-click on that folder, go to “Properties,” then the “Sharing” tab, and set it up as a network share.
    • That’s it. Your mini PC can now see and access this folder over the network.

    2. The “Proper” Way: A Dedicated NAS Operating System

    This approach takes a little more work but gives you a much more powerful and robust solution. You’ll wipe the old computer and install a free operating system designed specifically for being a NAS.

    Two popular choices are TrueNAS CORE and OpenMediaVault (OMV).

    These tools give you a clean web interface to manage your drives, check on their health, and set up advanced features (like RAID, which protects you from a single hard drive failure). It’s the more professional route, and it’s what I’d recommend if you’re feeling a bit adventurous.

    Putting It All Together

    Once you have your old PC set up as a NAS (using either method), the final step is to make that storage easily accessible on your mini PC.

    On your mini PC (assuming it’s running Windows), you’ll want to “map” the network drive. This makes the storage on your old PC show up as if it were a local drive. You’d open File Explorer, go to “This PC,” and find the “Map network drive” option. You can assign it a letter like M: (for Media), and point it to the network share you created.
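    If it helps to see the paths concretely, here's a small sketch of the two ways the same storage appears. “OLDPC” and “Media” are placeholder names for your old PC and its share:

```python
# Two views of the same storage after mapping a network drive on Windows.
# "OLDPC", "Media", and "M:" are placeholder names for illustration.
from pathlib import PureWindowsPath

share = PureWindowsPath(r"\\OLDPC\Media")   # the UNC path to the old PC's share
mapped = PureWindowsPath(r"M:\Movies")      # the same storage, seen via the M: drive

# Plex can be pointed at either form; the drive letter is just a convenience.
print(share / "Movies")   # the UNC path to the Movies folder
print(mapped)             # the mapped-drive path to the same folder
```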

    Now, just tell your applications where to find everything. In Plex, you’ll add a new library and point it to your shiny new M: drive.

    And you’re done. You’ve successfully upgraded your home server, saved a bunch of money, and given that old computer a useful new lease on life. It’s a win-win-win.

  • So You Have Some Old Servers. Now What?

    Feeling curious about what’s possible with your own servers? This friendly guide helps you start your home lab journey and find cool, practical projects.

    So you’ve got that feeling.
    That little itch in the back of your brain that says, “I know how to use this stuff, but I don’t really get how it all works.”

    Maybe you’ve set up a home network. Maybe you’ve even got a couple of old servers from work sitting in a corner, waiting for a purpose. You see people online building these incredible digital playgrounds in their basements—home labs—and you want in. But you’re stuck on the first step: Where do you even begin?

    It’s easy to feel overwhelmed. You’re not just trying to follow a tutorial; you want to grasp the fundamentals. You want to move past copying and pasting commands and start truly understanding the logic behind them.

    If that sounds familiar, you’re in the right place. Let’s talk about how to get started on this journey.

    First, Forget “Mastery”

    The goal isn’t to become a master of everything overnight. That’s impossible and a sure path to burnout. The real goal is to build a foundation, one brick at a time. The best way to do that is to stop thinking about “learning networking” and start thinking about solving a problem.

    Your first project shouldn’t be some grand, abstract goal. It should be something tangible that makes your own life a little better or more interesting.

    Think about it:
    • Are you tired of ads on every device in your house?
    • Do you wish you had your own private cloud for files and photos, separate from Google or Apple?
    • Want to host a private Minecraft server for you and your friends?
    • Curious about automating your smart home devices?

    Pick one. That’s your starting point.

    Your Digital Workbench: The Hypervisor

    You mentioned having some servers, which is awesome. Old enterprise gear is perfect for a home lab. It’s built to run 24/7 and gives you tons of room to experiment.

    So, what’s the best way to set them up? My advice is to start with a hypervisor.

    Don’t let the name scare you. A hypervisor is just a lightweight operating system that’s designed to run other operating systems. Think of it like a digital workbench. Instead of installing one OS on your server, you install a hypervisor like Proxmox (it’s free and amazing for this).

    Once Proxmox is running, you can create Virtual Machines (VMs) and Containers (LXCs) with just a few clicks.

    • A Virtual Machine (VM) is exactly what it sounds like: a full, independent computer running inside your server. It has its own virtual hard drive, RAM, and network connection. This is perfect for when you need to run a completely separate OS, like Windows inside your Linux-based server.
    • A Container (LXC) is a more lightweight solution. Instead of virtualizing a whole computer, containers just wall off a piece of the main OS for a specific app. They’re faster, use fewer resources, and are perfect for running single applications like a web server or a database.

    Using a hypervisor is the key to a flexible lab. It lets you spin up a new VM to test an idea and tear it down if it doesn’t work, all without messing up your other projects. It’s the ultimate sandbox.

    Where to Find Cool Projects

    Okay, so you’ve picked a goal and set up your Proxmox workbench. Now for the fun part: finding amazing projects and the guides to get them running.

    The internet is overflowing with ideas, but here are a few places I always come back to:

    • Reddit Communities: The absolute best places for inspiration are subreddits like r/homelab and r/selfhosted. You’ll see what other people are building, the problems they’re running into, and the creative solutions they find. It’s a goldmine of real-world experience. r/datahoarder is another great one for storage nerds.
    • “Awesome” Lists on GitHub: Search GitHub for “awesome self-hosted” or “awesome-sysadmin.” You’ll find curated lists of incredible open-source software for almost any task you can imagine.
    • YouTube Tinkerers: Find creators who are actually doing the things you want to do. Channels like Techno Tim and Jeff Geerling (who you might already know) offer fantastic, step-by-step tutorials for specific projects.

    A Few Starter Project Ideas:

    • Block Ads Network-Wide: Install Pi-hole or AdGuard Home in a container. It’s a relatively simple project that provides an immediate, noticeable benefit.
    • Build Your Own Cloud: Set up Nextcloud. It’s like having your own private Google Drive, complete with photo backups, calendars, and contacts.
    • Stream Your Media: Install Plex or Jellyfin. You can load them up with your movies and TV shows and stream them to any device, anywhere.
    • Monitor Your Services: Run Uptime Kuma to get a cool dashboard that shows you if all your other services are online.

    The Real Secret: Just Start

    That’s it. That’s the secret.

    The difference between the person who dreams of a home lab and the person who has one is that the second person just started. They picked one small project, probably failed a few times, broke things, and learned something in the process.

    Your lab is a lab. It’s meant for experiments. Don’t be afraid to break it. With a hypervisor like Proxmox, the cost of failure is just a few clicks to delete a VM and start fresh. Every mistake is a lesson. Write down what you did, what went wrong, and what you learned.

    You don’t need the newest gear. You don’t need to know everything. You just need a little curiosity and the willingness to try. So go ahead, fire up that old server, and build something cool.

  • My Server Froze and Blamed Me: Cracking the NMI Code

    Facing the ‘Unrecoverable System Error (NMI)’ on your HP ProLiant server? Here’s a step-by-step guide to diagnosing and fixing this frustrating issue.

    I was so excited. I’d just gotten my hands on an HP ProLiant MicroServer Gen8. If you’re a home lab enthusiast, you know this little box is a legend. It’s compact, capable, and the perfect foundation for building a new setup. My plan was to run Debian 12 on it, maybe for a NAS, maybe for some containers. The possibilities felt endless.

    I got everything set up, installed a fresh copy of Debian, and let it idle. For the first day, everything was perfect. And then, it wasn’t.

    I walked over to the server to find the screen frozen. The cursor wasn’t blinking. Nothing. Even worse, the “Health LED” on the front was blinking a menacing red.

    My heart sank. The red light of doom.

    Chasing Ghosts in the Logs

    Thankfully, HP servers have iLO (Integrated Lights-Out), a fantastic tool that lets you manage the server remotely, even when it’s powered off or frozen. I logged into the iLO web interface and checked the “Integrated Management Log.”

    And there it was, in black and white:

    Unrecoverable System Error (NMI) has occurred.

    Right below it, another entry:

    User Initiated NMI Switch

    NMI stands for Non-Maskable Interrupt. In simple terms, it’s a hardware error so critical that the system can’t just ignore it. It’s the equivalent of your computer’s hardware screaming, “STOP EVERYTHING! Something is seriously wrong.”

    The “User Initiated” part was just weird. I certainly hadn’t pressed any magic NMI button (which, by the way, is a real thing on some servers for forcing a crash dump). It felt like the server was freezing and then blaming me for it.

    The First Suspect

    My first thought went to the newest component I’d added: a cheap SAS card I’d bought from AliExpress. It was an Inspur 9211-8i, which I was hoping to use for connecting a bunch of large hard drives. It seemed like the most likely culprit.

    So, I pulled the card out.

    I reinstalled a fresh copy of Debian 12 on an SSD connected to the server’s built-in ports and let it run. For about 24 hours, things were quiet. I thought I’d fixed it.

    Then, the red light started blinking again. Same freeze. Same NMI error in the logs.

    It wasn’t the SAS card. The problem was deeper.

    What Could It Be? A Process of Elimination

    This is the part of any troubleshooting process that can be either fun or maddening. You have to work through the possibilities, one by one.

    Here was my thinking:

    • It happens with the OS running. I noticed the server was stable if it was just sitting in the BIOS or stuck in a boot loop without an OS drive. The NMI error only happened after Debian was up and running for a day or two. This meant it was likely an issue with the OS interacting with a piece of faulty or incompatible hardware.
    • It’s not the storage controller. I’d ruled out the add-in SAS card, and the problem still happened with an SSD on the internal controller. While a bad SAS cable could theoretically cause issues, it felt less likely to be the root cause of such a critical system halt.
    • So, what’s left? The core components. The CPU and the RAM.

    The CPU was a Xeon E3-1220L V2, a solid processor for this machine. While not impossible, a CPU failure is relatively rare.

    That left the RAM. I was using two sticks of DDR3 ECC memory. The specs were correct, but it was non-HP branded RAM. And with servers, especially older ones like the Gen8, that can be a big deal. They can be incredibly picky about memory. Even if the specs—ECC, speed, voltage—all match, a tiny incompatibility in the module’s design can cause bizarre, intermittent errors just like this.

    The Path to a Solution

    An NMI error is almost always a hardware problem. While a software or driver bug can trigger it, the root cause lies in the physical components. Based on my experience, here’s the checklist I’d recommend to anyone facing this exact problem.

    1. Test Your Memory First. This is the number one suspect. Don’t just assume your RAM is good because it seems to work for a while. Download MemTest86+ and let it run for a full 24 hours. Intermittent RAM faults often don’t show up in a quick 1-hour test. If you can, beg, borrow, or buy a single stick of official HP-branded RAM for this server and see if the system is stable. If it is, you’ve found your culprit.
    2. Strip It Down. Go back to basics. Disconnect everything that isn’t absolutely essential. Run the server with just the CPU, one stick of RAM, and your boot drive. If the system is stable for a few days, start adding components back one at a time, with a few days of testing in between each addition.
    3. Check Your Temperatures. Use iLO to keep an eye on the system and CPU temperatures. Overheating can absolutely cause the system to trigger a protective NMI and shut down. Make sure your fans are spinning and the heatsinks are free of dust.
    4. Reseat Everything. It sounds too simple, but it works surprisingly often. Power down the server, unplug it, and physically remove and reseat the CPU, the RAM, and all power and data cables. A slightly loose connection can cause chaos.

    For me, the journey was a reminder that when you’re building with used or non-certified hardware, you’re sometimes in for an adventure. These cryptic errors aren’t just a roadblock; they’re a puzzle. And while it was frustrating, solving it piece by piece is what makes running a home lab so rewarding. The server isn’t just a tool—it’s a project.

  • Can You Plug a Power Strip Into a UPS? Let’s Talk About It.

    Ever wondered if you can plug a surge protector into a UPS? Discover the real reasons why it’s a bad idea and what you should do instead to protect your tech.

    So you finally did it. You bought an Uninterruptible Power Supply (UPS) to keep your precious tech safe from power flickers and surges. Smart move.

    You unbox it, plug it in, and start connecting your gear. Your computer gets an outlet. Your monitor gets another. And then… you’re out of plugs. But you still have your router, your speakers, and a phone charger to connect.

    Sitting right there in your box of spare cables is a trusty power strip. It seems like the perfect solution. Just plug the strip into the UPS, and you’ve got five or six new outlets. Problem solved, right?

    Well, maybe not. If you’ve ever Googled this, you’ve probably seen a dozen forum posts all screaming the same thing: Don’t plug a surge protector into a UPS!

    But hardly anyone explains why. Most of the warnings are vague, talking about overloading or just saying “it’s bad.” I get it, that can be super frustrating when you’re just trying to understand your gear. So let’s actually break down the real reasons.

    It’s Not Just About Overloading

    Most people assume the big risk is plugging too much stuff into the power strip and overloading the UPS. And yes, that’s definitely a risk. A UPS is rated for a specific maximum load, measured in watts or volt-amperes (VA). A power strip makes it incredibly easy to plug in a bunch of devices that, when running all at once, could exceed that limit.
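    Doing that math takes two minutes. Here's a sketch where the 900 VA rating, the 0.6 power factor, and every device wattage are illustrative assumptions — check your UPS label and your own gear:

```python
# Quick load check before plugging gear into a UPS.
# The rating, power factor, and device draws are all illustrative assumptions.

UPS_RATING_VA = 900
POWER_FACTOR = 0.6                  # many consumer UPSes: watts = VA * power factor
ups_watts = UPS_RATING_VA * POWER_FACTOR

devices = {                         # hypothetical gear and rough draws in watts
    "desktop PC": 250,
    "monitor": 30,
    "router": 10,
    "modem": 8,
    "speakers": 15,
}

total = sum(devices.values())
print(f"Total draw: {total} W of {ups_watts:.0f} W available")
print("OK" if total <= ups_watts else "OVERLOADED")
```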

    But let’s assume you’re careful. You’ve done the math, and you know your total power draw is well under the limit. You’re in the clear then, right?

    Not quite. The main issue isn’t about the amount of power; it’s about how the power is managed.

    The Real Problem: Conflicting Protection

    Here’s the thing: most modern UPS units are more than just a big battery. They also have high-quality power filtering and surge protection built right in. They work constantly to provide clean, stable electricity to your devices.

    Surge-protected power strips also have their own protection circuitry. It’s usually a simple component called a Metal Oxide Varistor (MOV). An MOV’s job is to sit there and watch the voltage. If it sees a sudden, massive voltage spike (like from a lightning strike), it diverts that dangerous excess energy to the ground wire, saving your electronics.

    When you plug one into the other, you create a situation where these two systems can start fighting each other.

    Think of it like this: your UPS is trying to create a perfect, clean bubble of power. The power strip’s surge protector is designed to pop any voltage spikes. But because the UPS is already conditioning the power, its sensitive circuits can misinterpret the power strip’s MOV as a power anomaly or a wiring fault. This can cause a few problems:

    • It can wear out your UPS battery. The UPS might think something is wrong with the incoming power and switch to its battery unnecessarily, even when there’s no outage.
    • It can reduce the overall protection. In a real surge event, the two systems can interfere. The power strip might try to divert the surge, but the UPS might not react as it should because the strip gets in the way. In essence, they can make each other less effective.
    • It can send dirty power to your devices. The way a simple MOV works can introduce noise and fluctuations into the power line, which is the exact opposite of what your UPS is trying to achieve.

    You’ll Probably Void Your Warranty

    If the technical explanation isn’t convincing enough, here’s a much simpler one: virtually every UPS manufacturer will void your warranty if you do this.

    Companies like APC, CyberPower, and Tripp Lite are very clear in their manuals. They state that daisy-chaining surge protectors or power strips from the battery-backed outlets of a UPS is not supported. Doing so can also void their connected equipment protection policy: the guarantee that they’ll replace your gear if it gets fried while connected to their UPS. If you’ve got a power strip plugged in, that policy is likely null and void.

    So, What’s the Right Solution?

    Okay, so you can’t use your surge protector. But you still need more outlets. What are you supposed to do?

    Luckily, the solution is simple.

    Use a basic, non-surge-protected power strip.

    These are often called “power distribution units” or PDUs in the data center world, but you can find simple versions for home use. They are essentially just extension cords with a bunch of extra outlets. They have no internal filtering, no MOVs, nothing. They just pass power through.

    This is the perfect solution. You let the UPS handle all the surge protection and battery backup, and the PDU simply gives you the extra outlets you need for low-power devices like your router, modem, or chargers.

    So, to recap:

    • Don’t plug a surge-protected power strip into the battery side of a UPS.
    • Do let your UPS handle the protection.
    • Do use a basic, non-surge-protected PDU if you need more outlets.

    It’s a small distinction, but it makes a big difference. It ensures your equipment is properly protected, keeps your warranty intact, and lets your expensive UPS do its job without any interference.

  • Older vs. Newer CPUs: A Surprising Look at Home Server Power Draw

    Thinking of using an older Xeon CPU for your home server? We compare a Broadwell Xeon vs. a modern Coffee Lake chip to see which is more power-efficient at idle.

    I spend a lot of time thinking about my home server. Maybe too much time. It’s a fun hobby, and like any good hobby, it sends you down some interesting rabbit holes. Recently, I got stuck on a question about power consumption.

    I have a pretty decent little server box running right now. It’s built around a 9th-gen Intel chip—a Coffee Lake processor with a good balance of cores and speed. It handles my TrueNAS setup and a couple of virtual machines without breaking a sweat.

    But I’m a tinkerer. And I have a spare motherboard for older, high-end server hardware sitting on a shelf. It’s an X99 board, which uses the LGA 2011-3 socket. This got me thinking. Could I swap my modern setup for an older, beefier server CPU? I was looking at Xeon processors from the “Broadwell” era, some with a massive 18 cores.

    The trade-off seemed simple enough: I’d lose some single-core speed, but I’d gain a ton of cores. For a server running multiple VMs, that sounds great, right?

    But then I hit the real question: what about power draw when the server is just… sitting there?

    The Big Deal About Idle Power

    For a machine that’s on 24/7, the power it uses while doing nothing is actually a huge deal. Most of the time, my server isn’t transcoding video or running complex calculations. It’s idling, waiting for a request.

    My goal was to keep the whole system humming along at under 50 watts, and ideally, under 40 watts at idle. This is where the comparison between a modern consumer chip (Coffee Lake) and an older server chip (Broadwell) gets really interesting.

    On paper, more cores running at a lower clock speed might seem efficient. But technology has come a long way.

    The Allure of Old Server Gear

    First, let’s admit it: building with used enterprise hardware is cool. You can get CPUs that once cost thousands of dollars for a tiny fraction of the price. The idea of having a 16- or 18-core beast humming away in a closet is tempting. For virtualization, more cores are almost always better.

    So, the idea of swapping my 8-core Coffee Lake chip for something like a Xeon E5-2690 v4 felt like a massive upgrade in multitasking power. I was fine with the single-core performance being a bit worse. My server tasks are spread out, not dependent on one super-fast core.

    But could this powerhouse system sip power gently when it wasn’t busy?

    Modern Tech Has a Secret Weapon

    Here’s the thing about newer CPU generations like Coffee Lake. The improvements aren’t just about raw speed. A huge amount of engineering has gone into making them incredibly efficient at doing nothing.

    Modern CPUs have very sophisticated “C-states,” which are sleep states that let them power down parts of the chip when they aren’t needed. They can drop into a very deep sleep almost instantly between keystrokes or mouse movements.

    Older platforms are just… not as good at this. And it’s not just the CPU. The motherboard chipset is a huge factor. The X99 chipset, which the Broadwell-E Xeons use, is known for being a bit of a power hog itself. It was designed for performance first, at a time when idle efficiency wasn’t the top priority for servers.

    So, the fight isn’t just “Broadwell CPU vs. Coffee Lake CPU.” It’s “Broadwell CPU + X99 Platform vs. Coffee Lake CPU + Modern Platform.”

    So, What’s the Likely Outcome?

    After digging around and looking at what other home labbers have experienced, a clear picture started to form.

    • The Coffee Lake System: A system with a 9th-gen Intel CPU, a modern motherboard, and a couple of drives can easily idle in the 20-35 watt range. That’s incredibly low.
    • The Broadwell-E System: Getting an X99-based system with a high-core-count Xeon to idle below 50 watts is a real challenge. It’s not impossible, but it’s tough. Most setups seem to idle in the 50-80 watt range. That extra 30-40 watts of idle power draw adds up quickly over a year of 24/7 operation.

    The higher core count of the Broadwell Xeon is amazing, but you pay for it with a much higher power “floor.” The system just uses more energy to exist, even before you ask it to do any real work.
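    That power floor is easy to price out. A minimal sketch, assuming a $0.15/kWh electricity rate (swap in your local rate):

```python
# Yearly cost of an extra 30-40 W of idle draw, running 24/7.
# The $0.15/kWh rate is an assumption; substitute your own.

RATE_USD_PER_KWH = 0.15
HOURS_PER_YEAR = 24 * 365

def yearly_cost_of_extra_watts(extra_w: float) -> float:
    """Annual cost of a constant extra power draw at the assumed rate."""
    return extra_w * HOURS_PER_YEAR / 1000 * RATE_USD_PER_KWH

for extra in (30, 40):
    print(f"+{extra} W idle -> ${yearly_cost_of_extra_watts(extra):.2f}/yr")
```

    Call it roughly $40-50 a year, every year, just for the privilege of owning cores you rarely use.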

    My Final Verdict: Sticking With Modern

    In the end, I decided to stick with my current Coffee Lake setup.

    While the siren song of 18 cores was tempting, the practical reality of the higher idle power was a dealbreaker for me. My server spends 95% of its life idling. That power floor matters more than the performance ceiling. For my use case—a storage server with a couple of lightweight VMs—the modern chip is simply the more sensible, and cheaper, choice to run long-term.

    If I had a workload that constantly hammered all the cores, the math might be different. But for a typical home server, it turns out that newer, even with fewer cores, is often the smarter path. It’s a great reminder that progress isn’t always about the biggest numbers, but often about the quiet efficiency happening in the background.

  • The Hunt for the Perfect Short-Depth DAS: DIY or Buy?

    Looking for a short-depth rackmount JBOD or DAS for your home lab? Explore the pros and cons of buying a pre-built unit versus building your own.

    I found myself nodding along to a question someone asked online the other day. It’s a problem I’ve run into myself, and it’s one of those things that seems like it should be simple, but it really isn’t.

    The gist was this: “Does a short-depth, rack-mountable case for a bunch of hard drives even exist?”

    They were looking for something that could hold around 16 hard drives but wouldn’t stick out the back of a shallow server rack—those 18-inch deep ones common for home networking or audio gear. It’s a classic home lab dilemma. You want to add a ton of storage, but you don’t have a massive, enterprise-grade rack that can fit a 30-inch deep server chassis.

    So, you start searching. And you quickly discover two things:
    1. The options are surprisingly limited.
    2. The options you do find are outrageously expensive.

    Let’s talk about why this is such a tough spot to be in and what you can actually do about it.

    The Problem with “Off-the-Shelf”

    When you look for a pre-built solution, often called a DAS (Direct Attached Storage) or JBOD (Just a Bunch of Disks) enclosure, you’ll see names like QNAP, Synology, or TerraMaster. They make some beautiful, high-quality gear.

    But they are pricey. Often, shockingly so.

    Why? Because you’re not just buying a metal box. These units are turnkey solutions. They come with their own redundant power supplies, controller cards, cooling systems, and the support and warranty to back it all up. They are designed for small businesses or prosumers who need a plug-and-play system that just works. You’re paying for convenience and reliability, not just the hardware itself.

    For a lot of us building a home lab, paying thousands for an empty enclosure just feels wrong. We’re used to getting our hands dirty, finding deals on used parts, and building things ourselves. Which leads us to the other path.

    The DIY Path: Building Your Own Short-Depth DAS

    If the pre-built options feel out of reach, you’re not out of luck. You’re just entering DIY territory. Honestly, this is where the fun begins, and it’s almost always cheaper.

    Building your own short-depth DAS isn’t as intimidating as it sounds. It breaks down into a few key components.

    1. The Chassis

    This is the most important piece of the puzzle. You need a rack-mountable server chassis that is 18 inches deep or less. This is the “short-depth” part. You’ll also want one that has a lot of drive bays—like 8, 12, or even 16 hot-swap bays for 3.5″ drives.

    You’ll have to do some digging here. Look for 3U or 4U cases, as they have the height needed for multiple rows of drives. Sites like ServerCase.com or even eBay and AliExpress can be gold mines. Search for terms like “short-depth server chassis” or “4U ITX case.” Just be prepared to triple-check the dimensions before you buy.

    2. The Backplane

    A hot-swap chassis will come with a backplane. This is a circuit board at the back of the drive cage that all the hard drives plug into. It simplifies everything. Instead of running 16 separate power and data cables, you just have a few main connectors on the backplane. It’s the magic that makes a clean, multi-drive setup possible. Most use standard SATA connectors or, more commonly, SAS connectors (like SFF-8087 or SFF-8643) that bundle four SATA connections into one cable.

    3. The Power Supply

    You don’t need anything crazy here. A standard, reliable ATX or SFX power supply from a brand like Corsair or Seasonic will do the job perfectly. Just make sure it has enough SATA power connectors to feed your backplane.

    4. The “Brain” (or lack thereof)

    This is what makes a DAS different from a NAS (Network Attached Storage). A NAS is a standalone computer. A DAS is just a “dumb” box of drives that attaches to another computer.

    To make this work, you need two things:

    • Inside your DIY DAS: You need a way to connect all the drives to an external port. This is usually done with a SAS Expander card. It takes all the connections from the backplane and funnels them into one or two external SAS ports. Think of it as a USB hub, but for hard drives.
    • Inside your main server: You need a Host Bus Adapter (HBA). This is a PCIe card that gives your server the external SAS port needed to talk to your new drive enclosure. A used LSI card from eBay, flashed to “IT Mode,” is the go-to for the home lab community.

    You connect the two with a simple external SAS cable (like an SFF-8088 or SFF-8644 cable), and suddenly your main server sees all 16 drives as if they were plugged in directly.

    Is It Worth It?

    So, back to the original question. Does a short-depth, 16-bay JBOD exist?

    Yes, but probably not in the way most people hope. The affordable, easy, off-the-shelf option is mostly a myth.

    Instead, you have a choice:
    * Buy: Spend a lot of money on a polished, pre-made unit for a plug-and-play experience.
    * Build: Spend some time and effort to create a custom solution that perfectly fits your rack and your budget.

    For me, the answer is almost always to build. It’s more rewarding, you learn a ton, and you end up with a system you know inside and out. It might seem like a hassle, but the hunt for the perfect parts is half the fun.

  • Should You Put a Fire Detector in Your Garage? (The Answer Is Yes)

    Should You Put a Fire Detector in Your Garage? (The Answer Is Yes)

    Wondering if you need a fire detector in your garage? Learn why a heat detector is a better choice than a smoke detector and explore your options.

    I was cleaning out my garage the other day, and it hit me just how much flammable stuff is in there. Between the lawnmower’s gas can, leftover paint thinners, and oily rags I probably shouldn’t have kept, it’s a bit of a fire hazard.

    That got me thinking. I have smoke detectors all over my house, but what about the garage? It’s probably the one place where a fire is most likely to start.

    So, I started looking into it. My first thought was to just stick a regular smoke detector on the ceiling and call it a day. But it turns out, that’s not the best idea.

    Why a Regular Smoke Detector Isn’t Great for a Garage

    Your garage is a dusty, dirty place. Car exhaust, sawdust from a project, and even big temperature swings can trigger false alarms with a standard smoke detector. Imagine your alarm going off every time you start your car on a cold morning. No, thank you.

    That’s when I stumbled upon heat detectors.

    They work differently. Instead of “sniffing” the air for smoke particles, heat detectors do exactly what the name suggests: they detect a rapid rise in temperature. They don’t care about dust or fumes. They only care about heat. This makes them perfect for places like garages, workshops, or attics.

    A heat detector won’t go off because of a little exhaust smoke. But it will go off if a fire starts and the temperature suddenly skyrockets. Fewer false alarms, but you still get the protection you need.

    So, What Are the Options?

    Once I decided a heat detector was the way to go, I had to figure out which one to get. There are a few different paths you can take.

    1. The Simple, Standalone Detector

    This is the easiest option. You can buy a battery-powered heat detector from most big-box hardware stores. You just screw it to the ceiling, and you’re done. It has its own loud alarm, just like a smoke detector.

    * Pros: Cheap, easy to install, no wiring needed.
    * Cons: It only alerts you if you’re home and can hear it. If you’re away, you won’t know there’s a problem.

    2. The Interconnected System

    If you have a modern, hardwired smoke detector system in your house, you can often add a compatible heat detector to it. When the heat detector in the garage goes off, all the alarms in your house will sound.

    * Pros: Whole-house alert system. More reliable since it’s hardwired (with battery backup).
    * Cons: More complex to install. You might need an electrician, and you have to find a model that works with your existing system.

    3. The Smart Home Route (Z-Wave, Zigbee, etc.)

    This is where things get interesting for smart home fans. I started looking for a Z-Wave heat detector, thinking I could connect it to my smart home hub. That way, I’d get an alert on my phone no matter where I am.

    The options for dedicated Z-Wave heat detectors are surprisingly thin. But I found a clever workaround that many people use:

    You can get a “listening” device or a sensor that is designed to detect the specific sound frequency of a standard alarm. For example, you can place a Z-Wave smoke/CO “listener” near a basic, standalone heat detector. If the heat alarm goes off, the listener hears it and triggers your smart home automations—sending a notification to your phone, flashing your lights, you name it.
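    If the hub in question happens to be Home Assistant (just one example of a hub that speaks Z-Wave; the entity ID and notify service name below are hypothetical and will differ on your setup), the listener-to-phone automation is only a few lines:

```yaml
automation:
  - alias: "Garage heat alarm alert"
    trigger:
      - platform: state
        entity_id: binary_sensor.garage_alarm_listener  # the Z-Wave "listener" device
        to: "on"
    action:
      - service: notify.mobile_app_my_phone
        data:
          title: "Garage alert"
          message: "The heat detector in the garage is sounding!"
```

    The same trigger can just as easily flash the lights or sound every speaker in the house.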

    There are also some smoke detectors with built-in heat sensors that are designed to be less prone to false alarms from dust and bugs, and these often come in smart versions. While not a pure “heat detector,” they’re a solid smart-home-friendly option for the garage.

    * Pros: Get alerts on your phone, integrate with other smart devices, peace of mind when you’re away.
    * Cons: Can be more expensive and require a bit of tech-savviness to set up.

    What Did I End Up Doing?

    For now, I’m leaning toward the interconnected route, as my system is due for an upgrade anyway. But that smart home listener idea is really compelling. The thought of getting an alert on my phone if something happens while I’m out is a huge plus.

    Ultimately, doing something is better than doing nothing. The garage is often overlooked, but it’s a critical spot to monitor. A simple heat detector is a small investment for a whole lot of peace of mind.

    So, take a look at your garage. If it’s anything like mine, it might be time to think about adding that extra layer of safety.

  • That Used Server Won’t Wreck Your Power Bill (Here’s the Math)

    That Used Server Won’t Wreck Your Power Bill (Here’s the Math)

    Curious about the real cost of a home server with used enterprise gear? I break down the power bill, noise, and hardware costs. It’s cheaper than you think.

    You’ve probably heard the warnings. Maybe you were scrolling through forums or chatting with tech-savvy friends about building your own home server. And someone, with the best intentions, probably told you to steer clear of used enterprise gear.

    “It’s too loud!”
    “It’s a power-hungry monster!”
    “Your electricity bill will go through the roof!”

    I heard it all, too. When I was piecing together my own home lab, I was surprised how many people were convinced that buying powerful, second-hand server hardware was a terrible idea. The common wisdom is that it’s just not practical for a home setting.

    But I love a good deal, and the prices for used enterprise parts are ridiculously low for the performance you get. So I decided to ignore the warnings and do the math myself.

    So, What’s the Real Cost?

    Let’s get right to it. The biggest fear is the power bill. People picture a giant, humming rack that spins their electric meter like a top. The reality, at least for my setup, is much less dramatic.

    My server is built from a single-socket motherboard and a solid Xeon processor—all bought for pennies on the dollar from eBay. I measured its power draw. Most of the time, when it’s just sitting there, it pulls about 200 watts.

    When it’s working harder—maybe transcoding a video for movie night or running a backup—that number might climb to 250 watts. I’ve never even seen it hit 300 watts.

    Okay, but what does that mean in dollars and cents?

    Where I live, the average cost of electricity is about 13.5 cents per kilowatt-hour (kWh). Let’s be pessimistic and assume my server runs at that higher 250-watt level, 24 hours a day, 7 days a week.

    The math looks like this:

    • 0.25 kW * 24 hours/day * 365 days/year = 2,190 kWh per year
    • 2,190 kWh * $0.135/kWh = $295.65 per year

    That comes out to just under $25 a month.
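    The back-of-the-envelope math is easy to reuse for your own wattage and electricity rate. A quick sketch (plug in your own numbers):

```python
def yearly_power_cost(watts, price_per_kwh):
    """Cost of running a device 24/7 for a year at a constant draw."""
    kwh_per_year = (watts / 1000) * 24 * 365
    return kwh_per_year * price_per_kwh

# The pessimistic scenario from above: 250 W around the clock at 13.5 cents/kWh.
yearly = yearly_power_cost(250, 0.135)
print(f"${yearly:.2f} per year (${yearly / 12:.2f} per month)")
```

    Running it confirms the figure above: just under $25 a month at 250 watts.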

    Honestly, that’s nothing. It’s far, far less than the cost of the streaming and storage subscriptions I no longer need because of my server. Even if my power costs were double, or if the server used way more energy, it would still make financial sense. For the value I get, I’d be perfectly happy paying up to $100 a month.

    But Isn’t It Loud?

    The second myth is about noise. People think “enterprise” and imagine a jet engine in their basement. Again, not necessarily true.

    It all comes down to how you build it. I didn’t stuff my server into a tiny 1U chassis, which uses small, high-RPM fans that have to scream to move any air. Instead, I built it inside a big, spacious 4U server case.

    This lets me use large 120mm fans that spin slowly and quietly. The result? The server makes less noise than my gaming PC when its graphics card fans spin up. The loudest parts are the hard drives clicking away, and there’s not much you can do about that with any computer.

    Here’s the Gear, if You’re Curious

    This powerful, quiet, and affordable setup wasn’t built with magic. It was built with smart shopping on eBay. Here’s a quick look at the core components and what I paid for them:

    • Motherboard: Supermicro X11SPI-TF – $200
    • CPU: Intel Xeon 6240 – $50 (Yes, really)
    • CPU Cooler: A decent air cooler – $60
    • HBA Card (for storage): 3008-16i HBA – $60
    • RAM: 192GB of DDR4 ECC – I had this already, but you can get 32GB sticks for around $25 each online.

    Before storage, the grand total was just over $500. For that price, I have a machine that can handle anything I throw at it, from running virtual machines to managing a massive library of Linux ISOs.

    So, next time someone warns you away from used enterprise gear, just smile and nod. It doesn’t have to be a power-hungry, noisy beast. With a little bit of research and simple math, you can build an incredibly powerful home server that’s surprisingly cheap to run. Don’t let the myths scare you.

  • My New PC Runs on Nostalgia (and a Re-Modeled ’56)

    My New PC Runs on Nostalgia (and a Re-Modeled ’56)

    Tired of boring PC towers? Discover how to build a powerful computer inside a piece of vintage furniture for a unique, stealthy, and stylish homelab setup.

    I love technology. I really do. But I’ve never loved the way most of it looks. For years, my home office has been a battleground between function and style. On one side, the powerful computer I need for work and play. On the other, the clean, warm, and vaguely mid-century aesthetic I want for my home.

    The two have never gotten along.

    The typical computer tower is a metal box. It can be black, white, or lit up like a traveling carnival, but it’s still a box. It screams “computer” and doesn’t blend with wood, plants, and soft lighting. I was tired of it. So I decided to build a new homelab with one primary goal: it had to be completely invisible.

    Meet the ARM56

    I started hunting for furniture. Not desks, not shelves, but a proper piece of vintage furniture that could secretly house a computer. After a few weeks of searching through thrift stores and online marketplaces, I found it: a beautiful, solid wood media console from 1956.

    It had great lines, that unmistakable smell of old, well-cared-for wood, and just the right amount of space inside. This became the heart of the project, which I’ve nicknamed the “ARM56”—short for A Re-Modeled ’56.

    The idea was simple. Gut the inside, keep the outside pristine, and build a powerful PC within its wooden shell.

    The Challenge: Making It Actually Work

    Hiding a computer is easy. Making it run well without melting is hard. My two biggest challenges were airflow and cable management.

    1. Keeping It Cool

    Computers generate a lot of heat. A wooden box is basically an oven. Without proper airflow, I’d have a very expensive space heater that couldn’t even run a web browser.

    • Intake and Exhaust: I carefully measured and drilled a series of holes in the back panel, which faces the wall. I used a pattern that looked almost decorative, so it wouldn’t be an eyesore.
    • Silent Fans: I mounted two large, ultra-quiet fans inside—one for intake, one for exhaust. They pull cool air in from the bottom back, guide it over the components, and push hot air out the top back. They run at a low RPM, so you can’t even hear them.
    • Strategic Placement: The motherboard and processor are mounted in the direct path of the airflow. The power supply, which has its own fan, is positioned to exhaust its heat directly out the back.

    2. Hiding the Wires

    The second problem was the mess of cables. The beauty of a vintage cabinet is its clean exterior. The last thing I wanted was a tangle of power cords, USB cables, and display wires ruining the illusion.

    I drilled one discreet hole in the floor of the cabinet, right behind a leg where it’s impossible to see. All the necessary cables are bundled together and run through there, hidden from view.

    The Fun Part: The Specs (and the Puns)

    Now for the fun stuff. The setup is running a custom “Windows Sill” image. It’s a cheeky name, I know, but it feels appropriate since the whole thing sits right by my window.

    But the absolute best feature? The storage.

    I’ve got over 1,000 square feet of storage.

    …Wait, no, that’s my apartment. Inside the cabinet, though, there’s more than enough room for a few terabytes of digital storage, with plenty of physical space left over for my old records and a bottle of whiskey. That’s a kind of storage you can’t get with a standard PC tower.

    More Than Just a Computer

    Now, when friends come over, they compliment the beautiful mid-century console in the living room. They have no idea it’s the machine running my media server, handling my backups, and letting me tinker with new software.

    It’s a quiet, powerful, and stealthy machine hiding in plain sight.

    This project was a reminder that technology doesn’t have to exist in a sterile, beige box. You can integrate it into your home and your life in a way that feels personal and creative. You just have to think outside the box—or in this case, inside a much older, more interesting one.

  • I Closed All My Firewall Ports. Here’s What I Do for Security Instead.

    I Closed All My Firewall Ports. Here’s What I Do for Security Instead.

    Learn how to replace traditional firewall rules and open ports with Cloudflare Zero Trust for your homelab. A simpler, more secure approach to self-hosting.

    I have a confession to make. For years, my homelab setup was a patchwork of open ports, dynamic DNS scripts, and a constant, low-level anxiety. Every service I wanted to access from outside my home—be it Plex, a file server, or a new app I was testing—meant punching another hole in my firewall.

    Each hole felt like a tiny, unlocked window. I’d tell myself it was fine, that the services were secure. But I was always aware that I was exposing my home network to the entire internet, hoping no one would jiggle the handle. The constant IP address changes from my ISP didn’t help, adding another layer of clunky fixes to the mix.

    Then I switched to Cloudflare Zero Trust, and I closed every single one of those ports. Forever. The relief was immediate. No more open ports. No more dynamic DNS headaches. It just worked.

    But after the initial honeymoon phase, a new question popped up. If my firewall isn’t managing the traffic anymore, how do I set up the rules I used to rely on? You know, the basics like blocking traffic from certain countries or only allowing specific connections.

    I hit a wall. It seemed like everyone had a different opinion or a slightly different method. It turns out, the solution isn’t about recreating your old firewall; it’s about adopting a new, simpler mindset.

    The Big Wins: Why This is Better Than an Old-School Firewall

    First, let’s just touch on why this move is so great. It boils down to two things for me.

    • No More Open Ports: This is the big one. With a Cloudflare Tunnel, you’re not opening any inbound ports on your router or firewall. Instead, a lightweight service runs on your server and creates a secure, outbound-only connection to Cloudflare’s network. Nothing on your home network is exposed. It’s like having a secure, private bridge that only you know how to access, instead of leaving your front door unlocked.

    • Dynamic IPs Don’t Matter Anymore: My ISP can change my home’s IP address daily, and I wouldn’t even notice. Because the connection is made from my server to Cloudflare, my public IP address is irrelevant. Cloudflare handles everything, giving my services a stable, consistent address.

    This is a fundamentally more secure and robust way to expose services. But it does leave that one big question: What about the rules?

    Recreating Firewall Rules: The Zero Trust Way

    Here’s the mental shift you have to make: You’re no longer managing traffic at the network level (like with a firewall). You’re now managing access at the application level.

    Instead of a blanket rule like “block all traffic from outside the US,” you set a rule that says “only people in the US can access this specific application.” It’s more granular, and ultimately, more secure.

    Here’s a simple way to think about setting this up.

    1. Start with “Block All”

    By default, Cloudflare Zero Trust blocks everything. No one can access the applications you’ve set up through your tunnel unless you explicitly allow them. This is the core principle of “Zero Trust”—trust no one by default.

    2. Add Your Access Policies

    For each application (like your dashboard, your file server, etc.), you create an “Access Policy.” This is where you define who is allowed in. This is your new “firewall rules” dashboard.

    Inside a policy, you can get very specific. Want to block traffic from outside your country? Easy.

    • Create a policy for your application.
    • Set the “Action” to Allow.
    • In the “Include” rules, add a “Countries” selector and choose your country.

    That’s it. Now, only traffic originating from your selected country can even see the login page for that application. You can just as easily use an “Exclude” rule to block specific countries you’re worried about.
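    For the curious, the same country rule can also be created programmatically through Cloudflare’s Access API instead of the dashboard. This is a rough sketch of the policy JSON (the name is made up, and the exact field names are worth double-checking against the current API docs):

```json
{
  "name": "US-only access to my dashboard",
  "decision": "allow",
  "include": [
    { "geo": { "country_code": "US" } }
  ]
}
```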

    3. It’s About “Who,” Not Just “Where”

    But you can go so much further. This is where it gets really powerful. You can add rules that require a user to log in with a specific email address, use a hardware security key (like a Yubikey), or be on a specific IP address (like your work office).

    A typical policy might look like this:

    • Allow traffic if ALL of the following are true:
      • The request is from the United States.
      • The user’s email ends in @my-personal-domain.com.
      • The user successfully authenticates with their GitHub account.

    You’re no longer just checking an IP address. You’re verifying the person and the context of their request.

    What About Port Ranges?

    This is one of the biggest differences. In the old world, you might open a range of ports. With Zero Trust, you don’t think about ports anymore.

    You create a tunnel for a specific service running on a specific internal port. For example, `http://localhost:8080`. You then assign a public hostname to it, like `my-dashboard.my-domain.com`.

    The security policy is applied to that hostname, not the port. The port is never exposed to the internet. If you want to secure another service on another port, you just add it as a new hostname and create a new policy for it. You never have to think about “allowing a port range” again, because the concept is obsolete in this model.
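    In practice, that hostname-to-service mapping lives in cloudflared’s config file. Here’s a sketch of what one can look like (the tunnel name, credentials path, and second hostname are placeholders, not my real setup):

```yaml
# config.yml for cloudflared
tunnel: my-homelab-tunnel
credentials-file: /home/user/.cloudflared/my-homelab-tunnel.json

ingress:
  # Each internal service gets its own public hostname; no ports are opened.
  - hostname: my-dashboard.my-domain.com
    service: http://localhost:8080
  - hostname: files.my-domain.com
    service: http://localhost:9000
  # Anything that doesn't match a hostname above gets a 404.
  - service: http_status:404
```

    Adding a new service is just another two-line entry plus an Access policy for its hostname.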

    It’s a Shift, But It’s Worth It

    Moving from a traditional firewall to a Zero Trust model felt like a step into the future. It simplified my setup, removed a ton of security anxiety, and ultimately gave me more meaningful control over who can access my stuff.

    There’s a small learning curve, and it requires you to change how you think about access. You move from broad network rules to specific application rules. But once it clicks, you’ll wonder why you ever did it the old way. You haven’t lost your firewall rules; you’ve replaced them with something much better.